404 points by walterbell 1 day ago | 213 comments
Aurornis 21 hours ago
Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.
In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.
> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart
I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.
karaterobot 20 hours ago
He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.
potato3732842 9 hours ago
These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.
jerf 9 hours ago
The LLM-only AI just hands you a fully-formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider whether it's wrong. I'm deliberately cultivating an instinctive distrust of LLM-only AI, and I'd suggest others do the same, because even if that distrust is too harsh on a percentage basis, you need it as a cognitive hack to remember that you have to check everything coming out of them... not because they are never right, but precisely because they are often right, yet nowhere near 100% right! If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem either. Right now they sit in the maximum danger zone of correctness: right often enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.
potato3732842 8 hours ago
I've encountered enough instances in subjects I'm familiar with where the "I'm 14 and I just googled it for you" answer that's right 51% of the time and dangerously wrong the other 49% is highly upvoted, while the "so I've been here before and this is kind of nuanced with a lot of moving pieces, you'll need to understand the following X, the general gist of Y is..." take that's more correct is heavily downvoted, that I feel justified in making the "safe" assumption that this is how all subjects work.
On one hand at least Reddit shows you the downvoted comment if you look and you can go independently verify what they have to say.
But on the other hand the LLM is instant and won't screech at you if you ask it to cite sources.
jart 20 hours ago
It's always more comfortable for people to blame the thing rather than the person.
ZYbCRq22HbJ2y7 18 hours ago
Screaming "no one is evil, it's just markets!" probably helps people who base their lives on exploiting the weak sleep better at night.
ZYbCRq22HbJ2y7 17 hours ago
You can look to the prohibition period for historical analogies with alcohol, plenty of enterprising humans there.
potato3732842 9 hours ago
Basically it's a response to regulatory reality, little different from soy-based wire insulation in automobiles. I'm sure they'd love to deliver pure opium and wire insulation rodents don't like to eat, but that's just not possible while remaining in the black.
jplusequalt 6 hours ago
Malaise exists at an individual level, but it doesn't transform into social malaise until someone comes in to exploit those people's addictions for profit.
Animats 15 hours ago
Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.
The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.
DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.
[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...
[2] https://apnews.com/article/us-intelligence-services-ai-model...
D_Alex 14 hours ago
Sometimes this is just sloppy methodology. Other times it is intentional.
B1FF_PSUVM 11 hours ago
... or just to know what they seem to be thinking, which is also important.
jruohonen 1 day ago
"""
• Instead of forming hypotheses, users asked the AI for ideas.
• Instead of validating sources, they assumed the AI had already done so.
• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.
This isn’t hypothetical. This is happening now, in real-world workflows.
"""
Amen, and OSINT is hardly unique in this respect.
And implicitly related, philosophically:
johnnyanmac 12 hours ago
Yes, that's part of why AI has its bad rep. It has uses for streamlining workflows, but people are treating it like an oracle, when it very, very clearly is not.
Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.
sanarothe 6 hours ago
What Dutch OSINT Guy was saying here resonates with me for sure - the act of pulling a blurry image into photo editing software, the use of the manipulation tools; there seems to be something about those little acts that is an essential piece of thinking through a problem.
I'm making a process flow map for the manufacturing line we're standing up for a new product. I already have a process flow from the contract manufacturer, but that's only helpful as a reference. To understand the process, I gotta spend the time writing out the subassemblies in Visio, putting little reference pictures of the drawings next to each block, taking the care to link the connections and put things in order.
Ideas and questions seem to come out of those little spaces. Maybe it's just giving our subconscious a chance to speak finally, hah.
L.M. Sacasas writes a lot about this from a 'spirit' point of view on [The Convivial Society](https://theconvivialsociety.substack.com/) - that the little moments of rote work - putting the dishes away, weeding the garden, walking the dog - are all essential parts of life. Taking care of the mundane is living, and we must attend to it with care and gratitude.