Hacker Remix

The slow collapse of critical thinking in OSINT due to AI

404 points by walterbell 1 day ago | 213 comments

Aurornis 21 hours ago

> Participants weren’t lazy. They were experienced professionals.

Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.

In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.

> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart

I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.

karaterobot 20 hours ago

> In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources

He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.

potato3732842 9 hours ago

Having a surface level understanding of what you're looking at is a huge part of OSINT.

These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.

jerf 9 hours ago

At least if you're on Reddit you've got a good chance of Cunningham's Law[1] giving you a shot at realizing it's not cut and dried. In this case, I refer to what you might call a reduced-strength version of Cunningham's Law, which I would phrase as "The best way to get the right answer on the Internet is not to ask a question; it's to post *what someone somewhere thinks is* the wrong answer," my added strength reduction in italics. At least if you stumble into a conversation where people are arguing, it is hard to avoid applying some critical thought to the situation to parse out who is correct.

The LLM-only AI just hands you a fully-formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider whether it's wrong. I'm actually deliberately cultivating an instinctive distrust of LLM-only AI, and would suggest it to other people, because even though it may be too critical on a percentage basis, you need it as a cognitive hack to remember to check everything coming out of them... not because they are never right, but precisely because they are often right, yet nowhere near 100% right! If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem, but right now they sit in that maximum danger zone of correctness: right often enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.

[1]: https://en.wikipedia.org/wiki/Ward_Cunningham#Law

potato3732842 8 hours ago

What you're describing for Reddit is farcically charitable except in cases where you could just google it yourself. What you're describing for the LLM is what Reddit does when any judgement is involved.

I've encountered enough instances in subjects I am familiar with where the "I'm 14 and I just googled it for you" solution that's right 51% of the time and dangerously wrong the other 49% is highly upvoted, while the "so I've been here before and this is kind of nuanced with a lot of moving pieces, you'll need to understand the following X, the general gist of Y is..." type take that's more correct is highly downvoted, that I feel justified in making the "safe" assumption that this is how all subjects work.

On one hand at least Reddit shows you the downvoted comment if you look and you can go independently verify what they have to say.

But on the other hand the LLM is instant and won't screech at you if you ask it to cite sources.

iszomer 5 hours ago

That is why it is ideal to ask it double-sided questions to test its biases as well as your own. Simply googling it is not enough when most people don't think to customize their search anyway, compounded by the fact that indexed sources may have changed or have been deprecated over time.

throwaway29812 8 hours ago

[dead]

low_tech_love 16 hours ago

The pull is too strong, especially when you factor in the fact that (a) the competition is doing it and (b) the recipients of such outcomes (reports, etc) are not strict enough to care whether AI was used or not. In this situation, no matter how smart you are, not using the new tool of the trade would be basically career suicide.

torginus 12 hours ago

And these people in positions of 'responsibility' always need someone or something to point to when shit goes sideways so they might as well.

jart 20 hours ago

Yeah it's similar to how Facebook is blamed for social malaise. Or how alcohol was blamed before that.

It's always more comfortable for people to blame the thing rather than the person.

InitialLastName 20 hours ago

More than one thing can be causing problems in a society, and enterprising humans of lesser scruples have a long history of preying on the weaknesses of others for profit.

jart 20 hours ago

Enterprising humans have a long history of giving people what they desire, while refraining from judging what's best for them.

ZYbCRq22HbJ2y7 18 hours ago

Ah yeah, fentanyl adulterators, what great benefactors of society.

Screaming "no one is evil, it's just markets!" probably helps people who base their lives on exploiting the weak sleep better at night.

https://en.wikipedia.org/wiki/Common_good

jart 17 hours ago

No one desires adulterated fentanyl.

ZYbCRq22HbJ2y7 17 hours ago

No one has desire for adulteration, but they have a desire for an opiate high, and are willing to accept adulteration as a side effect.

You can look to the prohibition period for historical analogies with alcohol, plenty of enterprising humans there.

harperlee 11 hours ago

Fentanyl adulterators, market creators and resellers certainly do, for higher margin selling and/or increased volume.

potato3732842 9 hours ago

The traffickers looking to pack more punch into each shipment that the government fails to intercept do.

Basically it's a response to regulatory reality, little different from soy wire insulation in automobiles. I'm sure they'd love to deliver pure opium and wire rodents don't like to eat but that's just not possible while remaining in the black.

collingreen 6 hours ago

This is a fine statement on its own but a gross reply to the parent.

PeeMcGee 16 hours ago

I like the facebook comparison, but the difference is you don't have to use facebook to make money and survive. When the thing is a giant noisemaker crapping out trash that screws up everyone else's work (and thus their livelihood), it becomes a lot more than just some nuisance you can brush away.

friendzis 13 hours ago

If you are in the news business you basically have to.

jplusequalt 6 hours ago

Marketing has a powerful effect. Look at how the decrease in smoking coincided with the decrease in smoking advertisement (and now look at the uptick in vaping due to the marketing as a replacement for smoking).

Malaise exists at an individual level, but it doesn't transform into social malaise until someone comes in to exploit those people's addictions for profit.

itishappy 8 hours ago

I think humans actually tend to prefer blaming individuals rather than addressing societal harms, but they're not in any way mutually exclusive.

Animats 15 hours ago

The big problem in open source intelligence is not in-depth analysis. It's finding something worth looking at in a flood of info.

Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.

The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.

DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.

[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...

[2] https://apnews.com/article/us-intelligence-services-ai-model...

D_Alex 14 hours ago

The really big problem in open source intelligence has been for some time that data to support just about anything can be found. OSINT investigations start with a premise, look for data that supports the premise and rarely look for data that contradicts it.

Sometimes this is just sloppy methodology. Other times it is intentional.

dughnut 9 hours ago

I think "OSINT" makes it sound like a serious military operation, but political opposition research is a much more accurate term for this sort of thing.

B1FF_PSUVM 11 hours ago

> listen to Radio Albania just in case somebody said something important

... or just to know what they seem to be thinking, which is also important.

euroderf 8 hours ago

I got Radio Tirana once (1990-ish) on my shortwave. The program informed me, something to the effect, that Albania is often known as the Switzerland of the Balkans because of its crystal-clear mountain lakes.

jruohonen 1 day ago

"""

• Instead of forming hypotheses, users asked the AI for ideas.

• Instead of validating sources, they assumed the AI had already done so.

• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.

This isn’t hypothetical. This is happening now, in real-world workflows.

"""

Amen, and OSINT is hardly unique in this respect.

And implicitly related, philosophically:

https://news.ycombinator.com/item?id=43561654

johnnyanmac 12 hours ago

>This isn’t hypothetical. This is happening now, in real-world workflows.

Yes, that's part of why AI has its bad rep. It has its uses in streamlining workflows, but people are treating it like an oracle. When it very, very, very clearly is not.

Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.

cmiles74 24 hours ago

Anyone using these tools would do well to take this article to heart.

mr_toad 12 hours ago

I think there’s a lot of people who use these tools because they don’t like to read.

gneuron 22 hours ago

Reads like it was written by AI.

sanarothe 6 hours ago

I think there's something about the physical acts and moments of writing out or typing out the words, or doing the analysis, etc. Writing 'our', backspacing, then forward again. Writing out a word but skipping two letters ahead, crossing out, starting again. Stopping mid paragraph to have a sip of coffee.

What Dutch OSINT Guy was saying here resonates with me for sure - the act of taking a blurry image into the photo editing software, the use of the manipulation tools, there seems to be something about those little acts that are an essential piece of thinking through a problem.

I'm making a process flow map for the manufacturing line we're standing up for a new product. I already have a process flow from the contract manufacturer but that's only helpful as reference. To understand the process, I gotta spend the time writing out the subassemblies in Visio, putting little reference pictures of the drawings next to the block, putting the care into linking the connections and putting things in order.

Ideas and questions seem to come out from those little spaces. Maybe it's just finally giving our subconscious a chance to speak, hah.

L.M. Sacasas writes a lot about this from a 'spirit' point of view on [The Convivial Society](https://theconvivialsociety.substack.com/) - that the little moments of rote work - putting the dishes away, weeding the garden, the walking of the dog, these are all essential part of life. Taking care of the mundane is living, and we must attend to them with care and gratitude.