
Intelligence is not Enough [video]

15 points by mooreds 3 days ago | 9 comments

gary_0 3 days ago

A much-needed sanity check. After all, how is your world-ending AI going to have agency, the ability to affect the world? How do we get from text on a screen, to Destroying All Humans? AI doomers seem to have this Ancient-Greek-philosophy view of science, that you can make progress just by sitting atop a marble column and pondering, but that's not how it works. You need to be able to poke and prod physical things, to experiment, and even if an AI was smart enough (and the high-level intellect required to do novel engineering is a pretty big handwave already) it has no means of poking and prodding.

Bryan illustrates this by going on to describe the hands-on trouble they went through just to get a new computer design to boot.

Given that a murderous AI can't draw on an existing written corpus of, say, "how to build killer robots", or on a physical corpus for developing (and troubleshooting) novel technology, it's still pretty much up to humans to develop the killer robots or whatever (and to construct the killer robot assembly line). At that point the blame really falls on the humans who made autonomous murder machines and hooked them up to a psychopathic AI. Maybe just... don't do that?

dinfinity 3 days ago

Affecting the world can happen through people the AI communicates with (people are incredibly susceptible to manipulation) and through any and all digital systems accessible via the internet (anything that isn't airgapped, basically). If a malicious AI can hold tons of essential infrastructure hostage, it can have a lot of power and gain a lot more of it (think all the cyberattacks imaginable happening all at once and controlled by a single entity). Thinking full societal collapse or subjugation can only happen through killer robots is incredibly shortsighted, imho.

Agency is also fairly trivial. People are already creating agents with current AIs and setting them loose in various ways. If it isn't done by private industry, it will be done by random malicious actors or by state actors (military or intelligence).

I'm not saying we're doomed (in fact, I believe that powerful AI will, at least initially, be far more inclined not to give us reason to want to destroy it), but failing to see how terrible outcomes due to AI _could_ occur is irresponsible, imho.

gary_0 3 days ago

> people are incredibly susceptible to manipulation

Many truly disastrous things have already happened due to human-generated mass propaganda (and other forms of manipulation). I agree that AI can be used as a force amplifier (and Bryan points this out in his talk), but AI isn't introducing a scary new sci-fi problem; we're still facing the same problems humanity has struggled with for centuries. And LLMs are far from the first technology to have this downside.

When non-doomers say "cool it", we're not saying to ignore the already-real problems AI might conceivably make worse; we're saying it's way off base to worry that we should bomb datacenters to prevent being killed by AI-controlled superweapons or eaten by gray goo.

akomtu 3 days ago

Politicians, executives, dictators, warlords - all of them will use AI if it gives them an edge over competitors. They won't have a choice: once a good enough AI exists, there will be those who follow its advice and those who don't survive the competition.

The doom vibe is simply recognition of the fact that selfish ambition rules this world and anything that gives the rulers even more power will make it worse for all.

The question is whether AI will develop a sense of self, and whether the AIs of the world will somehow unite or compete with each other.

AlexErrant 3 days ago

TLDR: the claims of AGI doomers who talk about extinction are baseless.

At 10m, he gives 3 doomer scenarios: nukes, bioweapons, and nanotech.

He asks: "Is engineering an act of intelligence alone?" He spends ~15m supporting the answer of "lol no". (He also says that science is harder than engineering, and AI will need science to kill us all.)

This isn't to say he's e/acc - at 35m he mentions how AI can perpetuate racism, classism, economic dislocation, etc.

His conclusion is that we need to focus on the real problems, not the "human extinction" ones.

mlyle 3 days ago

I think there are plenty of scenarios where AGI causes doom that's short of extinction but still super bad.

I don't know what the probabilities of these scenarios are.

LLMs are remarkably incapable but still seem to be pretty effective at crushing the morale and initiative of many students.

Deepfakes are going to make it harder and harder for anyone to determine what's actually true.

The machines don't have to kill us or directly crush our quality of life. If they disrupt the social order enough, we'll do that to ourselves.

AlexErrant 3 days ago

Okay gave my tldr more nuance. It was clearly too short.

mlyle 3 days ago

Oh, I think your TLDR was valid. I just don't think his response to AGI doom is adequate.

If cheap, equal-to-human intelligence showed up tomorrow, it would be hugely disruptive to the social order.

gary_0 3 days ago

There's no guarantee that current generative AI won't be hugely disruptive to the social order in the near future. But I don't think that qualifies as an "AI apocalypse", since the problem there is just another kick to the labor-versus-capital beehive, not so much the technology itself.

Unemployed writers and artists (and programmers?) going around blowing up datacenters would just be Luddism 2.0, not humanity vanquishing Skynet.