15 points by mooreds 3 days ago | 9 comments
gary_0 3 days ago
Bryan illustrates this by describing some of the hands-on troubleshooting they had to do just to get a new computer design to boot.
A murderous AI won't be able to draw on an existing written corpus of, say, "how to build killer robots", or on a physical corpus to use to develop (and troubleshoot) novel technology. So it's still pretty much up to the humans to develop the killer robots or whatever (and construct the killer robot assembly line), and then the blame is really on the humans for making autonomous murder machines and hooking them up to a psychopathic AI. Maybe just... don't do that?
dinfinity 3 days ago
Agency is also fairly trivial. People are already creating agents with the current AIs and setting them loose in various ways. If it is not done via private industry, it will be via random malicious actors or state actors (military or intelligence).
I'm not saying we're doomed (in fact, I believe that powerful AI will, at least initially, be far more inclined not to give us reason to destroy it), but failing to see how terrible outcomes due to AI _could_ occur is irresponsible, imho.
gary_0 3 days ago
Many truly disastrous things have already happened due to human-generated mass propaganda (and other forms of manipulation). I agree that AI can be used as a force amplifier (and Bryan points this out in his talk), but AI isn't introducing a scary new sci-fi problem; we're still just facing the same problems humanity has struggled with for centuries. And LLMs are far from the first technology to have this downside.
When non-doomers say "cool it", we're not saying to ignore the already real problems AI might conceivably make worse; we're saying it's way off base to conclude we should bomb datacenters to keep from being killed by AI-controlled superweapons or eaten by gray goo.
akomtu 3 days ago
The doom vibe is simply recognition of the fact that selfish ambition rules this world and anything that gives the rulers even more power will make it worse for all.
The question is whether AI will develop a sense of self, and whether the AIs of the world will somehow unite or compete with each other.
AlexErrant 3 days ago
At 10m, he gives 3 doomer scenarios: nukes, bioweapons, and nanotech.
He asks: "Is engineering an act of intelligence alone?" He spends ~15m supporting the answer of "lol no". (He also says that science is harder than engineering, and AI will need science to kill us all.)
This isn't to say he's e/acc - at 35m he mentions how AI can perpetuate racism, economic dislocation, classism, etc.
His conclusion is we need to focus on the real problems, not the "human extinction" ones.
mlyle 3 days ago
I don't know what the probabilities of these scenarios are.
LLMs are remarkably incapable but still seem to be pretty effective at crushing the morale and initiative of many students.
Deepfakes are going to make it harder and harder for anyone to determine what's actually true.
The machines don't have to kill us or directly crush our quality of life. If they disrupt the social order enough, we'll do that to ourselves.
AlexErrant 3 days ago
mlyle 3 days ago
If cheap, equal-to-human intelligence showed up tomorrow, it would be hugely disruptive to the social order.
gary_0 3 days ago
Unemployed writers and artists (and programmers?) going around blowing up datacenters would just be Luddism 2.0, not humanity vanquishing Skynet.