7 points by rustastra 15 hours ago | 11 comments
Terr_ 15 hours ago
Both falsely imply that there's a solvable mechanical difference going on between results people like versus results people dislike.
ggm 14 hours ago
Terr_ 1 hour ago
karmakaze 9 hours ago
Edit: "mis-remembering" is another good term as it's using its vast training to output tokens and sometimes it maps them wrong.
bell-cot 14 hours ago
ggm 13 hours ago
I continue to respect Hinton; he chooses his words carefully, I think.
seanhunter 11 hours ago
“It’s just completely obvious that within five years deep learning is going to do better than radiologists.… It might be 10 years, but we’ve got plenty of radiologists already.” - Geoffrey Hinton 2016 [1]
He missed the 5-year deadline by a lot, and it currently looks extremely unlikely that his 10-year deadline will hold either.
[1] https://newrepublic.com/article/187203/ai-radiology-geoffrey...
meltyness 9 hours ago
The current batch is trained on just text afaik.
perfmode 11 hours ago
What rational agent is infallible?
elicksaur 9 hours ago
LLMs also have no grounding in abstract concepts such as true and false. This means they output things stochastically rather than logically. Sometimes people are illogical, but people are also durable, learn over time, and learn extremely quickly. Current LLMs only learn once, so they easily get stuck in loops and pitfalls when they produce output that makes no sense to the human reader. The LLM can’t “understand” that the output makes no sense because it doesn’t “understand” anything in the sense that humans understand things.
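To make the "stochastically rather than logically" point concrete, here is a minimal sketch of next-token sampling, the mechanism behind an LLM's output. The vocabulary and logit values are made up for illustration; the point is that the model draws from a probability distribution, so a fluent-but-wrong token can always be emitted, no matter how confident the prose sounds.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Convert raw scores (logits) into a probability distribution
        # and sample from it -- there is no truth check, only likelihood.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs), probs

    # Hypothetical vocabulary and logits: "Paris" is most likely,
    # but "Lyon" (or even "pizza") can still be sampled occasionally.
    vocab = ["Paris", "Lyon", "pizza"]
    logits = [4.0, 2.5, 0.1]
    token_id, probs = sample_next_token(logits)
    print(dict(zip(vocab, probs.round(3))), "->", vocab[token_id])

Raising the temperature flattens the distribution and makes the unlikely tokens more probable; lowering it makes the output more deterministic, but the model is sampling either way.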
krapp 11 hours ago
For some reason, LLMs are the exception. It doesn't matter how much they hallucinate, confabulate, what have you, someone will always, almost reflexively, dismiss any criticism as irrelevant because "humans do the same thing." Even though human beings who hallucinate as often as LLMs do would be committed to asylums.
In general terms, the more mission critical a technology is, the more reliable it needs to be. Given that we appear to intend to integrate LLMs into every aspect of human society as aggressively as possible, I don't believe it's unreasonable to expect it to be more reliable than a sociopathic dementia patient with Munchausen's syndrome.
But that's just me. I don't look forward to a future in which my prescriptions are written by software agents that tend to make up illnesses and symptoms, and filled by software agents that can't do basic math, and it's all considered OK because the premise that humans would always be as bad or worse, and shouldn't be trusted with even basic autonomy, has become so normalized that we accept as inevitable the abuse of the unstable technologies rolled out to deprecate us from society. Apparently that just makes me a Luddite. IDK.