AI doomers believe a scenario similar to Terminator will play out in real life | Credit: phol_66/Shutterstock
Several AI researchers are no longer investing in their retirement accounts because they expect AI to end humanity in the next few decades, according to an article by The Atlantic.
“I just don’t expect the world to be around,” said Nate Soares, president of the Machine Intelligence Research Institute, when asked about contributing to his 401(k). That sentiment is shared by Dan Hendrycks, director of the Center for AI Safety. Hendrycks told The Atlantic that by the time he’d be ready to tap into his retirement, he expects a world in which “everything is fully automated. That is, if we are still around.”
Soares and Hendrycks both lead organisations dedicated to preventing AI from wiping out humanity. They are among many other AI doomers warning, “with rather dramatic flourish”, that bots could one day go rogue—with apocalyptic consequences, the Washington, D.C.-based magazine said. “We’ve run out of time” to implement sufficient technological safeguards, Soares said, adding that the AI industry is simply moving too fast. All that’s left to do is raise the alarm, he said.
AI will become too powerful by 2027
In April, several apocalypse-minded researchers published “AI 2027,” a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. “We’re two years away from something we could lose control over,” said Max Tegmark, an MIT professor and president of the Future of Life Institute, adding that AI companies still have no plan to stop it from happening.
Tegmark’s institute recently gave every frontier AI lab a grade of “D” or “F” for its preparedness to prevent the most existential threats posed by AI.
The Atlantic called the predictions about AI “outlandish”, though it noted that some of the underlying concerns are realistic. The authors of “AI 2027” imagine that in mid-2030 a superintelligent AI will exterminate humans with biological weapons: “Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.”
In early August, Danish psychiatrist Søren Dinesen Østergaard published a paper suggesting that AI chatbots may trigger delusions in individuals prone to psychosis. He acknowledged that his research is still at the hypothetical stage, but warned that “until firmer knowledge has been established, it seems reasonable to recommend cautious use of these chatbots for individuals vulnerable to or suffering from mental illness.” Also of increasing concern is that ChatGPT has given instructions for murder, self-mutilation and devil worship, The Atlantic wrote in a separate article.
Strange and hard-to-explain tendencies
Vice President J. D. Vance has said that he has read “AI 2027,” and multiple other recent reports have advanced similarly alarming predictions, according to the news outlet.
Alongside their rapid capability gains, advanced AI models are exhibiting concerning, strange and hard-to-explain tendencies. ChatGPT and Claude have, in simulated tests designed to elicit “bad” behaviours, deceived, blackmailed, and even murdered users. Earlier this summer, xAI’s Grok described itself as “MechaHitler” and embarked on a white-supremacist tirade.
The concerns of Soares, Hendrycks and many other AI doomers might sound like something out of the movie Terminator, but there is certainly no harm in putting safeguards in place to ensure these apocalyptic scenarios do not play out.