The other AI apocalypse
AI doesn’t have to destroy us; we’re doing a fine job of that on our own.
You can’t throw a rock at a crowd without hitting someone who thinks AI will one day become powerful enough to pose an existential threat to the human race.
Here at the Center for AGI Investigations we have no time for those people.
The popular apocalypse
It takes a PhD-level leap of faith to go from “this chatbot got fired from a fast food restaurant” to “this chatbot is going to destroy all life on Earth.”
Anthropic co-founder Jack Clark recently discussed this strange disconnect in Import AI 401:
“A lot of people that care about the increasing power of AI systems and go into policy do so for fundamentally eschatological reasons - they are convinced that at some point, if badly managed or designed, powerful AI systems could end the world. They think this in a literal sense - AI may lead to the gradual and eventually total disempowerment of humans, and potentially even the death of the whole species.”
Clark goes on to question the sanity of these individuals, and to wonder about their capacity for empathy toward policymakers.
We’re more concerned about the distraction this existential nonsense creates.
Existential nonsense
There isn’t a serious technologist, researcher, or scientist working in the field of AI who isn’t aware of the potential harm vectors for AI.
The Center’s own Tristan Greene wrote a series of articles on the subject, “A beginner’s guide to the AI apocalypse,” five years ago. At this point, anyone who isn’t aware of the potential for instrumental convergence or misalignment has either never visited the tech section of any news website or somehow missed the hundreds of films depicting rogue machines.
WarGames, a film about an AI nearly starting a nuclear war, was released in 1983. It’s a safe bet that Sam Altman, Elon Musk, Yann LeCun, and Jack Clark are well aware of the underlying concepts.
The biggest threat these machines pose to the human race has nothing to do with an “AI takeover.”
The other apocalypse
All of the fear-mongering over the emergence of “human-level” AI or “artificial general intelligence” actually serves the interests of the firms developing advanced AI.
If, for example, you believe OpenAI is capable of building a machine so powerful that it could rise up and destroy humanity, then you also believe OpenAI is capable of creating a machine that’s more “intelligent” than humans are.
And, if you believe that, then you also believe that all we have to do is figure out how to make these machines safe and VOILÀ: all of our problems will be solved by godlike machine servants!
There’s only one problem: by our estimation, our species has spent about seven trillion dollars on the development of so-called “human-level” artificial intelligence. And there’s no end in sight.
The firms developing these tools do not turn a profit on them. OpenAI and Anthropic spend far more than they bring in, and Microsoft, Google, and Amazon have yet to make money off of their AI tools.
In order to justify the continuing (and inflating!) expenses, one of two things must happen:
1. AGI must magically emerge from the continued scaling of 10-year-old techniques.
2. Consumer-facing tools such as ChatGPT must become so useful that the general public is willing to pay to use them.
Ultimately, it’s AGI or bust.
Normally, this wouldn’t be such a big problem. Companies go out of business every day. But none of those companies has trillions of dollars in investments, deep government integrations, or infrastructure deals involving thousands of other companies.
If AGI happens, all we have to do is make sure we give it an off button. But if it doesn’t happen, all of that money has to be justified through consumer use cases.
Either you believe that GPT technology is a precursor to what would essentially be digital godhood, or you believe that chatbots are more monetizable to the general public than Netflix is.
The longer it takes to discover that both assumptions are wrong, the more damage will be done to the global economy.
Art by Nicole Greene