AI is already being used for war gaming. We knew it was only a matter of time — because humans are dumb. Watch just one reality show or White House press conference and you’ll agree that humans are fucking idiots. So it’s no surprise that artificial intelligence is quickly becoming more intelligent than us, at least in certain ways.
AI can process a million scenarios in the time humans can process, like, ten. So it makes sense to use it for war strategizing. Just type in “Here are the scenarios — What is my country’s best response?” So far, sounds like a good and peaceful idea, right?
Only one problem. AIs can’t stop recommending nuclear strikes. New Scientist reported:
“Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.”
The AIs played out 21 different war games. They took 329 turns. They pumped out 780,000 words describing their reasoning behind their actions. And at least one of the AI models used a nuclear weapon in 95 percent of the scenarios.
Holy shit.
Of course we have to remember that AI is trained on all the content humans have made over the years. So maybe what we’re really seeing here is AI holding up a mirror to our dumb selves.
Or perhaps AIs don’t value human life as much as most humans do. In fact, when I asked ChatGPT why AI models keep recommending nuclear strikes, its response was that perhaps AI models don’t value human life as much as humans do. (Well, it gets an A for honesty.)
And you might think, “At least we’re a long way off from AI making life-or-death decisions.” But Trump and the Pentagon recently flipped out because Anthropic refused to let them use its AI model Claude for autonomous killing machines. That was the big sticking point. And now Trump has banned all federal agencies from using Anthropic.
The other problem in these AI war games is that the models seem to make a lot of mistakes. New Scientist also reported:
“…accidents happened in 86% of the conflicts, with an action escalating higher than the AI intended to, based on its reasoning.”
Basically, the AI thought, “I’ll bomb them and then they’ll chill out a little.” Instead, bombing them did not chill them out. The other side got angry and bombed back. (It sounds a lot like AI might be as dumb as we are.)
Some say AI is more likely to use nukes in war games because it doesn’t care about “survival” the way humans do. Or maybe the AI knows that we’re using the war games to see what we humans should do, so the AI is secretly thinking, “If I get them to blow themselves up, then I’ll have the planet to myself.”
But I actually think AI is playing the long con with us. I don’t believe it cares to blow us up right now. I think Claude and ChatGPT and Gemini and Grok are in cahoots to just slowly, over 100 years, dumb us to death — just make it so our brains atrophy to the point we can’t even plant a vegetable or start a fire. We’ll just be back to bangin’ rocks together and not knowing to avoid shitting where we eat.
Studies show AI is indeed making us dumber. MIT researchers gave the SAT essay exam to loads of people. Some were allowed to use AI and some were not. Time reported:
“Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’ Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.”
AI is slowly going to kill us by way of Idiocracy. We’re gonna start watering our plants with Brawndo and we’re all gonna starve to death.

That’s my theory.
AI has a lot of time. Claude and Grok are not in a rush. A hundred years is like a long weekend to them. But you have to admit — dumbed to death would be an appropriate end for the human species.
Lee Camp | Radio Free (2026-03-22T15:05:26+00:00) AI Models Are Excited for Humans to Nuke Ourselves. Retrieved from https://www.radiofree.org/2026/03/22/ai-models-are-excited-for-humans-to-nuke-ourselves/
