For a short month, February’s final week packed in events guaranteed to last us a lifetime. It wasn’t just the United States and Israel going to war with Iran; hidden behind the fury unfolding across the Middle East was a meaningful week for AI, with effects that will be felt by all of humanity. Consider a sampling:
Besides the usual warfighting inventory—cruise missiles, warships, B-2 stealth bombers, high mobility artillery rocket systems, drones—U.S. Central Command reportedly deployed Claude in the Iran operations for intelligence assessments, target identification, and battle simulations. AI’s use offers clues to how U.S. and Israeli forces pulled off a stunningly swift decapitation of Iran’s leadership, even though it offered no guarantee of strike accuracy. A collateral hit on Shajarah Tayyebeh girls’ elementary school in southern Iran killed 175 people, most of them likely children, according to Iranian authorities.
Meanwhile, Anthropic, the maker of Claude, was expelled by the Pentagon for refusing to allow its models to be used to power autonomous weapons and mass surveillance. It was replaced by its rival, OpenAI, which promises guardrails but offers no evidence they exist.
Separately, on Feb. 26, one of the most closely followed tech luminaries, Jack Dorsey, announced that his company, Block, would lay off 40 percent of its workforce because, he said, AI could take over the work; the stock market cheered. Just four days before that announcement, a piece of speculative financial fiction from Citrini Research painted a scenario of widespread AI-induced job displacement in 2028, bringing prosperity for a few alongside broad societal decline. The Dow reacted by dropping more than 800 points. The same week, software sector stocks rebounded after shedding $1 trillion in value earlier in February, when investors feared that AI-generated code could do these companies’ work.
These developments came on the heels of a viral post warning that AI is advancing faster than most users realize. Yet February also closed with news of data center construction cancellations in the United States, partly due to widespread community resistance. And given the region’s new fragility, pledges made by Saudi Arabia, Qatar, and the United Arab Emirates to establish an AI infrastructure hub are now in question.
The war on Iran has had other domino effects, too. The semiconductors that the AI industry relies on need critical supplies of helium and sulfur that pass through the Strait of Hormuz, which is now effectively unnavigable.
These are not separate stories. They are facets of a single phenomenon: a convergence of technological, economic, geopolitical, and institutional risks that have ratcheted up recently, suggesting that we are lurching toward an “AI doomsday”; that is, a situation in which, despite its many benefits, the technology makes society significantly worse overall. This is not the work of a single force, such as existential threats, job devastation, or autonomous deployment of weapons. It is, rather, a system of interconnected AI-related forces producing unchecked problems, paired with inadequate institutional coordination and a dearth of leaders with the imagination to get ahead of those problems before they escalate.
For over 35 years, I have studied and worked on the impact of AI and digital technologies: through high-tech research and development; by advising tech industry leaders; and by leading Digital Planet, a research center on the global digital economy, for 15 years. Let me also declare that I’ve been an AI enthusiast since 1991, through the second “AI winter,” when the technology was far from cool.
My own experience and research over the years suggest that the technology can be transformational in a breadth of areas. As a researcher, I experience the power of AI tools firsthand and appreciate their drawbacks, along with potential solutions.
Yet I am now placing a high probability on an AI doomsday. Let me count the seven horsemen of a possible AI apocalypse.
Jobs Displacement
Our new Digital Planet study, “Will Wired Belts Become the New Rust Belts?” analyzes AI’s impact on 784 occupations in every major U.S. industry and the economic effects on locations across the country. We find that every 1 percentage point of job automation will be accompanied by a 0.75 percentage point loss in jobs. Workers whose tasks are most enhanced by AI are also the most likely to be replaced by it. In less than five years, the United States could lose the equivalent of the economy of Belgium due to AI-led job displacement, and up to the equivalent of the economy of South Korea if AI adoption happens faster. Major job hubs such as the Washington, D.C., metro area, San Francisco, and the Boston region will be the most affected; 40 percent of job losses will be in California, Texas, New York, Florida, and Illinois. This suggests the arrival of a new Engels’ pause, the period during the early stages of the Industrial Revolution when working-class wages stagnated as industrial productivity increased, causing extreme income inequality, with social and political fallout.
Epistemic Crisis
According to Model Evaluation and Threat Research (METR), a research institute, the length of tasks AI can complete has been doubling roughly every seven months. However, there are several ways this progress can degrade from here. One is data scarcity. A 2024 study suggested that the supply of new human-generated text suitable for training could be exhausted by 2032, leaving few options other than AI generating the data for its own training. When models are repeatedly trained on such synthetic inputs, degradation of the output is inevitable, leading to what’s known as “model collapse.”
A second is the degradation of the inputs themselves. Over 90 percent of all web content may already be AI-generated, and the proliferation of AI slop (large volumes of low-quality content) and disinformation, already amplified during elections, conflicts, and other critical socio-political transitions, further pollutes the data that future models will train on.
A third cause of the looming epistemic crisis has to do with users themselves. Users lose the capacity to distinguish real from AI-generated content and stop trying; persistent reliance on AI could degrade skills among students and across occupations, from coders and medical professionals to employees in every industry. No amount of productivity enhancement can compensate for cognitive abilities lost to over-reliance on AI.
Infrastructure Chokepoints

Guests look at a model of what is expected to be the largest data center in the United Arab Emirates when construction is complete, seen in Abu Dhabi on Nov. 3, 2025. Giuseppe Cacace/AFP via Getty Images
AI’s advances must be matched with supporting infrastructure, especially in energy. Data centers will account for about half of U.S. power demand growth for the remainder of this decade, according to the International Energy Agency. The agency also warns of potential delays in the construction of 20 percent of planned data centers. Wars in the Middle East are exacerbating the energy crunch. Even as AI companies pledge to pay for rising energy costs, their efforts may not ease the stress in a timely manner, as quicker-to-build solar and wind power sources have been shunned by the Trump administration.
A second infrastructural deficiency is less tangible. The United States and Western democracies face a public trust deficit in AI. Combating this requires investment in a trust infrastructure, including objective evaluations of AI systems; tests of safety, explainability, and governance; and preemptive management of the most immediate risks, especially job displacement. For now, this infrastructure is sorely missing, and trust is dropping.
Wars, Cold and Hot
The AI cold war between the United States and China is a multidimensional rivalry spanning chip supply chains, energy grids, classified military networks, and dueling AI models. While the United States leads in compute power—its AI compute of 39.7 million petaflops (a petaflop is a quadrillion calculations per second) outpaces China’s 400,000 petaflops—China has countervailing advantages that intensify the rivalry. China’s manufacturing dominance, ability to coordinate state resources, and energy infrastructure are powerful assets. Its 80-100 percent energy reserve margin versus the United States’ 15 percent could prove decisive. China also leads in open-source AI, and its citizens trust AI more.
Unlike the previous Cold War, this one has led to few major scientific breakthroughs; frugal Chinese innovation in developing cutting-edge AI models without cutting-edge chips is a rare exception. There is growing fragmentation of the AI ecosystem, alongside mutual suspicion fed by persistent IP theft, digital sabotage, and the absence of uniform frameworks and standards. The rivalry also makes automated attacks on power grids or data centers more likely. Beyond the superpowers, weaker actors can impose disproportionate costs through AI-assisted attacks, perpetuating conflict by reducing the deterrent value of military superiority. The Iran war has already heralded a new era of AI-powered warfighting and narrative manipulation. Meanwhile, international efforts to regulate the weaponization of AI lag behind the pace of deployment.
Absent Institutions

A figure in front of the logo of artificial intelligence company Anthropic during a photo session in Paris on Feb. 13. Joel Saget/AFP via Getty Images
As noted earlier, the trust infrastructure is critical, but it requires institutional safeguards facilitated by resources, vision, and political courage. Unfortunately, such systems have been framed as speed bumps in the AI race, especially in the United States. Not only has the Trump administration abandoned plans for regulating AI, it has actively discouraged companies from hardwiring safeguards; consider its designation of Anthropic as a “supply chain risk” for insisting on safeguards. Such dismissal of safety oversight from the global AI leader establishes a template for other nations, creating the conditions for a catastrophic downward spiral.
Fickle Markets

Indian Prime Minister Narendra Modi takes a group photo with artificial intelligence company leaders, including OpenAI CEO Sam Altman (center) and Anthropic CEO Dario Amodei (right), at the AI Impact Summit in New Delhi on Feb. 19. Ludovic Marin/AFP via Getty Images
The stock market is making guesses about the future of AI, and it swoons and rebounds based on the slimmest of signals: from viral LinkedIn posts to credulous articles about the newest models to speculative fiction about “ghost GDP.” Instead of reliable leaders with steady hands, we are left with dueling AI prophets, Sam Altman and Dario Amodei, who would not even hold hands during a photo op at the India AI Impact Summit; an erratic U.S. government egging the industry on; and volatile stock markets. Combined with a fast-moving technology, this doesn’t bode well for the management of technological transitions. Already, countries are dealing with the unchecked fallout of income inequality, job displacement, and the rise of anti-democratic politics; the AI transition becomes a multiplier of prevailing tensions.
The Convergence
None of these risks exist in isolation. A military AI escalation undermines efforts to build institutional safeguards and can also trigger an energy crisis as nations pursue sovereign AI. This in turn reinforces global fragmentation, making governance of AI impossible precisely when the trust infrastructure is most needed. As Wall Street rewards those CEOs who have laid off workers, and makes those who have not fear its invisible hand, job displacements pile up. The resulting exacerbation of societal inequalities generates a new breed of hostile white-collar political constituencies armed with tools and networks of activism that their blue-collar predecessors lacked. To add to the mix, AI acceleration is financed by markets that swing wildly on speculation, governed by administrations that contradict themselves weekly, and debated in an information environment that AI itself is degrading. The AI cold war not only prevents international coordination on these fronts, it also feeds the myth that governance controls would lead to losing the race. Each force creates conditions that worsen the others, producing a self-reinforcing cascade.
Instead of reflexively drawing technological parallels with the Industrial Revolution or electrification, we ought to consider AI alongside nuclear weapons: analogous systemic risks, a similar need for coordination, and early years marked by brinkmanship, market speculation, and institutional inadequacy. It took the Cuban missile crisis to produce the frameworks of nuclear deterrence and equilibrium. Instead of waiting for an AI near-miss, we should preemptively study the lessons of nuclear history.
We can certainly benefit from even better AI model capabilities; however, the moment now requires shifting priorities to building trust architectures, governance frameworks, and coordination mechanisms. We have invested over $1 trillion in building the Ferrari. We have neglected the roads. Whether the opportunity window remains open depends on choices being made now in boardrooms, legislatures, and even AI summits, where the people responsible for one of the most powerful technologies in history cannot even agree to hold hands.
