- More than an economic shock: Unlike earlier technological revolutions, the AI transition is reshaping not only how economies produce but who gets to set the rules governing them — a shift with profound political consequences.
- Governance at risk: As AI systems become embedded across finance, health care, education, and public administration, the firms controlling foundational models acquire quasi-governance roles that historically belonged to public institutions.
- An unexpected alignment: Large technology platforms and populist movements share a structural scepticism toward regulatory agencies, independent courts, and multilateral frameworks — for different reasons, but with convergent effects.
- Europe’s asymmetry: The European Union leads globally in digital regulation yet remains heavily dependent on external providers for core AI infrastructure, raising urgent questions about whether democratic societies can govern technologies whose key components lie beyond their political control.
- The defining question: The central challenge of the AI age is not whether machines surpass human intelligence, but whether the infrastructures shaping collective life remain under democratic governance or drift toward concentrated private power.
Artificial intelligence has been compared to earlier technological transformations, from the steam engine to the internet. Yet the analogy oversimplifies. Previous transformations essentially reshaped how economies produce. The AI transition is also reshaping governance: increasingly, it is about who gets to set the rules of the game.
At first glance, AI appears to follow a familiar pattern: a revolutionary general-purpose technology spreading across sectors, boosting productivity and transforming labour markets. The concentration of economic power it produces, however, is not comparable to anything seen in earlier technological eras. A small number of large technology firms control all the infrastructure required to develop and deploy advanced AI — hyperscale computing, vast proprietary datasets, specialised semiconductors, and frontier models.
That concentration is driven by steep barriers to entry and the network effects that characterise the digital economy. Training and operating advanced AI systems demands enormous fixed investments, energy, and access to global-scale computing resources and data. As a result, AI is emerging not as a widely diffused technology but as one whose core capabilities — and economic rents — are clustered in a few corporate and geographic hubs.
Much of the public debate has focused on the economic consequences of this concentration: the potential for job displacement, rising inequality, and widening gaps between technological leaders and laggards. These concerns are real. But the most profound effects of AI may ultimately prove political rather than purely economic.
When control over foundational technologies becomes concentrated, market power can evolve into rule-setting power. In earlier industrial eras, dominant firms wielded significant economic influence, but the authority to define market rules — competition policy, labour standards, financial regulation — remained largely in public hands. Today that boundary is becoming increasingly blurred.
Major technology platforms do not simply operate within regulatory systems; they actively shape them. As AI systems become embedded across finance, health care, education, public administration, and communication, the firms controlling foundational models and computing infrastructure acquire quasi-governance roles. Decisions about model design, access, pricing, and deployment carry broad social and economic consequences. In earlier eras, technological power translated into economic dominance. In the age of AI, it risks translating into governance capacity as well.
The governance paradox
This dynamic produces a striking paradox: debates with major implications for national security, economic competition, and information governance are increasingly shaped not by public institutions but by the strategic choices and rivalries of a small number of private technology firms.
This transformation is unfolding at a moment when many democracies are already experiencing heightened polarisation and growing distrust in political institutions. Over the past decade, economists and political scientists have documented how economic dislocation and the perceived loss of control contributed to the rise of populist movements. Trade shocks, automation, and the steady delegation of policymaking authority to unelected experts and bureaucratic agencies fostered a pervasive sense that key decisions affecting citizens’ lives were being made beyond democratic reach.
Artificial intelligence risks intensifying that perception. Like earlier waves of technological change, it generates significant distributional tensions: between high- and low-skill workers, leading and lagging regions, capital and labour. But it also introduces a more subtle source of political friction: the growing sense that economic and social outcomes are increasingly governed by opaque technological systems controlled by distant corporate actors. The political economy of AI is not only about who gets richer — it is about who gets to decide.
An unexpected alliance
Yet the relationship between large technology firms and contemporary populist politics is more complex than simple opposition. In some respects, the structural incentives they face are beginning to align. Both dominant digital platforms and populist movements share a deep scepticism toward institutional constraints that slow decision-making and impose oversight.
Regulatory agencies, independent courts, and multilateral governance structures are routinely portrayed by populist leaders as obstacles to decisive political action. For large technology firms, those same institutional frameworks can appear as barriers to rapid scaling, market expansion, or data access and use. The motivations differ, but the logic converges: digital monopolies and populist politics alike tend to treat institutional constraints less as safeguards of democratic governance than as obstacles to speed, scale, and control.
Recent political developments illustrate the dynamic vividly. In the United States, several proposals associated with populist political agendas — from weakening independent regulatory agencies to reducing antitrust enforcement or limiting the scope of federal oversight of digital platforms — would effectively expand the operating space of large technology firms. Similar patterns can be observed elsewhere, where attacks on courts, regulators, or supranational institutions often have the side effect of eroding the institutional constraints that have historically mediated the relationship between markets and democratic governance.
The information environment reinforces this dynamic further. Digital platforms play a central role in shaping how information is produced, distributed, and monetised. Political actors increasingly rely on these infrastructures to mobilise support and communicate with voters, often bypassing traditional intermediaries such as parties, unions, and legacy media. AI is poised to deepen this transformation. Generative AI systems are beginning to reshape how text, images, and video are produced and circulated, blurring the boundary between authentic and synthetic information — and increasingly between the true and the false. Control over foundational models and computing infrastructure is rapidly becoming strategically significant not only for economic competition but for shaping political debate itself.
Governing technology?
None of this is fated. Technological revolutions do not have predetermined consequences and do not automatically erode democratic governance. But, as with any form of societal change, their political consequences depend heavily on how the transition is governed and how the gains are distributed.
For Europe in particular, the challenge is acute. The continent has become a global leader in regulating digital markets but remains heavily dependent on external providers for the core infrastructures of AI. This asymmetry raises a deeper question: can democratic societies effectively govern technologies whose key infrastructures lie largely outside their political control?
In the absence of that capacity, the governance of AI risks drifting toward a system in which private technological power increasingly defines the boundaries of democratic authority.
The defining question of our age may not be whether machines become more intelligent than humans, but whether the infrastructures that shape collective life remain governed by democratic institutions rather than by ever more concentrated private power. Whether AI proves compatible with democratic governance will depend less on the technology itself than on how power over its development and deployment is structured — and on who ultimately gets to write the rules.