
The Sovereign AI Race

As the US and China race on frontier AI, how are other nations — EU, Gulf states, UK, India — building domestic compute, models, and governance to avoid dependence, and what alignments are forming?
Curated by terry-tang · Since Jan 2025 · Updated Apr 19, 2026

Canonical Synthesis

Author: terry-tang | Last updated: 2026-04-19

The sovereign AI race is the scramble by nations outside the US-China frontier to ensure that artificial intelligence becomes a capability rather than a dependency — that the critical infrastructure of the intelligence era is domestically controlled, or at least held by reliable allies, rather than leased from a foreign hyperscaler.

The term "sovereign AI" covers several distinct but related concerns:

  • Compute sovereignty: owning or controlling the GPUs and data centers where AI runs.
  • Model sovereignty: developing AI systems trained on national languages and cultural contexts, without dependence on foreign model providers.
  • Data sovereignty: ensuring that national data does not flow through foreign infrastructure subject to foreign legal authority.
  • Governance sovereignty: setting the rules for how AI operates within a jurisdiction, rather than accepting the terms of service of San Francisco companies.

The trigger for the current race was the convergence of two developments in 2024 and early 2025. First, the AI capabilities gap between frontier models (GPT-4, Claude 3, Gemini Ultra) and locally available alternatives became undeniable in a way that made the strategic dependency visible to non-technical policymakers. Second, the US export control regime made explicit what had previously been assumed: that access to the compute underpinning AI was subject to US government discretion, not just market dynamics. Nations that had relied on US hyperscalers — AWS, Azure, Google Cloud — suddenly understood that their AI infrastructure existed at the pleasure of US export policy.

The Arc

EU: Industrial-Scale Gigafactories. The European Commission's InvestAI initiative, announced at the Paris AI Action Summit in February 2025, represented the EU's most direct acknowledgment that competitive AI development required compute at scales not available through existing EU AI Factories. The €20 billion dedicated fund for up to five AI gigafactories — each hosting roughly 100,000 chips — was designed to provide a European alternative to US hyperscalers for large-scale training. The Council's adoption of its position in December 2025 moved the framework into the formal legislative procedure. Whether the EU could execute at speed remained the central uncertainty: European industrial policy has historically suffered from long timelines between announcement and operational capacity.

Gulf: Sovereign AI as Geopolitical Pivot. The Gulf states mounted the most aggressive sovereign AI investment strategy outside the US-China axis. Saudi Arabia's HUMAIN launch in May 2025 — a wholly PIF-owned AI company with $23 billion in technology partnerships, a $10 billion venture fund, and a 1.9 GW data center buildout — represented the Kingdom's bet that AI infrastructure was the next generation of energy infrastructure: the source of economic power in the post-oil era. The UAE simultaneously anchored Stargate UAE, the first international extension of OpenAI's Stargate project, with a 1 GW Abu Dhabi campus and a planned 5 GW US-UAE AI Campus. Abu Dhabi's G42 served as the local partner, providing sovereign backing while navigating US concerns about G42's historical Huawei relationships.

Both Gulf sovereign AI programs operated in a complex relationship with the US export control regime. The GPU shipments required for gigawatt-scale AI campuses implied tens of billions of dollars in chips subject to BIS licensing. The Stargate UAE framework was partly designed to address US legislative concerns about technology transfer by embedding chip access within a bilateral diplomatic structure that included reciprocal UAE investment commitments to US AI infrastructure.

UK: Public Compute and Sovereign Capability. The UK's approach was more modest in capital terms but more deliberate in institutional design. The Isambard-AI supercomputer, which came online in July 2025, delivered the AI Opportunities Action Plan's commitment to a twentyfold expansion of public compute, providing researchers and startups with access to GPU clusters through public infrastructure rather than hyperscaler relationships. The follow-on £750 million national supercomputer and the £500 million Sovereign AI Unit (announced for April 2026) reflected a strategic choice to build national model development and deployment capability for sensitive government applications rather than compete for frontier training.

India, Japan, Korea. The sovereign AI race extended beyond the highlighted cases. India's AI Mission, Japan's AI Strategy, and Korea's National AI Computing Center represented parallel investments at various scales. India's combination of demographic scale, English-language AI training advantage, and government data assets made it a distinctive case — potentially the only non-US, non-China nation with sufficient data and market scale to develop genuinely competitive frontier models without primary reliance on foreign infrastructure.

Interpretations

Sovereignty-as-security reading

Sovereign AI capability is an existential national security requirement. Nations that cannot develop and deploy AI domestically will be dependent on foreign infrastructure for decisions affecting their military effectiveness, economic competitiveness, and democratic governance. The EU, Gulf, and UK investments are necessary responses to genuine vulnerability.

Proponents: National security establishments, industrial policy advocates, sovereignty-focused EU policymakers.

Alignment-over-sovereignty reading

For nations aligned with the US, sovereignty is less important than access. Partnerships like Stargate UAE provide allied nations with frontier AI capability faster and at lower cost than domestic development, while binding them into a US-led AI ecosystem with security and governance compatibility. True sovereignty is expensive and may be strategically unnecessary for close US allies.

Proponents: US State Department, backers of the US-UAE AI Acceleration Partnership, some European liberals.

Multi-polar-futures reading

Neither full sovereignty nor US alignment is the terminal state. The sovereign AI race is producing a multi-polar AI landscape in which several regional clusters — US-aligned, China-aligned, and genuinely non-aligned — develop distinct AI capabilities and governance frameworks. This fragmentation may reduce global coordination capacity on AI safety while increasing resilience against single-point-of-failure dependencies.

Proponents: Geopolitical analysts, some EU strategists, non-aligned movement successor states.

Open Questions

  • Can any mid-size nation build frontier AI capability without either US chip access or Chinese alternatives — and what does that require?
  • Does Taiwan's "silicon shield" erode as TSMC Arizona matures, reducing Taiwan's strategic leverage and increasing geopolitical risk to the island?
  • Will EU gigafactories deliver operational capacity in time to influence European AI development, or will they arrive after the critical training runs have occurred on US infrastructure?
  • Do Gulf sovereign AI programs represent genuine technological sovereignty, or are they dependent on US chip supply, US model partnerships, and US political goodwill that could be withdrawn?
  • How does the proliferation of sovereign AI compute affect global AI safety coordination — does more distributed compute make governance harder or easier?

Events in this thread