Small States, Big Choices: Singapore’s Approach to Sovereignty in the Age of AI
How Singapore redefines AI sovereignty for small states—not as self-reliance, but as a spectrum of strategic postures across the AI stack.
When the world’s largest AI summit wrapped up in New Delhi last week, it produced the expected pageantry: 88 nations signing the New Delhi Declaration, heads of state taking photographs with Silicon Valley CEOs, and the familiar rhetoric about “democratizing AI.” Yet beneath the declarations, a far more candid conversation was unfolding in the corridors of Bharat Mandapam. As TIME magazine observed, delegates from “middle powers” wrestled with an uncomfortable truth: the overwhelming majority of global AI compute, data, and frontier talent remains concentrated in the United States and China. For most nations, the gap between aspiration and capability is not just wide—it is structurally embedded.
Singapore, a signatory to the New Delhi Declaration and one of the summit’s quietly influential voices, understands this gap better than most. A city-state of 5.9 million people with no natural resources and a land area smaller than Los Angeles, Singapore has no plausible path to AI autarky. And yet, in the weeks surrounding the New Delhi summit, it unveiled one of the world’s most coherent national AI strategies—not by racing to build the biggest models or hoard the most chips, but by adopting a carefully differentiated set of postures across each layer of the AI stack.
This distinction, between pursuing self-sufficiency across the board and calibrating a posture for each layer of the stack, matters enormously. For small, open economies navigating the age of AI, Singapore’s approach offers a template that is both intellectually serious and practically executable.
The Autarky Trap: Why the Sovereignty Debate Is Asking the Wrong Question
The concept of AI sovereignty has a seductive simplicity to it. Who owns the data? Who trains the models? Who controls the compute? In the mainstream framing—visible in the rhetoric of both Washington and Beijing—sovereignty is essentially synonymous with dominance. The nation that leads in AI leads the world.
This framing works reasonably well as geopolitical shorthand for the United States, which commands extraordinary concentrations of frontier AI infrastructure, and for China, which has matched that ambition with state-directed industrial policy on a massive scale. The EU, for its part, has staked its claim on regulatory sovereignty—shaping AI governance through the AI Act in ways that larger markets can afford to enforce. But for the vast majority of nations—including nearly all of Southeast Asia, the Middle East, Africa, and Latin America—the “race for self-reliance” framing is not merely unrealistic. It is actively misleading.
AI sovereignty, properly understood, is not a destination. It is a capacity: the ability of a state to make meaningful choices about how AI is developed, deployed, and governed within its borders and in its name. That capacity does not require building everything from scratch. It requires building in the right places, partnering wisely in others, and maintaining enough institutional coherence to keep choices in domestic hands.
Singapore’s National AI Strategy 2.0 (NAIS 2.0), launched in 2023 and now mid-implementation, offers what may be the clearest articulation of this alternative model in the world. Rather than pretending to compete with hyperscalers on their own terms, Singapore has asked a more precise question: where across the AI stack must we build sovereign capacity, and where can we safely depend on trusted partners?
Singapore’s Layered Strategy: Sovereignty Across the AI Stack
Understanding Singapore’s approach requires examining the AI stack not as a monolith but as a series of distinct layers—each with its own strategic logic, its own risk profile, and its own implications for sovereignty.
| AI Stack Layer | Singapore’s Posture | Key Initiatives |
|---|---|---|
| Compute | Selective self-sufficiency + trusted partnerships | NAIRD Plan; GPU clusters at NUS/NTU; ECI cloud partnerships ($150M) |
| Data | Domestic control with cross-border access frameworks | Privacy-Enhancing Technologies (PETs) R&D; unlocking government data |
| Foundation Models | Strategic independence via niche capability | SEA-LION multilingual LLM; international model collaboration |
| Applications | Broad deployment across key sectors | National AI Missions in manufacturing, finance, healthcare, logistics |
| Governance | Global standard-setting leadership | AI Verify toolkit; Project Moonshot; US-Singapore Critical Tech Dialogue |
Compute: Selective Self-Sufficiency
Singapore is not trying to build a domestic semiconductor industry. That race belongs to Taiwan, South Korea, and increasingly the United States and China. What Singapore is doing is ensuring it maintains adequate sovereign compute capacity for research and government use—while securing deep partnerships with global cloud providers for everything else.
The S$1 billion National AI Research and Development (NAIRD) Plan, running from 2025 to 2030, includes dedicated GPU infrastructure operated for the Singapore research community. Alongside this, Computer Weekly reports that a $150 million Enterprise Compute Initiative facilitates SME access to cutting-edge cloud AI tools through trusted commercial partners. This is not autarky—it is calibrated dependency: maintaining sovereign research capacity while leveraging global infrastructure for commercial scale.
Prime Minister Lawrence Wong was direct about this posture in his Budget 2026 speech: “Our advantage does not lie in building the largest frontier models.” Singapore is instead focused on deploying AI faster and more coherently than larger countries—a form of competitive advantage that requires institutional strength rather than raw technological scale.
Data: Domestic Control, Global Connectivity
Data sovereignty is the layer where small states arguably have the most to gain and the most to lose. Singapore’s approach here is nuanced: it is investing heavily in Privacy-Enhancing Technologies (PETs) that allow data to be used for AI training without being exposed or transferred, while simultaneously advocating for trusted cross-border data flows as a global norm.
This dual posture reflects Singapore’s economic reality. As a financial, logistics, and biomedical hub, Singapore processes an extraordinary volume of sensitive data from across Asia and the world. Restricting data flows would damage its economic model. Failing to protect data sovereignty would expose it to the kind of dependency that compromises meaningful agency. PETs offer a potential third path—allowing participation in global AI ecosystems without surrendering control over the underlying information.
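The logic of PETs can be made concrete with a minimal sketch of one representative technique, differential privacy: a data holder answers aggregate queries after adding calibrated noise, so the statistic is shared but no individual record is ever exposed. The hospital scenario, records, and epsilon value below are illustrative placeholders, not drawn from any actual Singapore system.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Differentially private count query.

    A count has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustration: a hospital releases how many patients match a condition
# without ever exporting the underlying records.
patients = [{"has_condition": i % 4 == 0} for i in range(200)]  # toy data
noisy = private_count(patients, lambda p: p["has_condition"], epsilon=0.5)
```

The released number is useful in aggregate (averaging many such answers converges on the true count of 50) while any single answer reveals little about any one patient. Production PET deployments combine mechanisms like this with federated learning, secure enclaves, or homomorphic encryption.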
Models: Strategic Independence Through Niche Capability
Singapore is one of the few small states to have invested in developing its own large language model. The SEA-LION (Southeast Asian Languages In One Network) model, developed through IMDA, addresses a critical gap: Southeast Asian languages are dramatically underrepresented in global foundation models trained primarily on English-language data. This is not merely a cultural concern—it has concrete consequences for healthcare AI, legal AI, and government services across the region.
SEA-LION represents a specific kind of sovereign capability: not competing with OpenAI or Google on frontier reasoning, but ensuring that AI applications serving Singapore and the broader region reflect local languages, contexts, and values. It is sovereignty by differentiation rather than by scale.
Applications: Depth Over Breadth
Budget 2026’s establishment of National AI Missions in four sectors—advanced manufacturing, connectivity and logistics, finance, and healthcare—signals a deliberate concentration of deployment effort. Rather than spreading AI adoption thinly across the entire economy, Singapore is betting on achieving genuine transformation in sectors where it has comparative advantage and where AI can address its most pressing structural challenges: a tight labour market and an ageing population.
The accompanying “Champions of AI” program offers enterprises 400% tax deductions on qualifying AI expenditures (capped at S$50,000, effective 2027–2028)—a fiscal instrument designed to lower the activation energy for SME adoption without distorting incentives toward vanity implementations.
Governance: The Most Underrated Layer of Sovereignty
Of all the layers, governance may be where Singapore’s sovereignty strategy is most original. The AI Verify testing framework and Project Moonshot—one of the world’s first LLM evaluation toolkits—represent Singapore’s bid to become a global standard-setter rather than a standard-taker in AI governance.
This matters strategically. Nations that can shape international AI norms wield influence disproportionate to their size. Singapore’s active participation in the Global Partnership on AI (GPAI), its US-Singapore Critical and Emerging Technology Dialogue, and its contributions to the UN High-Level Advisory Body on AI have established it as a trusted interlocutor across geopolitical divides—a position that larger powers, constrained by rivalry, cannot easily occupy.
The newly formed National AI Council, chaired by PM Wong himself and spanning six ministries plus private sector representatives, is designed to ensure that this whole-of-stack strategy is coordinated from the top. As Intracorp Asia noted, Singapore is aiming to make AI “a practical instrument of competitiveness, not a slogan.”
Comparative Lessons: Switzerland, Estonia, and the Limits of the Singapore Model
Singapore is not the only small state grappling intelligently with AI sovereignty. Switzerland has leveraged its neutrality and institutional quality to attract international AI governance bodies and frontier AI research (EPFL’s contributions to open-source AI are globally significant). Estonia, with its pioneering digital government infrastructure, has demonstrated that sovereignty in the application layer can be achieved independently of frontier model capabilities—its X-Road data exchange platform remains one of the most sophisticated sovereignty-preserving digital architectures in the world.
But Singapore’s approach has features that distinguish it from both. Unlike Switzerland, it is operating in a geopolitically contested neighborhood—ASEAN sits at the intersection of US-China strategic competition in ways that Europe does not. Unlike Estonia, it is an economic hub rather than a digital governance laboratory, which means its AI strategy must simultaneously serve commercial competitiveness, national security, and regional influence.
Singapore’s “balanced posture”—maintaining deep technology partnerships with American hyperscalers and defence partners while refusing to shut out Chinese technology firms entirely, and building Southeast Asian-specific capabilities that serve neither Washington’s nor Beijing’s AI agenda exclusively—is inherently fragile. It requires constant diplomatic management and a credibility that is earned, not inherited.
The risk, as geopolitical tensions intensify, is that this balance becomes harder to maintain. US export controls on advanced semiconductors, Chinese pressure on supply chains, and the broader de-globalization of AI infrastructure all create pressure on small states to pick sides. Singapore’s answer, at least for now, is to make itself too valuable as a neutral hub to be squeezed out entirely.
Economic and Geopolitical Implications: Agency Without Illusions
What does Singapore’s model mean in practice for its economic competitiveness and global influence?
On the economic side, the gains are potentially substantial. Singapore’s generative AI market is forecast to grow at over 46% annually through 2030, reaching US$5 billion. The NAIRD Plan’s investment in applied AI across nine priority sectors—from climate modelling to drug discovery—positions Singapore to capture high-value economic activities at the frontier of what AI can do. The AI Park at One-North, announced in Budget 2026, is designed as a physical ecosystem where startups, research institutions, and multinationals can co-develop applications—a model of deliberate clustering that Singapore has used successfully in biomedical sciences and fintech.
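The compounding behind that forecast is simple to make concrete. The base value and horizon below are illustrative placeholders; the article states only the roughly 46% growth rate and the US$5 billion 2030 endpoint.

```python
def project_market(base_usd_b: float, cagr: float, years: int) -> float:
    """Market size after `years` of constant compound annual growth."""
    return base_usd_b * (1.0 + cagr) ** years

# At 46% a year, a market more than doubles every two years:
# an illustrative US$1B base grows to about US$2.13B after two years.
```

The practical point is that a mid-40s CAGR implies the market multiplies several times over even a short planning horizon, which is why the NAIRD Plan's 2025–2030 window matters.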
On the geopolitical side, Singapore’s influence will be felt most through standard-setting and norm entrepreneurship. If AI Verify and Project Moonshot achieve international adoption—particularly across ASEAN and the Global South, where governance capacity is weakest—Singapore will have shaped AI deployment practices for a significant portion of the world’s population. This is soft power of a meaningful kind: not projecting values through cultural influence, but building technical infrastructure that embeds particular governance choices.
The risks are real too. Concentration of AI infrastructure in the hands of a handful of global hyperscalers—most of them American—creates a form of dependency that no partnership agreement fully resolves. Singapore’s cloud compute partnerships come with terms of service, export compliance requirements, and geopolitical conditions that are ultimately set elsewhere. And the race to attract AI investment means competing with much larger jurisdictions—Saudi Arabia, the UAE, India—that can offer cheaper power, larger data markets, and, in some cases, fewer regulatory constraints.
Singapore’s edge in this competition is not scale; it is quality: of institutions, of rule of law, of talent density, and of the kind of trustworthiness that makes sensitive AI deployments in finance, healthcare, and government feel safe. That edge is real, but it requires constant investment to maintain.
Conclusion: Agency Over Autarky—A Model for the World
The New Delhi Declaration’s endorsement by 88 nations, including Singapore, reflects a genuine global desire for a different kind of AI future—one not defined purely by the strategic competition of the two superpowers. But declarations are not strategies. The gap between aspiring to AI sovereignty and achieving meaningful AI agency is where most nations will struggle.
Singapore’s approach suggests a more useful framework for small states confronting this challenge. The core insight is that sovereignty is not a binary condition—you either have it or you don’t—but a portfolio of strategic postures calibrated to each layer of the AI stack. You defend your sovereignty where the risks of dependency are highest (sensitive data, critical applications, governance norms). You embrace interdependence where the gains from collaboration outweigh the risks (frontier compute, foundation models, global research). And you invest relentlessly in the institutional quality that makes your choices credible to partners and rivals alike.
For policymakers in small and medium-sized economies—from Nairobi to Bogotá, from Tallinn to Kuala Lumpur—Singapore’s model offers not a blueprint to copy but a logic to adapt. The question is not whether your country can achieve AI self-sufficiency. It almost certainly cannot. The question is whether you have the institutional coherence, the diplomatic agility, and the strategic clarity to make AI work for you on your own terms.
That is what sovereignty actually requires. Not the biggest model. Not the most chips. But the wisdom to know which choices are yours to make, and the capacity to make them well.