
Small States, Big Choices: Singapore’s Approach to Sovereignty in the Age of AI


How Singapore redefines AI sovereignty for small states—not as self-reliance, but as a spectrum of strategic postures across the AI stack.

When the world’s largest AI summit wrapped up in New Delhi last week, it produced the expected pageantry: 88 nations signing the New Delhi Declaration, heads of state taking photographs with Silicon Valley CEOs, and the familiar rhetoric about “democratizing AI.” Yet beneath the declarations, a far more candid conversation was unfolding in the corridors of Bharat Mandapam. As TIME observed, delegates from “middle powers” wrestled with an uncomfortable truth: the overwhelming majority of global AI compute, data, and frontier talent remains concentrated in the United States and China. For most nations, the gap between aspiration and capability is not just wide—it is structurally embedded.

Singapore, a signatory to the New Delhi Declaration and one of the summit’s quietly influential voices, understands this gap better than most. A city-state of 5.9 million people with no natural resources and a land area smaller than Los Angeles, Singapore has no plausible path to AI autarky. And yet, in the weeks surrounding the New Delhi summit, it unveiled one of the world’s most coherent national AI strategies—not by racing to build the biggest models or hoard the most chips, but by adopting a carefully differentiated set of postures across each layer of the AI stack.

This distinction matters enormously. For small, open economies navigating the age of AI, Singapore’s approach offers a template that is both intellectually serious and practically executable.

The Autarky Trap: Why the Sovereignty Debate Is Asking the Wrong Question

The concept of AI sovereignty has a seductive simplicity to it. Who owns the data? Who trains the models? Who controls the compute? In the mainstream framing—visible in the rhetoric of both Washington and Beijing—sovereignty is essentially synonymous with dominance. The nation that leads in AI leads the world.

This framing works reasonably well as geopolitical shorthand for the United States, which commands extraordinary concentrations of frontier AI infrastructure, and for China, which has matched that ambition with state-directed industrial policy on a massive scale. The EU, for its part, has staked its claim on regulatory sovereignty—shaping AI governance through the AI Act in ways that larger markets can afford to enforce. But for the vast majority of nations—including nearly all of Southeast Asia, the Middle East, Africa, and Latin America—the “race for self-reliance” framing is not merely unrealistic. It is actively misleading.

AI sovereignty, properly understood, is not a destination. It is a capacity: the ability of a state to make meaningful choices about how AI is developed, deployed, and governed within its borders and in its name. That capacity does not require building everything from scratch. It requires building in the right places, partnering wisely in others, and maintaining enough institutional coherence to keep choices in domestic hands.

Singapore’s National AI Strategy 2.0 (NAIS 2.0), launched in 2023 and now mid-implementation, offers what may be the clearest articulation of this alternative model in the world. Rather than pretending to compete with hyperscalers on their own terms, Singapore has asked a more precise question: where across the AI stack must we build sovereign capacity, and where can we safely depend on trusted partners?

Singapore’s Layered Strategy: Sovereignty Across the AI Stack

Understanding Singapore’s approach requires examining the AI stack not as a monolith but as a series of distinct layers—each with its own strategic logic, its own risk profile, and its own implications for sovereignty.

| AI Stack Layer | Singapore’s Posture | Key Initiatives |
| --- | --- | --- |
| Compute | Selective self-sufficiency + trusted partnerships | NAIRD Plan; GPU clusters at NUS/NTU; ECI cloud partnerships ($150M) |
| Data | Domestic control with cross-border access frameworks | Privacy-Enhancing Technologies (PETs) R&D; unlocking government data |
| Foundation Models | Strategic independence via niche capability | SEA-LION multilingual LLM; international model collaboration |
| Applications | Broad deployment across key sectors | National AI Missions in manufacturing, finance, healthcare, logistics |
| Governance | Global standard-setting leadership | AI Verify toolkit; Project Moonshot; US-Singapore Critical Tech Dialogue |

Compute: Selective Self-Sufficiency

Singapore is not trying to build a domestic semiconductor industry. That race belongs to Taiwan, South Korea, and increasingly the United States and China. What Singapore is doing is ensuring it maintains adequate sovereign compute capacity for research and government use—while securing deep partnerships with global cloud providers for everything else.

The S$1 billion National AI Research and Development (NAIRD) Plan, running from 2025 to 2030, includes dedicated GPU infrastructure operated for the Singapore research community. Alongside this, Computer Weekly reports that a $150 million Enterprise Compute Initiative facilitates SME access to cutting-edge cloud AI tools through trusted commercial partners. This is not autarky—it is calibrated dependency: maintaining sovereign research capacity while leveraging global infrastructure for commercial scale.

Prime Minister Lawrence Wong was direct about this posture in his Budget 2026 speech: “Our advantage does not lie in building the largest frontier models.” Singapore is instead focused on deploying AI faster and more coherently than larger countries—a form of competitive advantage that requires institutional strength rather than raw technological scale.

Data: Domestic Control, Global Connectivity

Data sovereignty is the layer where small states arguably have the most to gain and the most to lose. Singapore’s approach here is nuanced: it is investing heavily in Privacy-Enhancing Technologies (PETs) that allow data to be used for AI training without being exposed or transferred, while simultaneously advocating for trusted cross-border data flows as a global norm.

This dual posture reflects Singapore’s economic reality. As a financial, logistics, and biomedical hub, Singapore processes an extraordinary volume of sensitive data from across Asia and the world. Restricting data flows would damage its economic model. Failing to protect data sovereignty would expose it to the kind of dependency that compromises meaningful agency. PETs offer a potential third path—allowing participation in global AI ecosystems without surrendering control over the underlying information.

Models: Strategic Independence Through Niche Capability

Singapore is one of the few small states to have invested in developing its own large language model. The SEA-LION (South-East Asian Languages in One Network) model, developed through IMDA, addresses a critical gap: Southeast Asian languages are dramatically underrepresented in global foundation models trained primarily on English-language data. This is not merely a cultural concern—it has concrete consequences for healthcare AI, legal AI, and government services across the region.

SEA-LION represents a specific kind of sovereign capability: not competing with OpenAI or Google on frontier reasoning, but ensuring that AI applications serving Singapore and the broader region reflect local languages, contexts, and values. It is sovereignty by differentiation rather than by scale.

Applications: Depth Over Breadth

Budget 2026’s establishment of National AI Missions in four sectors—advanced manufacturing, connectivity and logistics, finance, and healthcare—signals a deliberate concentration of deployment effort. Rather than spreading AI adoption thinly across the entire economy, Singapore is betting on achieving genuine transformation in sectors where it has comparative advantage and where AI can address its most pressing structural challenges: a tight labour market and an ageing population.

The accompanying “Champions of AI” program offers enterprises 400% tax deductions on qualifying AI expenditures (capped at S$50,000, effective 2027–2028)—a fiscal instrument designed to lower the activation energy for SME adoption without distorting incentives toward vanity implementations.
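To make the deduction arithmetic concrete, here is a minimal sketch of how an enhanced deduction interacts with a cap. It assumes the S$50,000 cap applies to the deduction itself and uses Singapore’s 17% headline corporate tax rate; the scheme’s actual mechanics may differ, so treat the figures as illustrative only.

```python
def ai_tax_saving(expenditure_sgd: float,
                  deduction_rate: float = 4.0,    # 400% enhanced deduction
                  deduction_cap: float = 50_000,  # assumed cap on the deduction itself
                  tax_rate: float = 0.17) -> float:
    """Tax saved from the enhanced deduction, under the assumptions above."""
    deduction = min(expenditure_sgd * deduction_rate, deduction_cap)
    return deduction * tax_rate

# Under these assumptions, a S$10,000 AI project generates a S$40,000
# deduction, saving about S$6,800 in tax; spending beyond S$12,500
# hits the assumed cap, so larger projects save at most about S$8,500.
```

The shape of the instrument is the point: the benefit is steep for small outlays and flat beyond the cap, which is what targets it at SME adoption rather than large vanity implementations.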

Governance: The Most Underrated Layer of Sovereignty

Of all the layers, governance may be where Singapore’s sovereignty strategy is most original. The AI Verify testing framework and Project Moonshot—one of the world’s first LLM evaluation toolkits—represent Singapore’s bid to become a global standard-setter rather than a standard-taker in AI governance.

This matters strategically. Nations that can shape international AI norms wield influence disproportionate to their size. Singapore’s active participation in the Global Partnership on AI (GPAI), its US-Singapore Critical and Emerging Technology Dialogue, and its contributions to the UN High-Level Advisory Body on AI have established it as a trusted interlocutor across geopolitical divides—a position that larger powers, constrained by rivalry, cannot easily occupy.

The newly formed National AI Council, chaired by PM Wong himself and spanning six ministries plus private sector representatives, is designed to ensure that this whole-of-stack strategy is coordinated from the top. As Intracorp Asia noted: Singapore is aiming to make AI “a practical instrument of competitiveness, not a slogan.”

Comparative Lessons: Switzerland, Estonia, and the Limits of the Singapore Model

Singapore is not the only small state grappling intelligently with AI sovereignty. Switzerland has leveraged its neutrality and institutional quality to attract international AI governance bodies and frontier AI research (EPFL’s contributions to open-source AI are globally significant). Estonia, with its pioneering digital government infrastructure, has demonstrated that sovereignty in the application layer can be achieved independently of frontier model capabilities—its X-Road data exchange platform remains one of the most sophisticated sovereignty-preserving digital architectures in the world.

But Singapore’s approach has features that distinguish it from both. Unlike Switzerland, it is operating in a geopolitically contested neighborhood—ASEAN sits at the intersection of US-China strategic competition in ways that Europe does not. Unlike Estonia, it is an economic hub rather than a digital governance laboratory, which means its AI strategy must simultaneously serve commercial competitiveness, national security, and regional influence.

Singapore’s “balanced posture”—maintaining deep technology partnerships with American hyperscalers and defence partners while refusing to shut out Chinese technology firms entirely, and building Southeast Asian-specific capabilities that serve neither Washington nor Beijing’s AI agenda exclusively—is inherently fragile. It requires constant diplomatic management and a credibility that is earned, not inherited.

The risk, as geopolitical tensions intensify, is that this balance becomes harder to maintain. US export controls on advanced semiconductors, Chinese pressure on supply chains, and the broader de-globalization of AI infrastructure all create pressure on small states to pick sides. Singapore’s answer, at least for now, is to make itself too valuable as a neutral hub to be squeezed out entirely.

Economic and Geopolitical Implications: Agency Without Illusions

What does Singapore’s model mean in practice for its economic competitiveness and global influence?

On the economic side, the gains are potentially substantial. Singapore’s generative AI market is forecast to grow at over 46% annually through 2030, reaching US$5 billion. The NAIRD Plan’s investment in applied AI across nine priority sectors—from climate modelling to drug discovery—positions Singapore to capture high-value economic activities at the frontier of what AI can do. The AI Park at One-North, announced in Budget 2026, is designed as a physical ecosystem where startups, research institutions, and multinationals can co-develop applications—a model of deliberate clustering that Singapore has used successfully in biomedical sciences and fintech.

On the geopolitical side, Singapore’s influence will be felt most through standard-setting and norm entrepreneurship. If AI Verify and Project Moonshot achieve international adoption—particularly across ASEAN and the Global South, where governance capacity is weakest—Singapore will have shaped AI deployment practices for a significant portion of the world’s population. This is soft power of a meaningful kind: not projecting values through cultural influence, but building technical infrastructure that embeds particular governance choices.

The risks are real too. Concentration of AI infrastructure in the hands of a handful of global hyperscalers—most of them American—creates a form of dependency that no partnership agreement fully resolves. Singapore’s cloud compute partnerships come with terms of service, export compliance requirements, and geopolitical conditions that are ultimately set elsewhere. And the race to attract AI investment means competing with much larger jurisdictions—Saudi Arabia, the UAE, India—that can offer cheaper power, larger data markets, and, in some cases, fewer regulatory constraints.

Singapore’s edge in this competition is not scale; it is quality: of institutions, of rule of law, of talent density, and of the kind of trustworthiness that makes sensitive AI deployments in finance, healthcare, and government feel safe. That edge is real, but it requires constant investment to maintain.

Conclusion: Agency Over Autarky—A Model for the World

The New Delhi Declaration’s endorsement by 88 nations, including Singapore, reflects a genuine global desire for a different kind of AI future—one not defined purely by the strategic competition of the two superpowers. But declarations are not strategies. The gap between aspiring to AI sovereignty and achieving meaningful AI agency is where most nations will struggle.

Singapore’s approach suggests a more useful framework for small states confronting this challenge. The core insight is that sovereignty is not a binary condition—you either have it or you don’t—but a portfolio of strategic postures calibrated to each layer of the AI stack. You defend your sovereignty where the risks of dependency are highest (sensitive data, critical applications, governance norms). You embrace interdependence where the gains from collaboration outweigh the risks (frontier compute, foundation models, global research). And you invest relentlessly in the institutional quality that makes your choices credible to partners and rivals alike.

For policymakers in small and medium-sized economies—from Nairobi to Bogotá, from Tallinn to Kuala Lumpur—Singapore’s model offers not a blueprint to copy but a logic to adapt. The question is not whether your country can achieve AI self-sufficiency. It almost certainly cannot. The question is whether you have the institutional coherence, the diplomatic agility, and the strategic clarity to make AI work for you on your own terms.

That is what sovereignty actually requires. Not the biggest model. Not the most chips. But the wisdom to know which choices are yours to make, and the capacity to make them well.


Discover more from The Economy

Subscribe to get the latest posts sent to your email.



China’s Cheap AI Is Designed to Hook the World on Its Tech



How China’s low-cost AI models—10 to 20 times cheaper than US equivalents—are quietly building global tech dependence, reshaping the AI race, and challenging American dominance.

In late February 2026, ByteDance unveiled Seedance 2.0, a video-generation model so capable—and so strikingly inexpensive—that it sent tremors through Silicon Valley boardrooms. The timing was no accident. Within days, Anthropic filed a legal complaint alleging that a Chinese national had systematically harvested outputs from Claude to train a rival model, a practice known in the industry as “distillation.” The accusation crystallized what many AI executives had quietly been saying for months: China is not simply competing in artificial intelligence. It is running a fundamentally different play.

The strategy is elegant in its ruthlessness. While American frontier labs—OpenAI, Google DeepMind, Anthropic—compete on the technological frontier, racing to build the most powerful and most expensive models imaginable, China’s leading AI developers are racing in the opposite direction. They are making AI astonishingly cheap, broadly accessible, and deeply entangled in the infrastructure of developing economies. Understanding how cheap AI tools from China compare to American frontier models is not merely a technology question. It is a question about who writes the rules of the next era of the global economy.

| Metric | Figure |
| --- | --- |
| Chinese AI global market share, late 2025 | 15% (up from 1% in 2023) |
| Cost advantage vs. US equivalents | Up to 20× cheaper |
| Alibaba AI investment commitment through 2027 | $53 billion |

The Sputnik Moment That Changed Everything

When DeepSeek released its R1 reasoning model in January 2025, the reaction in Washington was somewhere between bewilderment and alarm. US officials, accustomed to treating American AI supremacy as a structural given, struggled to explain how a Chinese startup—operating under heavy export restrictions that denied it access to Nvidia’s most advanced chips—had produced a model that matched, or in certain benchmarks exceeded, OpenAI’s o1. Reuters (2025) described the release as “a wake-up call for the US tech industry.”

The label that stuck was borrowed from Cold War history. Investors, policymakers, and researchers began calling DeepSeek’s R1 “a Sputnik moment”—a demonstration that the adversary had capabilities that had been systematically underestimated. The reaction was visceral: Nvidia lost nearly $600 billion in market capitalization in a single trading session. But the deeper implication was not about one model or one company. It was about a method.

“The real disruption isn’t that China built a good model. It’s that China built a cheap model—and cheap changes everything about adoption curves, lock-in, and geopolitical leverage.”

— Senior analyst, Brookings Institution Center for Technology Innovation

DeepSeek’s R1 was trained at an estimated cost of under $6 million, a fraction of what OpenAI reportedly spent on GPT-4. The model was open-sourced, triggering an avalanche of derivative models across Southeast Asia, Latin America, and sub-Saharan Africa. The impact of low-cost Chinese AI on US dominance had moved from hypothetical to measurable. By the fourth quarter of 2025, Chinese AI models had captured approximately 15% of global market share, up from roughly 1% just two years earlier, according to estimates cited by CNBC (2025).

Five Models and Counting: The Pace Accelerates

DeepSeek was only the opening act. Within weeks, five additional significant Chinese AI models had shipped—a pace that surprised even close observers of China’s technology sector. ByteDance’s Doubao and the Seedance family of multimodal models, Alibaba’s Qwen series, Baidu’s ERNIE updates, and Tencent’s Hunyuan collectively constitute what The Economist (2025) termed China’s “AI tigers.”

American labs have pushed back hard. Anthropic’s legal complaint over distillation practices reflects a broader industry concern: that Chinese developers are not merely competing on engineering talent but systematically harvesting the intellectual output of Western models to accelerate their own. The accusation is significant because distillation—training a smaller, cheaper model on the outputs of a larger one—is not illegal in most jurisdictions, but it sits in a legal and ethical gray zone that could reshape how frontier AI outputs are licensed and protected. Chatham House (2025) has observed that the practice “blurs the line between legitimate benchmarking and intellectual property extraction at scale.”

UBS Picks Its Winners

Not all Chinese models are created equal, and sophisticated institutional actors are drawing distinctions. Analysts at UBS, in a widely circulated note from early 2026, indicated a preference for several Chinese models—specifically Alibaba’s Qwen and ByteDance’s Doubao—over DeepSeek for enterprise deployments, citing more consistent performance on structured reasoning tasks and better compliance tooling for regulated industries. The note was striking precisely because it came from a global financial institution with every incentive to avoid geopolitical controversy. The risks of dependence on Chinese AI platforms, apparently, are acceptable to some of the world’s most sophisticated institutional investors when the price differential is this large.

Key Strategic Insights

  • China’s cost advantage is structural, not temporary. Priced 10 to 20 times cheaper per API call, the gap reflects architectural innovation, lower energy costs, and in some cases state subsidy—making it durable over time.
  • Emerging markets are the primary battleground. In Indonesia, Nigeria, Brazil, and Vietnam, Chinese AI tools have penetrated developer ecosystems faster than US equivalents because local startups and governments simply cannot afford American pricing.
  • Open-sourcing is a deliberate geopolitical instrument. By releasing models under permissive licenses, Chinese developers seed global ecosystems with their architectures, creating dependency on Chinese tooling, Chinese fine-tuning expertise, and Chinese cloud infrastructure.
  • The distillation controversy signals a new phase. As US labs tighten access and output monitoring, the cat-and-mouse dynamics of knowledge extraction will intensify, potentially reshaping how AI models are licensed globally.
  • Hardware self-reliance is advancing faster than anticipated. Cambricon’s revenue surged over 200% in 2025 as domestic chip demand spiked, while Baidu’s Kunlun AI chips are now deployed across major Chinese data centers at scale.

The Comparison Table: US vs. Chinese AI

| Model | Origin | Relative API Cost | Global Reach Strategy | Open Source? | Hardware Dependency |
| --- | --- | --- | --- | --- | --- |
| OpenAI GPT-4o | 🇺🇸 US | Baseline (1×) | Enterprise, developer API; premium pricing | No | Nvidia (Azure) |
| Anthropic Claude 3.5 | 🇺🇸 US | ~0.9× | Safety-focused enterprise; selective access | No | Nvidia (AWS, GCP) |
| Google Gemini Ultra | 🇺🇸 US | ~0.85× | Google ecosystem integration; enterprise cloud | Partial (Gemma) | Google TPUs |
| DeepSeek R1 | 🇨🇳 CN | ~0.05–0.10× | Global open-source seeding; developer ecosystems | Yes | Nvidia H800 / domestic chips |
| Alibaba Qwen 2.5 | 🇨🇳 CN | ~0.07× | Emerging markets via Alibaba Cloud; multilingual | Yes | Alibaba custom silicon |
| ByteDance Doubao / Seedance | 🇨🇳 CN | ~0.06× | Consumer apps; TikTok ecosystem integration | Partial | Mixed (domestic + Nvidia) |
| Baidu ERNIE 4.0 | 🇨🇳 CN | ~0.08× | Government contracts; domestic enterprise | No | Baidu Kunlun chips |
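To see why these multipliers dominate procurement decisions in developing economies, a back-of-the-envelope sketch helps. The baseline price ($10 per million tokens) and the monthly workload (500M tokens) are illustrative assumptions, not quoted vendor prices; only the relative multipliers come from the table above.

```python
# Assumed figures for illustration only
BASELINE_PRICE_PER_M_TOKENS = 10.00  # hypothetical US frontier-model baseline, USD
MONTHLY_TOKENS_M = 500               # hypothetical startup workload: 500M tokens/month

# Relative cost multipliers from the comparison table (midpoint where a range is given)
relative_cost = {
    "GPT-4o (US baseline)": 1.00,
    "Claude 3.5": 0.90,
    "Gemini Ultra": 0.85,
    "DeepSeek R1": 0.075,
    "Qwen 2.5": 0.07,
    "Doubao": 0.06,
}

def monthly_bill(multiplier: float) -> float:
    """Monthly API spend in USD for the assumed workload."""
    return multiplier * BASELINE_PRICE_PER_M_TOKENS * MONTHLY_TOKENS_M

for model, mult in relative_cost.items():
    print(f"{model:<22} ${monthly_bill(mult):>9,.2f}/month")
```

At these assumed volumes, the US baseline runs to $5,000 a month while the Chinese models cluster around a few hundred dollars: a difference that is negligible for a well-capitalized Western enterprise and decisive for a startup in Lagos or Jakarta.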

Winning the Hardware War From Behind

No analysis of how China’s cheap AI is creating global tech dependence is complete without confronting the chip question. The Biden and Trump administrations’ export controls—restricting Nvidia’s H100, A100, and subsequent architectures from reaching Chinese buyers—were designed to create a permanent computational ceiling. The assumption was that frontier AI requires frontier silicon, and frontier silicon would remain American. That assumption is under sustained pressure.

Huawei’s Atlas 950 AI training cluster, unveiled in late 2025, represents the most credible challenge yet to Nvidia’s dominance in the Chinese market. Built around Huawei’s Ascend 910C processor, the cluster offers training performance that analysts at the Financial Times (2025) described as “approaching, though not yet matching, Nvidia’s H100 at scale.” More telling is the trajectory. Cambricon Technologies, China’s leading AI chip specialist, reported revenue growth exceeding 200% in fiscal 2025 as domestic AI developers pivoted aggressively to domestic silicon under regulatory pressure and patriotic procurement directives.

Baidu’s Kunlun chip line, meanwhile, is now powering a significant share of the company’s own inference workloads—reducing dependence on imported hardware at the exact moment when US export restrictions are tightening. China’s AI strategy for becoming an economic superpower is not predicated on surpassing American chip technology in the near term. It is predicated on becoming self-sufficient enough to sustain its cost advantage while US competitors remain anchored to expensive, constrained silicon supply chains. Brookings (2025) has noted that “China’s domestic chip ecosystem has advanced by at least two to three years relative to projections made in 2022.”

The Emerging Market Gambit

Silicon Valley’s pricing model was always implicitly designed for Silicon Valley’s clients: well-capitalized Western enterprises with robust cloud budgets and tolerance for compliance complexity. The rest of the world—which is to say, most of the world—was an afterthought. Chinese AI developers recognized this gap and moved into it with precision.

In Vietnam, government agencies have begun piloting Alibaba’s Qwen models for document processing and citizen services, drawn by price points that make comparable US offerings economically untenable for a developing-economy public sector. In Nigeria, startup accelerators report that the majority of AI-native companies in their cohorts are building on Chinese model APIs—not out of ideological preference but because the economics are simply not comparable. Indonesian developers have contributed tens of thousands of fine-tuned model variants to open-source repositories built on DeepSeek and Qwen foundations, creating exactly the kind of community lock-in that platform companies spend billions trying to manufacture.

The implications for tech sovereignty are profound and troubling. As Chatham House (2025) argues, when a country’s critical AI infrastructure is built on a foreign model’s weights, architecture, and increasingly its cloud services, the notion of digital sovereignty becomes largely theoretical. Data flows toward Chinese servers. Fine-tuning expertise clusters around Chinese tooling ecosystems. Regulatory leverage accrues to Beijing.

“Ubiquity is more powerful than superiority. The question is not which AI is best—it is which AI is everywhere.”

— Stanford HAI, AI Index Report 2025

Alibaba’s $53 Billion Signal

If there was any residual doubt about the strategic ambition behind China’s AI push, Alibaba’s announcement of a $53 billion AI investment commitment through 2027 should have resolved it. The scale dwarfs most national AI strategies and rivals the combined R&D budgets of several major US technology companies. Critically, the investment is not concentrated in a single prestige project. It is spread across cloud infrastructure, model development, developer tooling, international data centers, and—pointedly—subsidized access programs for emerging-market customers.

This is the architecture of dependency, built deliberately. Offer cheap access. Embed your tools in critical workflows. Build the developer community on your frameworks. Then, when the switching costs are high enough and the alternatives have atrophied from neglect, the pricing conversation changes. It is the playbook that Amazon ran with AWS, that Google ran with Search, and that Microsoft ran with Office—now being executed at geopolitical scale by a state-aligned corporate champion with essentially unlimited political backing. Forbes (2025) characterized the investment as “less a corporate bet than a national infrastructure program wearing a corporate uniform.”

Is China Winning the AI Race?

The question is, in one sense, the wrong question. “Winning” implies a finish line, a moment when one competitor’s supremacy is declared and ratified. Technological competition does not work that way, and the AI race least of all. What China is doing is more subtle and, in the long run, potentially more consequential: it is restructuring the terms of global AI participation in ways that favor Chinese platforms, Chinese architectures, and Chinese geopolitical interests.

On pure technical capability, American frontier labs retain meaningful advantages at the absolute cutting edge. OpenAI’s reasoning models, Google’s multimodal systems, and Anthropic’s safety-focused architectures represent genuine innovations that Chinese competitors are still working to match. The New York Times (2025) noted that US models continue to lead on complex multi-step reasoning and long-context tasks by measurable margins. But capability at the frontier matters far less than capability at the median—at the price point, integration depth, and ecosystem richness that determine what the world actually uses.

China is winning that race. Not through theft or brute force, though allegations of distillation practices suggest the competitive lines are not always clean, but through a coherent, patient, and strategically sophisticated campaign to make Chinese AI the default choice for a world that cannot afford American alternatives. The risks of dependence on Chinese AI platforms—data sovereignty concerns, potential for access interruption under geopolitical pressure, embedded architectural assumptions that may encode specific values—are real and documented. They are also, increasingly, being accepted as the price of access by a world that Western AI pricing has effectively priced out.

History suggests that the technology that becomes ubiquitous becomes infrastructure, and infrastructure becomes power. China’s AI developers have understood this clearly. The rest of the world is just beginning to reckon with what it means.




What a Chocolate Company Can Tell Us About OpenAI’s Risks: Hershey’s Legacy and the AI Giant’s Charitable Gamble


The parallels between Milton Hershey’s century-old trust and OpenAI’s restructuring reveal uncomfortable truths about power, philanthropy, and the future of artificial intelligence governance.

In 2002, the board of the Hershey Trust quietly floated a plan that would have upended a century of carefully constructed philanthropy. They proposed selling the Hershey Company—the chocolate empire—to Wrigley or Nestlé for somewhere north of $12 billion. The proceeds would have theoretically enriched the Milton Hershey School, the boarding school for low-income children that the company’s founder had dedicated his fortune to sustaining. It was, on paper, an act of fiscal prudence. In practice, it was a near-catastrophe—one that Pennsylvania’s attorney general halted amid public outcry, conflict-of-interest investigations, and the uncomfortable revelation that some trust board members had rather too many ties to the acquiring parties.

The deal collapsed. But the architecture that made such a maneuver possible—a charitable trust wielding near-absolute voting control over a publicly traded company, insulated from traditional accountability structures—never changed.

Fast forward two decades, and a strikingly similar structure is taking shape at the frontier of artificial intelligence. OpenAI’s 2025 restructuring into a Public Benefit Corporation, with a newly formed OpenAI Foundation holding approximately 26% of equity in a company now valued at roughly $130 billion, has drawn comparisons from governance scholars, philanthropic historians, and antitrust economists alike. The comparison with Hershey’s structure is not merely rhetorical—it is, structurally and legally, one of the most instructive precedents available to anyone trying to understand where this gamble leads.

The Hershey Precedent: A Century of Sweet Success and Bitter Disputes

Milton Hershey was not a villain. He was, by most accounts, a genuinely idealistic industrialist who built a company town in rural Pennsylvania, provided workers with housing, schools, and parks, and then—with no children of his own—donated the bulk of his fortune to a trust that would fund the Milton Hershey School in perpetuity. When he died in 1945, the trust he established owned the majority of Hershey Foods Corporation stock. That arrangement was grandfathered under the 1969 Tax Reform Act, which capped charitable foundation holdings in for-profit companies at 20% for new entities—but allowed existing arrangements to stand.

The result, still operative today: the Hershey Trust controls roughly 80% of Hershey’s voting power while holding approximately $23 billion in assets. It is one of the most concentrated governance arrangements in American corporate history. And it has produced, over the decades, a remarkable catalogue of governance pathologies—self-perpetuating boards, lavish trustee compensation, conflicts of interest, and the periodic temptation to treat a $23 billion asset base as something other than a charitable instrument.

The 2002 sale attempt was the most dramatic episode, but hardly the only one. Pennsylvania’s attorney general has intervened repeatedly. A 2016 investigation found board members had approved millions in questionable real estate transactions. Trustees have cycled in and out amid ethics violations. And yet the fundamental structure—concentrated voting control in a charitable entity, largely exempt from the market discipline that shapes ordinary corporations—persists.

This is the template against which OpenAI’s new architecture deserves to be measured.

OpenAI’s Charitable Gamble: Anatomy of the New Structure

When Sam Altman and the OpenAI board announced the company’s transition to a capped-profit and then Public Benefit Corporation model, they framed it as a solution to a genuine tension: how do you raise the capital required to develop artificial general intelligence—measured in the tens of billions—while maintaining a mission ostensibly oriented toward humanity rather than shareholders?

The answer they arrived at is, structurally, closer to Hershey than to Google. Under the restructured arrangement, the OpenAI Foundation holds approximately 26% equity in OpenAI PBC at the company’s current ~$130 billion valuation—a stake worth roughly $34 billion, which by asset size would rank it among the largest charitable endowments in the world. Microsoft retains approximately 27% equity. Altman and employees hold the remainder under various compensation and vesting structures.

The Foundation’s stated mandate is to direct resources toward health, education, and AI resilience philanthropy—a mission broad enough to accommodate almost any expenditure. Crucially, as California Attorney General Rob Bonta’s 2025 concessions made clear, the restructuring required commitments around safety and asset protection, but the precise mechanisms for enforcing those commitments remain opaque. Bonta’s office won language requiring that charitable assets not be diverted for commercial benefit—a standard that sounds robust until you consider how difficult it is to operationalize when the “charitable” entity controls the commercial enterprise.

The risks embedded in this structure are not hypothetical. They are legible from history.

The Governance Gap: Where Philanthropy Ends and Power Begins

Feature | Hershey Trust | OpenAI Foundation
------- | ------------- | -----------------
Equity stake | ~80% voting control | ~26% equity (~$34B)
Total assets | ~$23B | ~$34B (at current valuation)
Regulatory exemption | Grandfathered under the 1969 Tax Reform Act | California AG concessions (2025)
Oversight body | Pennsylvania AG | California AG + FTC (emerging)
Primary beneficiary | Milton Hershey School | Health, education, AI resilience
Board independence | Recurring conflicts of interest | Overlapping board memberships
Market accountability | Partial (listed company) | Limited (PBC structure)

The comparison table above reveals a foundational asymmetry. Hershey, for all its governance problems, operates within a framework where the underlying company is publicly listed, analysts scrutinize quarterly earnings, and the attorney general of Pennsylvania has decades of institutional practice monitoring the trust. OpenAI is a private company. Its Foundation’s equity is illiquid. Its valuation is determined by private funding rounds, not public markets. And the regulatory apparatus designed to oversee it is, bluntly, improvising.

Critics have been vocal. The Midas Project, a nonprofit focused on AI accountability, has argued that the AI governance nonprofit model OpenAI has constructed creates precisely the conditions for what they term “mission drift under incentive pressure”—a dynamic where the commercial imperatives of a $130 billion company gradually subordinate the charitable mandate of its controlling foundation. This is not speculation; it is the documented history of every large charitable trust that has ever governed a commercially valuable enterprise.

Bret Taylor, OpenAI’s board chair, has offered the counter-argument: that the Foundation structure provides a durable check against pure profit maximization, creating legally enforceable obligations that a traditional corporation could simply disclaim. In an era where AI companies face pressure to ship products faster than safety research can validate them, Taylor argues, structural constraints matter.

Both positions contain truth. The question is which force—structural obligation or commercial gravity—proves stronger over the decade ahead.

Economic Modeling the Downside: The $250 Billion Question

What does it actually cost if the charitable mission is subordinated to commercial interests? The figure is not immaterial.

The Foundation’s equity stake, at current valuation, represents approximately $34 billion in charitable assets. If OpenAI achieves the kind of transformative commercial success its investors are pricing in—scenarios in which AGI-adjacent systems generate trillions in economic value—the Foundation’s stake could appreciate dramatically. Some economists modeling AI’s macroeconomic impact have suggested transformative AI could contribute $15–25 trillion to global GDP by 2035. Even a modest fraction of that value flowing through a properly governed charitable structure would represent an unprecedented philanthropic resource.

But the Hershey precedent suggests the gap between potential and realized charitable value can be enormous. Scholars at HistPhil.org, who have tracked the Hershey comparison in detail, estimate that governance failures at large charitable trusts have historically diverted between 15% and 40% of potential charitable value toward administrative costs, trustee enrichment, and mission-misaligned expenditure. Applied to OpenAI’s trajectory, that range implies a potential public value loss exceeding $250 billion over a 20-year horizon—larger than the annual GDP of many mid-sized economies.
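For concreteness, the implied arithmetic can be sketched. This is a minimal illustration, not the article’s own model: the 16% annual growth rate is an assumed figure, chosen only to show how a $34 billion stake and the historical 15–40% diversion band can produce a loss estimate in the $250 billion range over 20 years.

```python
# Illustrative sketch: how a diversion band applied to an appreciating
# equity stake yields the article's ballpark figure. The 16% annual
# growth rate is an assumption for illustration only; the 15%/40% band
# is the historical range of diverted charitable value cited above.

def diversion_loss_range(stake, annual_growth, years, low=0.15, high=0.40):
    """Return (terminal stake value, low-end loss, high-end loss) in dollars."""
    terminal = stake * (1 + annual_growth) ** years
    return terminal, terminal * low, terminal * high

terminal, low_loss, high_loss = diversion_loss_range(34e9, 0.16, 20)
print(f"terminal stake value: ${terminal / 1e9:,.0f}B")
print(f"potential diverted value: ${low_loss / 1e9:,.0f}B to ${high_loss / 1e9:,.0f}B")
```

Under these assumptions the high end of the band lands just above $250 billion; different growth assumptions move the band substantially, which is precisely why the enforcement question matters.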

This is why the regulatory dimension matters so profoundly.

The Regulatory Frontier: U.S. vs. EU Approaches to AI Charity

American nonprofit law was not designed for entities like OpenAI. The legal scaffolding governing charitable trusts—built incrementally from the 1969 Tax Reform Act through various state attorney general statutes—assumes a relatively stable enterprise with predictable revenue streams and defined charitable outputs. OpenAI is none of these things. It operates at the intersection of defense contracting, consumer software, and scientific research, in a market where the underlying technology is evolving faster than any regulatory framework can track.

The European Union’s approach, by contrast, builds AI governance into product and deployment regulation rather than entity structure. The EU AI Act, fully operative by 2026, imposes obligations on AI systems regardless of the corporate form of their developers. A Public Benefit Corporation operating in Europe faces the same high-risk AI obligations as a shareholder-maximizing competitor. This structural neutrality has advantages: it prevents regulatory arbitrage where companies adopt charitable structures primarily to access regulatory goodwill.

The divergence creates a genuine cross-border governance problem. A company structured to satisfy California’s attorney general may simultaneously face EU compliance requirements that presuppose entirely different accountability mechanisms. For international researchers tracking the challenges of AI philanthropy and the public-interest governance of AGI, this regulatory patchwork is arguably the most consequential design problem of the next decade.

What History’s Verdict on Hershey Actually Says

It would be unfair—and inaccurate—to characterize the Hershey Trust as a failure. The Milton Hershey School today serves approximately 2,200 students annually, providing free education, housing, and healthcare to children from low-income families. That outcome is real, durable, and directly attributable to the trust structure Milton Hershey designed. The governance pathologies that have periodically afflicted the trust have not, ultimately, destroyed its mission.

But this is precisely the danger of using Hershey as a template for optimism. The trust survived its governance crises because Pennsylvania’s attorney general had clear jurisdictional authority, because the Hershey Company’s public listing created external accountability, and because the charitable mission was concrete enough to defend in court. Educating low-income children is an unambiguous charitable purpose. “Ensuring that artificial general intelligence benefits all of humanity” is not.

To its architects, the vagueness of OpenAI’s charitable mandate is a feature: it provides flexibility to pursue the company’s evolving commercial and research agenda under a philanthropic umbrella. To governance scholars, it is a vulnerability. Vague mandates are harder to enforce, easier to reinterpret, and more susceptible to capture by the very commercial interests they nominally constrain. As Vox’s analysis of the nonprofit-to-PBC transition noted, the devil is almost always in the enforcement mechanism, not the stated mission.

The Forward View: What Investors and Policymakers Must Demand

The risks embedded in OpenAI’s public benefit corporation structure are not an argument against that structure’s existence. They are an argument for the kind of rigorous, institutionalized oversight that the structure currently lacks.

What would adequate governance look like? At minimum, it would require independent audit of the Foundation’s charitable expenditures by bodies with no commercial relationship to OpenAI. It would require clear, justiciable standards for what constitutes mission-aligned versus mission-diverting Foundation activity. It would require mandatory disclosure of board member relationships—commercial, financial, and social—with OpenAI PBC. And it would require international coordination between U.S. state attorneys general and EU regulatory bodies to prevent jurisdictional arbitrage.

None of these mechanisms currently exist in robust form. The California AG’s 2025 concessions are a beginning, not an architecture.

For AI investors, the governance question is increasingly a financial one. Companies operating under poorly structured philanthropic control have historically underperformed market expectations when governance conflicts surface—as Hershey’s periodic crises have demonstrated. For policymakers in Washington, Brussels, and beyond, the OpenAI model represents either a template for responsible AI development or a cautionary tale in the making. Which it becomes depends almost entirely on decisions made in the next three to five years, before the company’s commercial scale makes course correction prohibitively difficult.

Milton Hershey built something remarkable and something flawed in the same gesture. A century later, those flaws are still being litigated. The architects of OpenAI’s charitable gamble would do well to study that inheritance—not for reassurance, but for warning.



Analysis

Jeff Bezos’s $30 Billion AI Startup Is Quietly Buying the Industrial World

Jeff Bezos’s Project Prometheus raised $6.2B at a $30B valuation and now seeks tens of billions more to acquire AI-disrupted manufacturers. Here’s why it matters.

It started, as the most consequential stories often do, not with a press release but with a whisper. In late 2025, word quietly leaked from Silicon Valley’s most guarded corridors that Jeff Bezos—the man who once upended retail, logistics, and cloud computing—had incubated a new venture so ambitious it made Amazon look like a pilot project. Its name: Project Prometheus. Its mission: to buy the industrial companies that artificial intelligence is destroying, and rebuild them from the inside out.

Now, as of February 2026, that whisper has become a roar. The startup—already valued at $30 billion after raising $6.2 billion in a landmark late-2025 funding round—is in active talks with Abu Dhabi sovereign wealth funds and JPMorgan Chase to raise what sources familiar with the negotiations describe as “tens of billions” more. The purpose? A systematic, large-scale acquisition of companies across manufacturing, aerospace, computers, and automobiles that have been destabilized by the AI revolution they didn’t see coming.

This is not just another tech story. This is a story about who owns the future of physical labor, industrial infrastructure, and the global supply chain.


What Exactly Is Project Prometheus?

When The New York Times first revealed the existence of Project Prometheus, the details were sparse but electric: a Bezos-backed venture targeting the physical economy with AI tools designed not for screens, but for factory floors, jet engines, and automotive assembly lines.

What has since emerged paints a far more detailed picture. At its operational core, Project Prometheus is structured as a “manufacturing transformation vehicle”—an entity that combines private equity acquisition logic with frontier AI deployment capabilities. Unlike a traditional buyout firm, it doesn’t merely acquire distressed assets and optimize balance sheets. It embeds AI systems directly into a target company’s engineering and production processes, aiming to extract efficiencies, automate key workflows, and reposition legacy industrial players as AI-native competitors.

Leading the venture alongside Bezos is Vikram Bajaj, who serves as co-CEO—a pairing that blends Bezos’s unmatched capital-deployment instincts with Bajaj’s deep background in applied engineering and operational transformation. As reported by the Financial Times, the startup’s talent pipeline reflects its ambitions: engineers and researchers have been systematically recruited from Meta’s AI division, OpenAI, and DeepMind, assembling what insiders describe as one of the most concentrated collections of applied AI talent operating outside the established big-tech ecosystem.

The company has also made notable acquisitions in the AI tooling space. Wired reported on the acquisition of General Agents, a startup specializing in autonomous AI agents capable of executing complex, multi-step industrial tasks—a signal that Project Prometheus intends to bring genuine autonomous decision-making to the physical world, not just the digital one.

The AI Disruption Dividend: Why Industrial Companies Are Vulnerable

To understand what Bezos is buying, you have to understand what’s being broken.

The last five years have seen artificial intelligence move from a back-office efficiency tool to an existential competitive variable in physical industry. Companies in aerospace manufacturing, precision engineering, automobile production, and industrial computing now face a brutal paradox: the AI tools that could modernize their operations require capital expenditures, talent, and organizational transformation that most incumbents—many saddled with legacy cost structures and aging workforces—simply cannot self-fund at the speed the market demands.

The result is a growing class of what economists are beginning to call “AI-disrupted industrials”: fundamentally sound companies with valuable physical assets, established customer relationships, and critical supply chain positions, but lacking the technological agility to compete in an AI-accelerated market. Their valuations have compressed. Their boards are anxious. Their options are narrowing.

This is precisely the window Project Prometheus is engineered to exploit.

By pairing frontier AI capabilities with the kind of patient, large-scale capital that only sovereign wealth funds and bulge-bracket banks can mobilize, the venture is positioned to do something no traditional private equity firm or pure-play AI startup can do alone: acquire struggling industrials at distressed valuations, deploy AI at scale within their operations, and capture the resulting productivity gains as equity upside.

It is, in essence, an arbitrage strategy—buying the gap between what these companies are worth today and what they could be worth tomorrow, if only someone with the right tools and checkbook showed up.

The Capital Stack: Abu Dhabi, JPMorgan, and the New Industrial Finance

The involvement of Abu Dhabi sovereign wealth funds in Project Prometheus’s next capital raise is significant beyond the dollar amounts involved. It signals a broader geopolitical and economic alignment: Gulf states, flush with hydrocarbon revenues and acutely aware of the need to diversify into productive assets before the energy transition accelerates, are increasingly willing to bet on AI-driven industrial transformation as a long-duration investment theme.

For Abu Dhabi’s wealth funds—which have historically favored real assets, infrastructure, and established financial instruments—backing a Bezos-led AI acquisition vehicle represents a meaningful strategic pivot. It suggests that sovereign capital is beginning to treat “AI for physical economy” as infrastructure-class investment, not speculative technology.

JPMorgan Chase’s participation in structuring and potentially participating in the raise adds another layer of institutional credibility. The bank’s involvement suggests that the deal architecture being contemplated likely includes complex leveraged financing structures—potentially combining equity from sovereign and institutional investors with debt facilities secured against the industrial assets to be acquired. This kind of blended capital stack could meaningfully amplify the acquisition firepower available to Project Prometheus, potentially enabling a portfolio of acquisitions that, in aggregate, dwarfs what the equity raise alone would support.

The arithmetic becomes staggering quickly. If Project Prometheus raises $50 billion in equity and layers on debt at a 2:1 debt-to-equity ratio across its acquisitions, it would command $150 billion in total deal capacity—enough to acquire several mid-to-large industrial conglomerates simultaneously.
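The capacity figure follows directly from the stated assumptions; a quick sketch, reading “2:1 leverage” as two dollars of debt per dollar of equity:

```python
# Total acquisition capacity from an equity base plus debt.
# "2:1 leverage" is read here as a debt-to-equity ratio of 2.0,
# matching the $50B equity -> $150B total capacity figure in the text.

def deal_capacity(equity, debt_to_equity):
    """Equity plus the debt that can be raised against it."""
    return equity * (1 + debt_to_equity)

capacity = deal_capacity(equity=50e9, debt_to_equity=2.0)
print(f"total deal capacity: ${capacity / 1e9:.0f}B")  # -> total deal capacity: $150B
```

Note that a blended stack like this also magnifies downside risk: debt secured against the acquired industrial assets must be serviced whether or not the AI transformation delivers.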

How Jeff Bezos Is Using AI to Reshape Manufacturing

To appreciate the operational model, consider a hypothetical that closely tracks what Project Prometheus appears to be building in practice.

Imagine a mid-sized aerospace components manufacturer—say, a Tier 2 supplier of precision-machined parts for commercial aviation. Pre-AI, the company’s competitive advantage rested on engineering expertise, tooling investments, and long-term customer contracts. Post-AI, those same advantages are being eroded: AI-assisted design tools are enabling competitors to produce comparable parts faster; generative manufacturing software is reducing the engineering labor content of each job; and autonomous quality inspection systems are compressing the time-to-market for new components.

Our hypothetical manufacturer, unable to afford the $200 million AI transformation program its consultants have outlined, watches its margins compress and its customer retention weaken. Its stock price—or private valuation—falls to reflect the uncertainty.

Project Prometheus acquires it. Within 18 months, the venture deploys a suite of AI tools—autonomous agents managing production scheduling, machine-learning models optimizing materials procurement, computer vision systems conducting real-time quality assurance—that would have taken the company a decade to develop independently. The manufacturer’s cost structure improves materially. Its capacity utilization rises. Its customer retention stabilizes.

This is industrial AI arbitrage at institutional scale. And if it works—if Bezos and Bajaj have correctly identified both the depth of industrial AI disruption and the transformative potential of their AI toolkit—the returns could be extraordinary.

The Ripple Effects: Supply Chains, Labor Markets, and the Ethics of AI-Driven Consolidation

No analysis of Project Prometheus would be complete without examining the broader economic consequences of what it proposes to do.

On global supply chains: The systematic AI-transformation of manufacturing companies across sectors could fundamentally alter cost structures and competitive dynamics in global supply chains. If AI-transformed industrials can produce goods more cheaply and reliably than their non-transformed competitors, the resulting competitive pressure will accelerate consolidation across entire manufacturing sectors. The geographic implications are significant: lower-cost-labor countries that have historically competed on wage arbitrage may find that cost advantage eroded if AI enables comparable productivity at higher-wage locations.

On labor markets: The question of what happens to workers at AI-transformed industrial companies is both urgent and contested. Proponents argue that AI augments rather than replaces workers, enabling human employees to focus on higher-value tasks while AI handles repetitive processes. Skeptics—including economists at institutions like MIT’s Work of the Future task force—argue that the productivity gains from industrial AI will, in practice, translate into workforce reduction at the companies where it is deployed, at least in the medium term. Project Prometheus’s acquisition model will inevitably surface this tension in concrete, visible ways.

On competitive ethics and market power: There is a harder question lurking beneath the capital raises and talent hires. If a single Bezos-backed vehicle acquires a significant swath of AI-disrupted industrial companies across sectors, it will accumulate substantial market power across multiple industries simultaneously. Antitrust regulators in the United States, European Union, and elsewhere are already scrutinizing big tech’s expansion into adjacent markets. The question of whether an AI-powered industrial conglomerate assembled through distressed acquisitions raises similar concentration concerns will inevitably reach regulators’ desks.

The Prometheus Paradox: Disrupting the Disruptor

There is an elegant and slightly unsettling irony at the heart of Project Prometheus. The AI tools that Bezos’s venture deploys to transform industrial companies are, in many ways, the same tools—or close cousins of them—that created the disruption those companies are struggling with in the first place.

Prometheus, in Greek mythology, stole fire from the gods and gave it to humanity. Bezos, characteristically, appears to be doing something slightly different: acquiring the humans already scorched by the fire, and teaching them—for equity—to wield it themselves.

Whether this is industrial philanthropy, ruthless capitalism, or some complex admixture of both is a question the market will take years to answer. What is already clear is that the venture reflects a bet of staggering confidence: that AI’s disruption of physical industry is not a temporary dislocation but a permanent structural shift, and that the companies best positioned to profit from that shift are those willing to own both the AI and the industry it is transforming.

Key Takeaways at a Glance

  • Project Prometheus raised $6.2 billion in late 2025 at a $30 billion valuation, making it one of the largest AI startup raises in history.
  • The startup is co-led by Jeff Bezos and Vikram Bajaj and has recruited aggressively from OpenAI, Meta, and DeepMind.
  • It targets AI-disrupted companies in manufacturing, aerospace, computers, and automobiles for acquisition and transformation.
  • Current capital raise talks involve Abu Dhabi sovereign wealth funds and JPMorgan, potentially mobilizing tens of billions in acquisition firepower.
  • The venture’s acquisition of General Agents signals intent to deploy autonomous AI systems in physical industrial environments.
  • Broader economic implications span global supply chains, labor market displacement, and emerging antitrust concerns.

Looking Ahead: The Industrial AI Revolution Has a Name

The industrial AI revolution has been discussed in academic papers, OECD reports, and McKinsey decks for the better part of a decade. What Project Prometheus represents is something qualitatively different: the moment that revolution acquires capital, management, and strategic intent on a scale commensurate with the challenge.

Whether Bezos succeeds in his bet on the physical economy will tell us something profound about the limits—and possibilities—of AI as an economic transformation engine. If Project Prometheus delivers on its promise, it will reshape global manufacturing supply chains, redefine the competitive landscape of industrial companies, and generate returns that make the Amazon IPO look modest by comparison. If it stumbles, it will offer an equally valuable lesson: that the gap between AI’s laboratory promise and its factory-floor reality is wider than even the most well-capitalized optimists anticipated.

Either way, the industrial world will not look the same on the other side.


Sources & Citations:

  1. The New York Times — Original Project Prometheus Reveal
  2. Financial Times — Project Prometheus Funding & Acquisition Strategy
  3. Wired — General Agents Acquisition Coverage
  4. Yahoo Finance — Project Prometheus $6.2B Funding Round
  5. MIT Work of the Future — AI and Labor Markets
  6. OECD — Global Industrial AI Policy
  7. Wikipedia — Jeff Bezos Background


Copyright © 2025 The Economy, Inc. All rights reserved.
