
What a Chocolate Company Can Tell Us About OpenAI’s Risks: Hershey’s Legacy and the AI Giant’s Charitable Gamble


The parallels between Milton Hershey’s century-old trust and OpenAI’s restructuring reveal uncomfortable truths about power, philanthropy, and the future of artificial intelligence governance.

In 2002, the board of the Hershey Trust quietly floated a plan that would have upended a century of carefully constructed philanthropy. They proposed selling the Hershey Company—the chocolate empire—to Wrigley or Nestlé for somewhere north of $12 billion. The proceeds would have theoretically enriched the Milton Hershey School, the boarding school for low-income children that the company’s founder had dedicated his fortune to sustaining. It was, on paper, an act of fiscal prudence. In practice, it was a near-catastrophe—one that Pennsylvania’s attorney general halted amid public outcry, conflict-of-interest investigations, and the uncomfortable revelation that some trust board members had rather too many ties to the acquiring parties.

The deal collapsed. But the architecture that made such a maneuver possible—a charitable trust wielding near-absolute voting control over a publicly traded company, insulated from traditional accountability structures—never changed.

Fast forward two decades, and a strikingly similar structure is taking shape at the frontier of artificial intelligence. OpenAI’s 2025 restructuring into a Public Benefit Corporation, with a newly formed OpenAI Foundation holding approximately 26% of equity in a company now valued at roughly $130 billion, has drawn comparisons from governance scholars, philanthropic historians, and antitrust economists alike. The OpenAI Hershey structure comparison is not merely rhetorical—it is, structurally and legally, one of the most instructive precedents available to anyone trying to understand where this gamble leads.

The Hershey Precedent: A Century of Sweet Success and Bitter Disputes

Milton Hershey was not a villain. He was, by most accounts, a genuinely idealistic industrialist who built a company town in rural Pennsylvania, provided workers with housing, schools, and parks, and then—with no children of his own—donated the bulk of his fortune to a trust that would fund the Milton Hershey School in perpetuity. When he died in 1945, the trust he established owned the majority of Hershey Foods Corporation stock. That arrangement was grandfathered under the 1969 Tax Reform Act, which capped charitable foundation holdings in for-profit companies at 20% for new entities—but allowed existing arrangements to stand.

The result, still operative today: the Hershey Trust controls roughly 80% of Hershey’s voting power while holding approximately $23 billion in assets. It is one of the most concentrated governance arrangements in American corporate history. And it has produced, over the decades, a remarkable catalogue of governance pathologies—self-perpetuating boards, lavish trustee compensation, conflicts of interest, and the periodic temptation to treat a $23 billion asset base as something other than a charitable instrument.

The 2002 sale attempt was the most dramatic episode, but hardly the only one. Pennsylvania’s attorney general has intervened repeatedly. A 2016 investigation found board members had approved millions in questionable real estate transactions. Trustees have cycled in and out amid ethics violations. And yet the fundamental structure—concentrated voting control in a charitable entity, largely exempt from the market discipline that shapes ordinary corporations—persists.

This is the template against which OpenAI’s new architecture deserves to be measured.

OpenAI’s Charitable Gamble: Anatomy of the New Structure

When Sam Altman and the OpenAI board announced the company’s transition to a capped-profit and then Public Benefit Corporation model, they framed it as a solution to a genuine tension: how do you raise the capital required to develop artificial general intelligence—measured in the tens of billions—while maintaining a mission ostensibly oriented toward humanity rather than shareholders?

The answer they arrived at is, structurally, closer to Hershey than to Google. Under the restructured arrangement, the OpenAI Foundation holds approximately 26% equity in OpenAI PBC at the company’s current ~$130 billion valuation—making it, by asset size, larger than the Gates Foundation, which manages roughly $70 billion. Microsoft retains approximately 27% equity. Altman and employees hold the remainder under various compensation and vesting structures.

The Foundation’s stated mandate is to direct resources toward health, education, and AI resilience philanthropy—a mission broad enough to accommodate almost any expenditure. Crucially, as California Attorney General Rob Bonta’s 2025 concessions made clear, the restructuring required commitments around safety and asset protection, but the precise mechanisms for enforcing those commitments remain opaque. Bonta’s office won language requiring that charitable assets not be diverted for commercial benefit—a standard that sounds robust until you consider how difficult it is to operationalize when the “charitable” entity is the commercial enterprise.

The OpenAI charitable risks embedded in this structure are not hypothetical. They are legible from history.

The Governance Gap: Where Philanthropy Ends and Power Begins

| Feature | Hershey Trust | OpenAI Foundation |
| --- | --- | --- |
| Equity stake | ~80% voting control | ~26% equity (~$34B) |
| Total assets | ~$23B | ~$34B (at current valuation) |
| Regulatory exemption | 1969 Tax Reform Act (grandfathered) | California AG concessions (2025) |
| Oversight body | Pennsylvania AG | California AG + FTC (emerging) |
| Primary beneficiary | Milton Hershey School | Health, education, AI resilience |
| Board independence | Recurring conflicts of interest | Overlapping board memberships |
| Market accountability | Partial (listed company) | Limited (PBC structure) |

The comparison table above reveals a foundational asymmetry. Hershey, for all its governance problems, operates within a framework where the underlying company is publicly listed, analysts scrutinize quarterly earnings, and the attorney general of Pennsylvania has decades of institutional practice monitoring the trust. OpenAI is a private company. Its Foundation’s equity is illiquid. Its valuation is determined by private funding rounds, not public markets. And the regulatory apparatus designed to oversee it is, bluntly, improvising.

Critics have been vocal. The Midas Project, a nonprofit focused on AI accountability, has argued that the AI governance nonprofit model OpenAI has constructed creates precisely the conditions for what they term “mission drift under incentive pressure”—a dynamic where the commercial imperatives of a $130 billion company gradually subordinate the charitable mandate of its controlling foundation. This is not speculation; it is the documented history of every large charitable trust that has ever governed a commercially valuable enterprise.

Bret Taylor, OpenAI’s board chair, has offered the counter-argument: that the Foundation structure provides a durable check against pure profit maximization, creating legally enforceable obligations that a traditional corporation could simply disclaim. In an era where AI companies face pressure to ship products faster than safety research can validate them, Taylor argues, structural constraints matter.

Both positions contain truth. The question is which force—structural obligation or commercial gravity—proves stronger over the decade ahead.

Economic Modeling the Downside: The $250 Billion Question

What does it actually cost if the charitable mission is subordinated to commercial interests? The figure is not immaterial.

The OpenAI foundation equity stake, at current valuation, represents approximately $34 billion in charitable assets. If OpenAI achieves the kind of transformative commercial success its investors are pricing in—scenarios in which AGI-adjacent systems generate trillions in economic value—the Foundation’s stake could appreciate dramatically. Some economists modeling AI’s macroeconomic impact have suggested transformative AI could contribute $15-25 trillion to global GDP by 2035. Even a modest fraction of that value flowing through a properly governed charitable structure would represent an unprecedented philanthropic resource.

But the Hershey precedent suggests the gap between potential and realized charitable value can be enormous. Scholars at HistPhil.org, who have tracked the OpenAI Hershey structure comparison in detail, estimate that governance failures at large charitable trusts have historically diverted between 15% and 40% of potential charitable value toward administrative costs, trustee enrichment, and mission-misaligned expenditure. Applied to OpenAI’s trajectory, that range implies a potential public value loss exceeding $250 billion over a 20-year horizon—larger than the annual GDP of many mid-sized economies.
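The arithmetic behind that headline figure rewards being made explicit. A minimal back-of-envelope sketch, in which the 18% annual appreciation rate is an illustrative assumption of ours (an AGI-optimistic scenario), not a figure from the cited scholars:

```python
def implied_charitable_loss(stake_now, annual_growth, years, diversion_fraction):
    """Charitable value lost if a fraction of the stake's potential value
    is diverted, under simple compound growth. Inputs are illustrative."""
    potential_value = stake_now * (1 + annual_growth) ** years
    return potential_value * diversion_fraction

STAKE = 34e9    # Foundation's ~26% stake at a ~$130B valuation
GROWTH = 0.18   # assumed annual appreciation (our assumption, optimistic case)
HORIZON = 20    # years, matching the article's 20-year frame

low = implied_charitable_loss(STAKE, GROWTH, HORIZON, 0.15)
high = implied_charitable_loss(STAKE, GROWTH, HORIZON, 0.40)
print(f"implied loss range: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
```

Under these assumptions the stake’s potential value compounds to roughly $930 billion, and the historical 15–40% diversion range maps to losses of roughly $140 billion to $370 billion, a band that brackets the $250 billion figure; gentler growth assumptions shrink it accordingly.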

This is why the regulatory dimension matters so profoundly.

The Regulatory Frontier: U.S. vs. EU Approaches to AI Charity

American nonprofit law was not designed for entities like OpenAI. The legal scaffolding governing charitable trusts—built incrementally from the 1969 Tax Reform Act through various state attorney general statutes—assumes a relatively stable enterprise with predictable revenue streams and defined charitable outputs. OpenAI is none of these things. It operates at the intersection of defense contracting, consumer software, and scientific research, in a market where the underlying technology is evolving faster than any regulatory framework can track.

The European Union’s approach, by contrast, builds AI governance into product and deployment regulation rather than entity structure. The EU AI Act, fully operative by 2026, imposes obligations on AI systems regardless of the corporate form of their developers. A Public Benefit Corporation operating in Europe faces the same high-risk AI obligations as a shareholder-maximizing competitor. This structural neutrality has advantages: it prevents regulatory arbitrage where companies adopt charitable structures primarily to access regulatory goodwill.

The divergence creates a genuine cross-border governance problem. A company structured to satisfy California’s attorney general may simultaneously face EU compliance requirements that presuppose entirely different accountability mechanisms. For international researchers tracking AI philanthropy challenges and AGI public interest governance, this regulatory patchwork is arguably the most consequential design problem of the next decade.

What History’s Verdict on Hershey Actually Says

It would be unfair—and inaccurate—to characterize the Hershey Trust as a failure. The Milton Hershey School today serves approximately 2,200 students annually, providing free education, housing, and healthcare to children from low-income families. That outcome is real, durable, and directly attributable to the trust structure Milton Hershey designed. The governance pathologies that have periodically afflicted the trust have not, ultimately, destroyed its mission.

But this is precisely the danger of using Hershey as a template for optimism. The trust survived its governance crises because Pennsylvania’s attorney general had clear jurisdictional authority, because the Hershey Company’s public listing created external accountability, and because the charitable mission was concrete enough to defend in court. Educating low-income children is an unambiguous charitable purpose. “Ensuring that artificial general intelligence benefits all of humanity” is not.

The vagueness of OpenAI’s charitable mandate is, to its architects, a feature—it provides flexibility to pursue the company’s evolving commercial and research agenda under a philanthropic umbrella. To governance scholars, it is a vulnerability. Vague mandates are harder to enforce, easier to reinterpret, and more susceptible to capture by the very commercial interests they nominally constrain. As Vox’s analysis of the nonprofit-to-PBC transition noted, the devil is almost always in the enforcement mechanism, not the stated mission.

The Forward View: What Investors and Policymakers Must Demand

The public benefit corporation risks embedded in OpenAI’s structure are not an argument against the structure’s existence. They are an argument for the kind of rigorous, institutionalized oversight that the structure currently lacks.

What would adequate governance look like? At minimum, it would require independent audit of the Foundation’s charitable expenditures by bodies with no commercial relationship to OpenAI. It would require clear, justiciable standards for what constitutes mission-aligned versus mission-diverting Foundation activity. It would require mandatory disclosure of board member relationships—commercial, financial, and social—with OpenAI PBC. And it would require international coordination between U.S. state attorneys general and EU regulatory bodies to prevent jurisdictional arbitrage.

None of these mechanisms currently exist in robust form. The California AG’s 2025 concessions are a beginning, not an architecture.

For AI investors, the governance question is increasingly a financial one. Companies operating under poorly structured philanthropic control have historically underperformed market expectations when governance conflicts surface—as Hershey’s periodic crises have demonstrated. For policymakers in Washington, Brussels, and beyond, the OpenAI model represents either a template for responsible AI development or a cautionary tale in the making. Which it becomes depends almost entirely on decisions made in the next three to five years, before the company’s commercial scale makes course correction prohibitively difficult.

Milton Hershey built something remarkable and something flawed in the same gesture. A century later, those flaws are still being litigated. The architects of OpenAI’s charitable gamble would do well to study that inheritance—not for reassurance, but for warning.


Discover more from The Economy

Subscribe to get the latest posts sent to your email.

Bezos’s Project Prometheus Nears $38 Billion Valuation: The Real AI Race Is Just Beginning


A $10 billion funding round—his first operational role since Amazon—signals a shift from digital chatbots to the physical world. But as AI funding hits $242 billion in a single quarter, is the real bubble in our power grid?

Introduction

In Greek mythology, Prometheus stole fire from the gods and gave it to humanity. Today, Jeff Bezos is attempting a similar act of technological transference—not with a fennel stalk, but with a $10 billion checkbook.

According to a report first published by the Financial Times, Bezos’s secretive AI lab, code-named Project Prometheus, is on the verge of closing a massive funding round that values the startup at roughly $38 billion. The round, which includes heavyweights like JPMorgan and BlackRock, is reportedly being upsized due to “strong investor demand”.

This isn’t just another tech funding story. It marks Bezos’s first operational role since stepping down as Amazon CEO in 2021—and it is a deliberate, high-stakes bet that the next trillion-dollar opportunity in artificial intelligence lies not in writing better poetry or generating fake images, but in bending the physical laws of manufacturing, aerospace, and construction to our will.

The $38 Billion Bet on the Real World

For the last two years, the AI narrative has been dominated by large language models (LLMs) and the battle between OpenAI, Google DeepMind, and Anthropic. These models excel in the digital ether. Project Prometheus, by contrast, is targeting “physical AI”—systems designed to understand the laws of physics and revolutionize industries where atoms, not just bits, matter.

Co-founded with scientist Vik Bajaj (formerly of Google X), the venture is focused on applications in engineering, aerospace, semiconductors, and even drug discovery. Imagine an AI that can simulate the airflow over a new jet wing, predict material fatigue in a bridge, or optimize a factory floor in real time—all without the costly, time-consuming cycle of physical prototyping. As Pete Schlampp, CEO of Luminary, recently noted, AI is changing that by allowing faster, cheaper digital testing.

The $38 billion valuation is staggering for an early-stage company, but it pales in comparison to the capital being mobilized around it. Bezos is reportedly also raising a separate $100 billion fund to acquire manufacturing companies outright and infuse them with Prometheus’s technology—a strategy that effectively creates a captive market for his lab’s innovations.

A Deluge of Dollars, A Scarcity of Power

To understand the significance of Bezos’s move, one must look at the broader macroeconomic context: the AI funding boom has reached a fever pitch. In the first quarter of 2026 alone, AI companies vacuumed up $242 billion in venture capital, accounting for a staggering 80% of all global startup investment during that period.

This is not just a trend; it is a financial singularity. The AI sector raised more money in three months than it did in all of 2025 combined. This capital influx is concentrated among a few “super rounds”: OpenAI raised $122 billion, Anthropic secured $30 billion, and xAI closed $20 billion.

However, the macro story reveals a critical vulnerability that makes Bezos’s physical AI pivot particularly shrewd. While money is abundant, physical infrastructure is not. A recent Bloomberg report found that roughly half of the AI data centers planned for 2026 in the U.S. have been delayed or canceled. The bottlenecks are not software glitches but tangible hardware: transformer shortages, grid strain, and supply chain paralysis. Only about one-third of the projected 12 GW of new computing capacity is actually under active construction.

The Competitive Chessboard: Why Bezos Is Building His Own Fire

Bezos’s move with Project Prometheus also needs to be read in the context of Amazon’s complex AI allegiances. The e-commerce giant is deeply entwined with Anthropic, having recently committed up to $25 billion in new investment into the Claude maker—a deal that reportedly values Anthropic at up to $380 billion in private markets. Meanwhile, Amazon has also pledged $500 billion to OpenAI for a joint venture focused on stateful AI systems.

In this environment, relying solely on external partners—even those you’ve heavily funded—is a strategic risk. Prometheus gives Bezos a proprietary, in-house engine for the industrial revolution he envisions. It is a classic Bezos move: vertical integration via massive capital expenditure. The lab has already begun “snapping up office space in San Francisco” and “luring away top talent from OpenAI and Google DeepMind”. If you can’t buy the future, you build it yourself.

The Human Cost and the Political Backlash

The fire of Prometheus has always come with a warning. Bezos’s parallel $100 billion plan to acquire and automate factories—replacing human workers with AI-driven robots—has already drawn political fire. The narrative that AI will create more jobs than it destroys is being tested by the sheer scale and speed of this capital deployment.

On the political stage, figures like Senator Bernie Sanders are warning of “AI Oligarchs” planning to spend $300 million on the 2026 midterm elections, while Elon Musk and Andrew Yang debate the necessity of a federal “universal high income” to offset automation-driven job loss. The $38 billion valuation of Project Prometheus is not just a number on a term sheet; it is a geopolitical and socioeconomic fault line.

Conclusion: Fire from the Gods, Grounded in Reality

Bezos’s Project Prometheus nearing a $38 billion valuation is more than a fundraising milestone; it is a directional signal for global capital markets. It confirms that while the first wave of generative AI was about software eating the world, the second wave will be about AI rebuilding the physical world.

For investors, the lesson is clear: the highest returns will not come from funding the next clone of a chatbot but from solving the hardest problems in physics and engineering. For policymakers, the challenge is equally stark: the infrastructure to power this AI future does not exist yet. And for the rest of us, it is a reminder that even as we fret about what AI might do to our jobs, the real bottleneck isn’t the algorithm—it’s the electrical grid.

Bezos is betting $38 billion that he can steal this fire. The question is whether the rest of us are ready to live with the heat.




Apple’s Next Chief Ternus Faces Defining AI Moment: Tim Cook’s Replacement Must Lead iPhone-Maker Through Industry Shift


The tectonic plates of Silicon Valley shifted unequivocally on April 20, 2026. After a historic 15-year tenure that propelled the iPhone maker to an unprecedented $4 trillion valuation, Tim Cook announced he will step down on September 1, transitioning to the role of Executive Chairman. The keys to the kingdom now pass to John Ternus, the 51-year-old hardware engineering savant who has spent a quarter-century architecting the physical foundation of Apple’s most iconic modern devices.

Yet, as the dust settles on this long-anticipated Apple CEO succession plan, a stark reality emerges. Ternus is inheriting a radically different landscape than the one Cook received from Steve Jobs in 2011. Cook was tasked with scaling an undisputed hardware monopoly; Ternus is tasked with defending it against an existential software threat.

As Tim Cook’s replacement, Ternus assumes the mantle at the exact moment the technology sector pivots from the mobile era to the generative artificial intelligence epoch. His success will not be measured by supply chain efficiencies or incremental hardware upgrades, but by his ability to define and execute a winning Apple Intelligence strategy in an increasingly hostile, hyper-competitive market.

The Dawn of the Ternus Era: From Operations Titan to Hardware Visionary

To understand the trajectory of the John Ternus Apple CEO era, one must examine the fundamental differences in leadership DNA between the outgoing and incoming chief executives. Tim Cook is, at his core, an operational genius. His legacy is defined by mastery of global supply chains, geopolitical diplomacy, and the methodical extraction of maximum margin from the iPhone ecosystem.

Ternus, conversely, is an engineer’s engineer. Having overseen the iPad, the AirPods, and the monumental transition of the Mac to Apple Silicon, he deeply understands the intersection of silicon and user experience. Insiders report that Ternus brings a decisively different management style to the C-suite. Where Cook historically preferred a Socratic, hands-off approach to product development—acting as a consensus-builder among top brass—Ternus is known for making swift, definitive product choices.

This decisive edge is precisely what the company requires as it navigates its most pressing vulnerability: its artificial intelligence deficit. A recent Reuters report on Apple’s corporate governance and succession highlights that Ternus’s mandate is to aggressively reinvent the product lineup to meet modern consumer expectations. However, being a hardware visionary is no longer sufficient. The modern device is merely an empty vessel without a pervasive, context-aware intelligence layer running beneath the glass.

The Intelligence Deficit: Combating the Decline in Apple AI Market Share

Apple’s entry into the artificial intelligence arms race has been characterized by uncharacteristic hesitation and strategic missteps. While Microsoft, Google, and Meta sprinted ahead with large language models (LLMs) and advanced neural architectures, Apple opted for a walled-garden, on-device approach that has struggled to keep pace with cloud-based capabilities.

The Apple AI market share currently lags behind its chief rivals, largely due to a fragmented rollout and technological bottlenecks. The initial deployment of Apple Intelligence was marred by delayed features and an overly cautious integration of third-party tools. Most notably, in late March 2026, a botched, accidental rollout of Apple Intelligence in China—a market where Apple lacks the requisite regulatory approvals and relies heavily on local partners to bypass restrictions—highlighted the immense logistical hurdles the company faces.

As highlighted by Bloomberg’s recent analysis on Apple’s AI deployments, Apple’s decision to integrate Google’s Gemini model to power a revamped Siri underscores a painful truth: the company cannot win the AI war in isolation. Ternus must immediately stabilize these partnerships while simultaneously accelerating Apple’s in-house foundational models. He inherits an AI division that saw the departure of key leadership in late 2025, leaving a strategic vacuum that the new CEO must fill with undeniable urgency.

Recalibrating the Apple Intelligence Strategy

The challenge for Ternus is twofold: he must merge his innate understanding of hardware architecture with an aggressive software and cloud strategy. According to a Gartner report on AI adoption and edge computing, the future of enterprise and consumer tech lies in a hybrid model—balancing the privacy and speed of edge computing (processing on the device) with the raw, expansive power of cloud-based LLMs.

Ternus’s immediate priority will be launching iOS 27 and the anticipated overhaul of Siri. It is no longer enough for Siri to be a reactive voice assistant; it must evolve into a proactive, system-wide autonomous agent capable of reasoning, executing complex in-app tasks, and seamlessly analyzing user data without compromising Apple’s rigid privacy standards.

This is where Ternus’s decisive nature will be tested. He must be willing to cannibalize legacy software structures and perhaps even open the iOS ecosystem to deeper third-party AI integrations than Apple is historically comfortable with. The Apple Intelligence strategy must pivot from being a defensive moat to an offensive spear.

The Future of Apple Hardware: AI-First Architecture

Because Ternus is rooted in hardware, his most significant leverage lies in reimagining the physical devices that will house these new AI models. The future of Apple hardware is inextricably linked to the evolution of neural processing units (NPUs).

In tandem with Ternus’s promotion, Apple elevated its silicon architect, Johny Srouji, to Chief Hardware Officer. This alignment is not coincidental. It signals a unified front where hardware and silicon are co-developed exclusively to run massive AI workloads. We can expect future iterations of the iPhone and Mac to feature a radical redesign of thermal management and memory bandwidth, specifically tailored to support on-device inference for generative AI.

Furthermore, Ternus—who reportedly expressed caution regarding the high-risk development of the Vision Pro and the now-cancelled Apple Car—will likely ruthlessly prioritize form factors that deliver immediate AI value. We are likely to see a convergence of wearables and AI, where devices like AirPods and the Apple Watch act as persistent, ambient interfaces for Apple Intelligence, rather than relying solely on the iPhone screen.

Silicon Valley Geopolitics: The Burden of the $4 Trillion Crown

Beyond the silicon and software, Ternus faces a daunting geopolitical landscape. Tim Cook was a master statesman, successfully navigating the treacherous waters of the US-China trade wars, negotiating with consecutive presidential administrations, and maintaining a fragile equilibrium with international regulators. As The Wall Street Journal’s ongoing coverage of tech monopolies points out, global regulatory bodies are increasingly hostile toward Big Tech’s walled gardens.

With Cook serving as Executive Chairman and managing international policy, Ternus has a temporary shield. However, the ultimate responsibility for antitrust compliance, App Store regulations, and navigating the complex AI compliance laws of the European Union and China will soon rest entirely on his shoulders.

Conclusion: The Decisive Leadership Required for Apple’s Next Decade

As September 1 approaches, the global markets are watching with bated breath. John Ternus is not stepping into a role that requires a steady hand to maintain the status quo; he is stepping into a crucible that requires a wartime CEO mentality.

The transition from Tim Cook to John Ternus marks the end of Apple’s era of operational perfectionism and the beginning of its most critical existential challenge since the brink of bankruptcy in the late 1990s. To justify its $4 trillion valuation, the future of Apple hardware must become the undisputed premier vessel for consumer artificial intelligence.

Ternus possesses the engineering pedigree, the institutional respect, and the decisive operational mindset required for the job. Now, he must prove he possesses the visionary foresight to lead the iPhone maker through the most disruptive industry shift in a generation. The hardware is set; the intelligence is pending.




Could AI’s Leading Men Become as Powerful as Ford or Rockefeller? For Now, They Are Still a Long Way Behind.


The five men reshaping intelligence — Dario Amodei, Demis Hassabis, Elon Musk, Mark Zuckerberg, and Sam Altman — command wealth, attention, and technological leverage that no previous generation of innovators has enjoyed. Yet the distance between their present dominance and the systemic, civilization-bending grip once exercised by John D. Rockefeller or Henry Ford remains vast — and poorly understood.

Imagine a boardroom meeting in 2035. The agenda is simple: who controls the infrastructure of thought itself? A decade earlier, five men launched what many called the most consequential technological disruption since electricity. By 2026, their companies had collectively captured trillions of dollars in market value, reshaped labor markets across three continents, and triggered geopolitical confrontations from Brussels to Beijing. And yet, if you measure their power by the standards history reserves for its true industrial titans — the men who didn’t just build industries but became them — the five AI leading men of our era still have a very long way to go.

That is not a comfortable argument to make. The numbers alone seem to render it absurd. Elon Musk’s net worth now exceeds $811 billion, a figure that surpasses the GDP of Poland. Musk’s February 2026 all-stock merger of SpaceX and xAI created a combined entity valued at $1.25 trillion — a single transaction larger than the entire U.S. defense budget. OpenAI, now valued at approximately $500 billion, counts some 800 million weekly active users of ChatGPT, a number that would have seemed science fiction five years ago. Anthropic — founded by Dario Amodei and his sister Daniela — reached a valuation of $380 billion in early 2026, while Meta has committed to spending $115 to $135 billion in capital expenditure in 2026 alone, with an astonishing $600 billion pledged toward data centers through 2028.

These are not ordinary fortunes. They are structurally new categories of wealth concentration. And still, the Rockefeller comparison fails — and fails instructively.

What Made a Tycoon a Tycoon: The Three Pillars of Historical Power

To understand why AI tycoons remain a long way behind their Gilded Age predecessors, one must first understand what actually made Rockefeller and Ford so uniquely dangerous to the social order of their time. It was not simply their wealth. Adjusted for GDP, Rockefeller’s peak fortune has been estimated at roughly $400 billion in today’s dollars — comfortably surpassed by Musk. What made Standard Oil a civilizational force was something more specific and more structural: the simultaneous control of physical infrastructure, political capture, and cultural monopoly.

Rockefeller didn’t just refine oil; he controlled approximately 91% of United States oil refining capacity by the mid-1880s through ownership of the pipelines, the railroad rebates, and the pricing mechanisms that every competitor had to use to survive. He didn’t lobby Congress — he owned the conversation. Ford, similarly, didn’t just manufacture cars; he built company towns, set wages for an entire economy, and deployed a private security apparatus — the Ford Service Department — to enforce his will on a captive workforce. Both men bent the physical world to their models in ways that left no exit for competitors, workers, or governments.

That is the three-pillar framework that the AI quintet has not yet replicated: physical infrastructure lock-in, political capture, and cultural monopoly. The gap between aspiration and achievement on each of these dimensions is the real story of power in 2026.

Infrastructure: Who Controls the Pipes?

The most important question in any era of technological transformation is not who builds the smartest machine, but who controls the plumbing. Rockefeller’s genius was not chemistry — it was logistics. He understood that the pipeline was more powerful than the refinery.

In the AI economy, the equivalent of the pipeline is the data center, the chip, and the undersea cable. Here the picture for the quintet is mixed at best. Mark Zuckerberg’s Meta is building on the most ambitious scale — two mega-clusters that dwarf any corporate construction project in a generation — but the silicon in those data centers is manufactured almost entirely by NVIDIA, a company none of the five control. Musk’s SpaceX-xAI merger is the most vertically integrated attempt to replicate Rockefeller’s pipeline logic: orbital data centers fed by Starlink satellites, in theory giving xAI the physical substrate to train and deploy models without dependence on third-party cloud providers. But as of 2026, that vision remains largely prospective. xAI’s Grok competes credibly against ChatGPT and Claude, but it does not yet possess the proprietary infrastructure advantage that would make it structurally inescapable.

Sam Altman, for his part, has no direct equity in OpenAI, earning a nominal salary of roughly $65,000 per year. His influence derives almost entirely from his position at the helm of the world’s most recognizable AI brand — a form of power that is real, but brittle. The moment a better or cheaper model displaces GPT, the institutional moat begins to crack. Rockefeller, by contrast, had no such vulnerability: he owned the pipes regardless of whose oil flowed through them.

Dario Amodei’s Anthropic presents a different case. With a $380 billion valuation, enterprise AI revenues reportedly growing at exponential rates, and a model — Claude — that has captured an estimated 40% of enterprise large language model spending in the United States, Anthropic is the most quietly formidable player in the quintet. Amodei has also demonstrated a rare form of institutional courage: in February 2026, he refused a Pentagon demand to remove contractual prohibitions on Claude’s use for mass domestic surveillance, even as the Trump administration labeled Anthropic a “supply-chain risk” and ordered agencies to stop using the model. That is not the behavior of a man who has captured the state. It is the behavior of a man trying not to be captured by it.

Political Power: Proximity Is Not Capture

The AI leading men have achieved unprecedented proximity to political power. Altman donated to Trump’s inaugural fund, sat on San Francisco’s mayoral transition team, and has testified repeatedly before Congress. Musk, as an architect of the Department of Government Efficiency, has arguably achieved more direct influence over federal bureaucracy than any private citizen since Bernard Baruch. Zuckerberg has reoriented Meta’s content moderation in ways that reflect political calculation as much as principled policy.

And yet proximity is not capture. Rockefeller’s Standard Oil didn’t merely lobby regulators — it effectively set the regulatory agenda in oil-producing states for two decades. The steel and railroad barons didn’t just meet with senators; they funded them in ways that made legislative independence a legal fiction.

Today’s AI executives remain subject to forces their predecessors never faced. The European Union’s AI Act imposes binding constraints that no 19th-century robber baron ever encountered. Antitrust scrutiny from both the Department of Justice and the EU threatens the integration strategies of both Google DeepMind and Meta. Anthropic’s standoff with the Pentagon demonstrates that even the most safety-focused AI lab cannot escape the gravitational pull of geopolitical competition. The five men are powerful political actors — but they are actors on a stage with many more directors than Rockefeller ever faced.

The Cognition Economy: A New Kind of Monopoly Risk

Where the AI quintet is converging toward something genuinely Rockefellerian is in what might be called the cognition economy — the emerging marketplace where intelligence itself, not oil or steel, is the resource being extracted, refined, and sold.

Demis Hassabis, the Nobel Prize–winning CEO of Google DeepMind, said at Davos 2026 that today’s AI systems are “nowhere near” human-level AGI, placing the milestone at “five to ten years” away. Amodei, characteristically more bullish, has predicted that AI will reach “Nobel-level” scientific research capability within two years, and has described the coming AI cluster as “a country of geniuses in a data center” running at superhuman speeds. If either is even partially correct, the downstream consequences for labor markets, knowledge production, and institutional power are more profound than anything the Industrial Revolution generated.

The danger is not that one of these five men will own the world’s intelligence outright. It is that the economic logic of AI — massive upfront compute costs, proprietary training data, and compounding capability advantages — tends toward the same concentration dynamics that produced Standard Oil. A model that is marginally better attracts more users; more users generate more data; more data enables further improvement; the loop closes. This is not metaphor. Meta’s Llama 5, released in April 2026, was explicitly designed to commoditize proprietary AI — Zuckerberg’s theory being that if intelligence becomes free, the company that distributes it through 3.5 billion social media users wins by default. That is not so different from Rockefeller’s insight that the real money was never in the oil itself, but in making yourself indispensable to everyone who wanted to transport it.
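The loop the paragraph describes can be made concrete with a toy simulation. This is purely illustrative: the parameter names and values below are assumptions invented for the sketch, not estimates of any real AI lab's economics, but they show how even a small initial quality edge compounds once users and data start following quality.

```python
# Toy model of the compounding loop: a marginally better model attracts
# more users, more users generate more data, more data improves the model.
# All parameters are illustrative assumptions, not real-world estimates.

def simulate(quality_edge, rounds=10, attract=0.5, learn=0.1):
    """Track the leader's quality gap over a rival across feedback rounds.

    quality_edge: initial relative quality advantage of the leader
    attract:      extra user share gained per unit of quality edge
    learn:        quality improvement per unit of surplus user data
    """
    leader_share = 0.5 + attract * quality_edge    # users follow quality
    gap = quality_edge
    history = [gap]
    for _ in range(rounds):
        data_edge = leader_share - (1 - leader_share)  # surplus data flow
        gap += learn * data_edge                       # data improves quality
        leader_share = min(1.0, 0.5 + attract * gap)   # users re-sort by quality
        history.append(gap)
    return history

history = simulate(quality_edge=0.05)
# With these assumed parameters, a 5% initial edge never shrinks:
assert all(b >= a for a, b in zip(history, history[1:]))
```

Under these assumptions the gap grows geometrically each round, which is the Standard Oil dynamic in miniature: the loop closes on its own unless something outside the market (regulation, open models, commoditized compute) breaks one of its links.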

Cultural Monopoly: The Unfinished Frontier

Henry Ford didn’t just build cars. He built a culture. The five-dollar day, the 40-hour workweek — Ford shaped how Americans understood the relationship between labor, leisure, and consumption. His prejudices, published in the Dearborn Independent and later praised by Adolf Hitler, exercised a cultural influence that no modern tech executive has approached, for better or for worse.

The AI quintet has, so far, produced nothing comparable to that kind of cultural ownership. ChatGPT is used by hundreds of millions, but it has not yet redefined the terms of civic life in the way that Ford’s assembly lines redefined time itself. The AI leading men give TED talks and publish essays — Amodei’s “Machines of Loving Grace” and its sequel “The Adolescence of Technology” are genuine intellectual contributions — but they have not yet built the durable cultural institutions that the Carnegies and Fords used to launder their economic power into social legitimacy. The Carnegie libraries are still standing. The Ford Foundation still funds democracy initiatives. What will Sam Altman’s equivalent be? We do not yet know.

This gap may close faster than we expect. If AI agents do begin displacing 50% of white-collar jobs — as Amodei and others predict within five years — the resulting social disruption will demand new cultural narratives. The men who shape those narratives will wield a form of power that makes their current wealth look like a down payment.

Why the Gap Matters — And Why It Is Narrowing

The distance between the AI tycoons of 2026 and the historical robber barons is real, but it is not permanent. Three trends are accelerating the convergence.

First, physical infrastructure is being built at unprecedented speed. Meta’s $600 billion data center pledge, Musk’s orbital computing vision, and the arms-race dynamics of semiconductor procurement are creating the structural lock-in that historically defines industrial monopoly. The company that owns the compute wins — not just the model race, but the infrastructure race.

Second, regulatory arbitrage is becoming a competitive strategy. Just as Rockefeller used the legal patchwork of late-19th-century interstate commerce to outmaneuver state-level regulators, AI companies are exploiting the gap between national regulatory frameworks to deploy capabilities that no single jurisdiction can constrain. The Trump administration’s rollback of Biden-era AI safety executive orders has already opened space for more aggressive deployment by American companies.

Third, the feedback loops of AI capability are compounding in ways that no previous technology has. When Anthropic’s own engineers have largely stopped writing code themselves — directing AI-generated code as product managers rather than authors — the productivity advantages of leading AI labs over their competitors begin to resemble Standard Oil’s pipeline advantages over independent refiners. Not yet identical. But structurally rhyming.

The View from 2035: A Question of Institutions

The most important distinction between Ford, Rockefeller, and today’s AI leading men may ultimately be institutional rather than technological. The Gilded Age tycoons operated in a world with weak antitrust frameworks, no administrative state to speak of, and a political economy that had not yet developed the tools to constrain concentrated private power. The Progressive Era — Teddy Roosevelt’s trust-busting, the belated enforcement of the Sherman Act, the eventual dissolution of Standard Oil — was the institutional response. It took a generation.

We may be at the beginning of a similar reckoning. Whether the five men who currently lead the AI revolution become as powerful as Ford or Rockefeller depends less on their own ambitions — which are extraordinary — than on the speed and coherence of the institutional response. Policymakers who wait for the infrastructure to be fully built before acting will find themselves in the same position as the regulators who confronted Standard Oil in 1911: arriving at the scene of a revolution already completed.

The AI leading men are not, today, as powerful as Rockefeller. But they are building the conditions under which someone very like them could be. The time for executives, investors, and policymakers to pay attention is not when the resemblance is complete, but now, while the architecture is still under construction and the pipes have not yet been welded shut.


Copyright © 2025 The Economy, Inc. All rights reserved.
