What a Chocolate Company Can Tell Us About OpenAI’s Risks: Hershey’s Legacy and the AI Giant’s Charitable Gamble

The parallels between Milton Hershey’s century-old trust and OpenAI’s restructuring reveal uncomfortable truths about power, philanthropy, and the future of artificial intelligence governance.

In 2002, the board of the Hershey Trust quietly floated a plan that would have upended a century of carefully constructed philanthropy. They proposed selling the Hershey Company—the chocolate empire—to Wrigley or Nestlé for somewhere north of $12 billion. The proceeds would have theoretically enriched the Milton Hershey School, the boarding school for low-income children that the company’s founder had dedicated his fortune to sustaining. It was, on paper, an act of fiscal prudence. In practice, it was a near-catastrophe—one that Pennsylvania’s attorney general halted amid public outcry, conflict-of-interest investigations, and the uncomfortable revelation that some trust board members had rather too many ties to the acquiring parties.

The deal collapsed. But the architecture that made such a maneuver possible—a charitable trust wielding near-absolute voting control over a publicly traded company, insulated from traditional accountability structures—never changed.

Fast forward two decades, and a strikingly similar structure is taking shape at the frontier of artificial intelligence. OpenAI’s 2025 restructuring into a Public Benefit Corporation, with a newly formed OpenAI Foundation holding approximately 26% of equity in a company now valued at roughly $130 billion, has drawn comparisons to Hershey from governance scholars, philanthropic historians, and antitrust economists alike. The comparison is not merely rhetorical: Hershey is, structurally and legally, one of the most instructive precedents available to anyone trying to understand where this gamble leads.

The Hershey Precedent: A Century of Sweet Success and Bitter Disputes

Milton Hershey was not a villain. He was, by most accounts, a genuinely idealistic industrialist who built a company town in rural Pennsylvania, provided workers with housing, schools, and parks, and then—with no children of his own—donated the bulk of his fortune to a trust that would fund the Milton Hershey School in perpetuity. When he died in 1945, the trust he established owned the majority of Hershey Foods Corporation stock. That arrangement was grandfathered under the 1969 Tax Reform Act, which capped charitable foundation holdings in for-profit companies at 20% for new entities—but allowed existing arrangements to stand.

The result, still operative today: the Hershey Trust controls roughly 80% of Hershey’s voting power while holding approximately $23 billion in assets. It is one of the most concentrated governance arrangements in American corporate history. And it has produced, over the decades, a remarkable catalogue of governance pathologies—self-perpetuating boards, lavish trustee compensation, conflicts of interest, and the periodic temptation to treat a $23 billion asset base as something other than a charitable instrument.

The 2002 sale attempt was the most dramatic episode, but hardly the only one. Pennsylvania’s attorney general has intervened repeatedly. A 2016 investigation found board members had approved millions in questionable real estate transactions. Trustees have cycled in and out amid ethics violations. And yet the fundamental structure—concentrated voting control in a charitable entity, largely exempt from the market discipline that shapes ordinary corporations—persists.

This is the template against which OpenAI’s new architecture deserves to be measured.

OpenAI’s Charitable Gamble: Anatomy of the New Structure

When Sam Altman and the OpenAI board announced the company’s transition to a capped-profit and then Public Benefit Corporation model, they framed it as a solution to a genuine tension: how do you raise the capital required to develop artificial general intelligence—measured in the tens of billions—while maintaining a mission ostensibly oriented toward humanity rather than shareholders?

The answer they arrived at is, structurally, closer to Hershey than to Google. Under the restructured arrangement, the OpenAI Foundation holds approximately 26% equity in OpenAI PBC at the company’s current ~$130 billion valuation—making it, by asset size, larger than the Gates Foundation, which manages roughly $70 billion. Microsoft retains approximately 27% equity. Altman and employees hold the remainder under various compensation and vesting structures.

The Foundation’s stated mandate is to direct resources toward health, education, and AI resilience philanthropy—a mission broad enough to accommodate almost any expenditure. Crucially, as California Attorney General Rob Bonta’s 2025 concessions made clear, the restructuring required commitments around safety and asset protection, but the precise mechanisms for enforcing those commitments remain opaque. Bonta’s office won language requiring that charitable assets not be diverted for commercial benefit—a standard that sounds robust until you consider how difficult it is to operationalize when the charitable entity’s principal asset is the commercial enterprise itself.

The charitable-governance risks embedded in this structure are not hypothetical. They are legible from history.

The Governance Gap: Where Philanthropy Ends and Power Begins

| Feature | Hershey Trust | OpenAI Foundation |
| --- | --- | --- |
| Equity stake | ~80% voting control | ~26% equity (~$34B) |
| Total assets | ~$23B | ~$34B (at current valuation) |
| Regulatory exemption | Grandfathered under 1969 Tax Reform Act | California AG concessions (2025) |
| Oversight body | Pennsylvania AG | California AG + FTC (emerging) |
| Primary beneficiary | Milton Hershey School | Health, education, AI resilience |
| Board independence | Recurring conflicts of interest | Overlapping board memberships |
| Market accountability | Partial (listed company) | Limited (PBC structure) |

The comparison table above reveals a foundational asymmetry. Hershey, for all its governance problems, operates within a framework where the underlying company is publicly listed, analysts scrutinize quarterly earnings, and the attorney general of Pennsylvania has decades of institutional practice monitoring the trust. OpenAI is a private company. Its Foundation’s equity is illiquid. Its valuation is determined by private funding rounds, not public markets. And the regulatory apparatus designed to oversee it is, bluntly, improvising.

Critics have been vocal. The Midas Project, a nonprofit focused on AI accountability, has argued that the nonprofit governance model OpenAI has constructed creates precisely the conditions for what they term “mission drift under incentive pressure”—a dynamic where the commercial imperatives of a $130 billion company gradually subordinate the charitable mandate of its controlling foundation. This is not idle speculation; it is a pattern documented across large charitable trusts that have governed commercially valuable enterprises.

Bret Taylor, OpenAI’s board chair, has offered the counter-argument: that the Foundation structure provides a durable check against pure profit maximization, creating legally enforceable obligations that a traditional corporation could simply disclaim. In an era where AI companies face pressure to ship products faster than safety research can validate them, Taylor argues, structural constraints matter.

Both positions contain truth. The question is which force—structural obligation or commercial gravity—proves stronger over the decade ahead.

Modeling the Economic Downside: The $250 Billion Question

What does it actually cost if the charitable mission is subordinated to commercial interests? The figure is not immaterial.

The Foundation’s equity stake in OpenAI, at current valuation, represents approximately $34 billion in charitable assets. If OpenAI achieves the kind of transformative commercial success its investors are pricing in—scenarios in which AGI-adjacent systems generate trillions in economic value—the Foundation’s stake could appreciate dramatically. Some economists modeling AI’s macroeconomic impact have suggested transformative AI could contribute $15-25 trillion to global GDP by 2035. Even a modest fraction of that value flowing through a properly governed charitable structure would represent an unprecedented philanthropic resource.

But the Hershey precedent suggests the gap between potential and realized charitable value can be enormous. Scholars at HistPhil.org, who have tracked the Hershey parallel in detail, estimate that governance failures at large charitable trusts have historically diverted between 15% and 40% of potential charitable value toward administrative costs, trustee enrichment, and mission-misaligned expenditure. Applied to OpenAI’s trajectory, that range implies a potential public value loss exceeding $250 billion over a 20-year horizon—larger than the annual GDP of many mid-sized economies.
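The arithmetic behind a figure of that size can be sketched directly. The model below is purely illustrative, not the HistPhil.org methodology: only the $34 billion current stake and the 15-40% diversion range come from the discussion above, while the growth rate and the compounding assumption are hypothetical placeholders.

```python
# Back-of-envelope downside model for the Foundation's charitable stake.
# Only the $34B stake and the 15-40% diversion range come from the text above;
# the growth rate and horizon shape are illustrative assumptions.

current_stake = 34e9          # ~26% of a ~$130B valuation
annual_growth = 0.20          # assumed compound growth rate (hypothetical)
years = 20                    # the 20-year horizon discussed above
diversion_low, diversion_high = 0.15, 0.40  # historical diversion range

# Project the stake forward, then apply the diversion range to estimate
# how much potential charitable value governance failure could consume.
future_value = current_stake * (1 + annual_growth) ** years
loss_low = future_value * diversion_low
loss_high = future_value * diversion_high

print(f"Projected stake: ${future_value / 1e12:.2f}T")
print(f"Potential diverted value: ${loss_low / 1e9:.0f}B - ${loss_high / 1e9:.0f}B")
```

Under these assumptions the diverted-value range comfortably brackets the $250 billion figure; a more conservative growth assumption shrinks it proportionally, which is exactly why the estimate should be read as an order of magnitude rather than a forecast.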

This is why the regulatory dimension matters so profoundly.

The Regulatory Frontier: U.S. vs. EU Approaches to AI Charity

American nonprofit law was not designed for entities like OpenAI. The legal scaffolding governing charitable trusts—built incrementally from the 1969 Tax Reform Act through various state attorney general statutes—assumes a relatively stable enterprise with predictable revenue streams and defined charitable outputs. OpenAI is none of these things. It operates at the intersection of defense contracting, consumer software, and scientific research, in a market where the underlying technology is evolving faster than any regulatory framework can track.

The European Union’s approach, by contrast, builds AI governance into product and deployment regulation rather than entity structure. The EU AI Act, fully operative by 2026, imposes obligations on AI systems regardless of the corporate form of their developers. A Public Benefit Corporation operating in Europe faces the same high-risk AI obligations as a shareholder-maximizing competitor. This structural neutrality has advantages: it prevents regulatory arbitrage where companies adopt charitable structures primarily to access regulatory goodwill.

The divergence creates a genuine cross-border governance problem. A company structured to satisfy California’s attorney general may simultaneously face EU compliance requirements that presuppose entirely different accountability mechanisms. For international researchers tracking AI philanthropy challenges and AGI public interest governance, this regulatory patchwork is arguably the most consequential design problem of the next decade.

What History’s Verdict on Hershey Actually Says

It would be unfair—and inaccurate—to characterize the Hershey Trust as a failure. The Milton Hershey School today serves approximately 2,200 students annually, providing free education, housing, and healthcare to children from low-income families. That outcome is real, durable, and directly attributable to the trust structure Milton Hershey designed. The governance pathologies that have periodically afflicted the trust have not, ultimately, destroyed its mission.

But this is precisely the danger of using Hershey as a template for optimism. The trust survived its governance crises because Pennsylvania’s attorney general had clear jurisdictional authority, because the Hershey Company’s public listing created external accountability, and because the charitable mission was concrete enough to defend in court. Educating low-income children is an unambiguous charitable purpose. “Ensuring that artificial general intelligence benefits all of humanity” is not.

To its architects, the vagueness of OpenAI’s charitable mandate is a feature: it provides flexibility to pursue the company’s evolving commercial and research agenda under a philanthropic umbrella. To governance scholars, it is a vulnerability. Vague mandates are harder to enforce, easier to reinterpret, and more susceptible to capture by the very commercial interests they nominally constrain. As Vox’s analysis of the nonprofit-to-PBC transition noted, the devil is almost always in the enforcement mechanism, not the stated mission.

The Forward View: What Investors and Policymakers Must Demand

The risks embedded in OpenAI’s public benefit corporation structure are not an argument against the structure’s existence. They are an argument for the kind of rigorous, institutionalized oversight that the structure currently lacks.

What would adequate governance look like? At minimum, it would require independent audit of the Foundation’s charitable expenditures by bodies with no commercial relationship to OpenAI. It would require clear, justiciable standards for what constitutes mission-aligned versus mission-diverting Foundation activity. It would require mandatory disclosure of board member relationships—commercial, financial, and social—with OpenAI PBC. And it would require international coordination between U.S. state attorneys general and EU regulatory bodies to prevent jurisdictional arbitrage.

None of these mechanisms currently exist in robust form. The California AG’s 2025 concessions are a beginning, not an architecture.

For AI investors, the governance question is increasingly a financial one. Companies operating under poorly structured philanthropic control have historically underperformed market expectations when governance conflicts surface—as Hershey’s periodic crises have demonstrated. For policymakers in Washington, Brussels, and beyond, the OpenAI model represents either a template for responsible AI development or a cautionary tale in the making. Which it becomes depends almost entirely on decisions made in the next three to five years, before the company’s commercial scale makes course correction prohibitively difficult.

Milton Hershey built something remarkable and something flawed in the same gesture. A century later, those flaws are still being litigated. The architects of OpenAI’s charitable gamble would do well to study that inheritance—not for reassurance, but for warning.
