What a Chocolate Company Can Tell Us About OpenAI’s Risks: Hershey’s Legacy and the AI Giant’s Charitable Gamble
The parallels between Milton Hershey’s century-old trust and OpenAI’s restructuring reveal uncomfortable truths about power, philanthropy, and the future of artificial intelligence governance.
In 2002, the board of the Hershey Trust quietly floated a plan that would have upended a century of carefully constructed philanthropy. They proposed selling the Hershey Company—the chocolate empire—to Wrigley or Nestlé for somewhere north of $12 billion. The proceeds would have theoretically enriched the Milton Hershey School, the boarding school for low-income children that the company’s founder had dedicated his fortune to sustaining. It was, on paper, an act of fiscal prudence. In practice, it was a near-catastrophe—one that Pennsylvania’s attorney general halted amid public outcry, conflict-of-interest investigations, and the uncomfortable revelation that some trust board members had rather too many ties to the acquiring parties.
The deal collapsed. But the architecture that made such a maneuver possible—a charitable trust wielding near-absolute voting control over a publicly traded company, insulated from traditional accountability structures—never changed.
Fast forward two decades, and a strikingly similar structure is taking shape at the frontier of artificial intelligence. OpenAI’s 2025 restructuring into a Public Benefit Corporation, with a newly formed OpenAI Foundation holding approximately 26% of equity in a company now valued at roughly $500 billion, has drawn comparisons from governance scholars, philanthropic historians, and antitrust economists alike. The comparison with Hershey is not merely rhetorical—it is, structurally and legally, one of the most instructive precedents available to anyone trying to understand where this gamble leads.
The Hershey Precedent: A Century of Sweet Success and Bitter Disputes
Milton Hershey was not a villain. He was, by most accounts, a genuinely idealistic industrialist who built a company town in rural Pennsylvania, provided workers with housing, schools, and parks, and then—with no children of his own—donated the bulk of his fortune to a trust that would fund the Milton Hershey School in perpetuity. When he died in 1945, the trust he established owned the majority of Hershey Foods Corporation stock. That arrangement was grandfathered under the 1969 Tax Reform Act, which capped charitable foundation holdings in for-profit companies at 20% for new entities—but allowed existing arrangements to stand.
The result, still operative today: the Hershey Trust controls roughly 80% of Hershey’s voting power while holding approximately $23 billion in assets. It is one of the most concentrated governance arrangements in American corporate history. And it has produced, over the decades, a remarkable catalogue of governance pathologies—self-perpetuating boards, lavish trustee compensation, conflicts of interest, and the periodic temptation to treat a $23 billion asset base as something other than a charitable instrument.
The 2002 sale attempt was the most dramatic episode, but hardly the only one. Pennsylvania’s attorney general has intervened repeatedly. A 2016 investigation found board members had approved millions in questionable real estate transactions. Trustees have cycled in and out amid ethics violations. And yet the fundamental structure—concentrated voting control in a charitable entity, largely exempt from the market discipline that shapes ordinary corporations—persists.
This is the template against which OpenAI’s new architecture deserves to be measured.
OpenAI’s Charitable Gamble: Anatomy of the New Structure
When Sam Altman and the OpenAI board announced the company’s transition to a capped-profit and then Public Benefit Corporation model, they framed it as a solution to a genuine tension: how do you raise the capital required to develop artificial general intelligence—measured in the tens of billions—while maintaining a mission ostensibly oriented toward humanity rather than shareholders?
The answer they arrived at is, structurally, closer to Hershey than to Google. Under the restructured arrangement, the OpenAI Foundation holds approximately 26% of equity in OpenAI PBC, a stake worth roughly $130 billion at the company’s current ~$500 billion valuation—making it, by asset size, larger than the Gates Foundation, which manages roughly $70 billion. Microsoft retains approximately 27% equity. Employees and other investors hold the remainder under various compensation and vesting structures.
The Foundation’s stated mandate is to direct resources toward health, education, and AI resilience philanthropy—a mission broad enough to accommodate almost any expenditure. Crucially, as California Attorney General Rob Bonta’s 2025 concessions made clear, the restructuring required commitments around safety and asset protection, but the precise mechanisms for enforcing those commitments remain opaque. Bonta’s office won language requiring that charitable assets not be diverted for commercial benefit—a standard that sounds robust until you consider how difficult it is to operationalize when the “charitable” entity is the commercial enterprise.
The charitable-governance risks embedded in this structure are not hypothetical. They are legible from history.
The Governance Gap: Where Philanthropy Ends and Power Begins
| Feature | Hershey Trust | OpenAI Foundation |
|---|---|---|
| Equity stake | ~80% voting control | ~26% equity (~$130B) |
| Total assets | ~$23B | ~$130B (at current valuation) |
| Regulatory exemption | 1969 Tax Reform Act grandfathered | California AG concessions (2025) |
| Oversight body | Pennsylvania AG | California AG + FTC (emerging) |
| Primary beneficiary | Milton Hershey School | Health, education, AI resilience |
| Board independence | Recurring conflicts of interest | Overlapping board memberships |
| Market accountability | Partial (listed company) | Limited (PBC structure) |
The comparison table above reveals a foundational asymmetry. Hershey, for all its governance problems, operates within a framework where the underlying company is publicly listed, analysts scrutinize quarterly earnings, and the attorney general of Pennsylvania has decades of institutional practice monitoring the trust. OpenAI is a private company. Its Foundation’s equity is illiquid. Its valuation is determined by private funding rounds, not public markets. And the regulatory apparatus designed to oversee it is, bluntly, improvising.
Critics have been vocal. The Midas Project, a nonprofit focused on AI accountability, has argued that the nonprofit governance model OpenAI has constructed creates precisely the conditions for what they term “mission drift under incentive pressure”—a dynamic where the commercial imperatives of a company valued at $500 billion gradually subordinate the charitable mandate of its controlling foundation. This is not speculation; it echoes the documented history of large charitable trusts that have governed commercially valuable enterprises.
Bret Taylor, OpenAI’s board chair, has offered the counter-argument: that the Foundation structure provides a durable check against pure profit maximization, creating legally enforceable obligations that a traditional corporation could simply disclaim. In an era where AI companies face pressure to ship products faster than safety research can validate them, Taylor argues, structural constraints matter.
Both positions contain truth. The question is which force—structural obligation or commercial gravity—proves stronger over the decade ahead.
Modeling the Economic Downside: The $250 Billion Question
What does it actually cost if the charitable mission is subordinated to commercial interests? The figure is not immaterial.
The Foundation’s equity stake, at current valuation, represents approximately $130 billion in charitable assets. If OpenAI achieves the kind of transformative commercial success its investors are pricing in—scenarios in which AGI-adjacent systems generate trillions in economic value—the Foundation’s stake could appreciate dramatically. Some economists modeling AI’s macroeconomic impact have suggested transformative AI could contribute $15-25 trillion to global GDP by 2035. Even a modest fraction of that value flowing through a properly governed charitable structure would represent an unprecedented philanthropic resource.
But the Hershey precedent suggests the gap between potential and realized charitable value can be enormous. Scholars at HistPhil.org, who have tracked the OpenAI-Hershey comparison in detail, estimate that governance failures at large charitable trusts have historically diverted between 15% and 40% of potential charitable value toward administrative costs, trustee enrichment, and mission-misaligned expenditure. Applied to OpenAI’s trajectory, that range implies a potential public value loss exceeding $250 billion over a 20-year horizon—larger than the annual GDP of many mid-sized economies.
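A back-of-the-envelope sketch makes that arithmetic concrete. Only the stake size and the 15-40% diversion range come from the figures above; the appreciation multiple is a hypothetical input, not a forecast.

```python
# Back-of-the-envelope sketch of charitable value at risk.
# The appreciation multiple is an illustrative assumption, not a forecast.

foundation_stake = 130e9                    # ~26% of OpenAI at a ~$500B valuation
appreciation_multiple = 5                   # hypothetical 20-year growth scenario
diversion_low, diversion_high = 0.15, 0.40  # historical diversion range cited above

potential_value = foundation_stake * appreciation_multiple
loss_low = potential_value * diversion_low
loss_high = potential_value * diversion_high

print(f"Potential charitable value: ${potential_value / 1e9:,.0f}B")
print(f"Implied diversion range: ${loss_low / 1e9:,.0f}B to ${loss_high / 1e9:,.0f}B")
# -> $650B potential; $98B to $260B diverted. The top of that range clears $250B.
```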
This is why the regulatory dimension matters so profoundly.
The Regulatory Frontier: U.S. vs. EU Approaches to AI Charity
American nonprofit law was not designed for entities like OpenAI. The legal scaffolding governing charitable trusts—built incrementally from the 1969 Tax Reform Act through various state attorney general statutes—assumes a relatively stable enterprise with predictable revenue streams and defined charitable outputs. OpenAI is none of these things. It operates at the intersection of defense contracting, consumer software, and scientific research, in a market where the underlying technology is evolving faster than any regulatory framework can track.
The European Union’s approach, by contrast, builds AI governance into product and deployment regulation rather than entity structure. The EU AI Act, fully operative by 2026, imposes obligations on AI systems regardless of the corporate form of their developers. A Public Benefit Corporation operating in Europe faces the same high-risk AI obligations as a shareholder-maximizing competitor. This structural neutrality has advantages: it prevents regulatory arbitrage where companies adopt charitable structures primarily to access regulatory goodwill.
The divergence creates a genuine cross-border governance problem. A company structured to satisfy California’s attorney general may simultaneously face EU compliance requirements that presuppose entirely different accountability mechanisms. For international researchers tracking the challenges of AI philanthropy and the public-interest governance of AGI, this regulatory patchwork is arguably the most consequential design problem of the next decade.
What History’s Verdict on Hershey Actually Says
It would be unfair—and inaccurate—to characterize the Hershey Trust as a failure. The Milton Hershey School today serves approximately 2,200 students annually, providing free education, housing, and healthcare to children from low-income families. That outcome is real, durable, and directly attributable to the trust structure Milton Hershey designed. The governance pathologies that have periodically afflicted the trust have not, ultimately, destroyed its mission.
But this is precisely the danger of using Hershey as a template for optimism. The trust survived its governance crises because Pennsylvania’s attorney general had clear jurisdictional authority, because the Hershey Company’s public listing created external accountability, and because the charitable mission was concrete enough to defend in court. Educating low-income children is an unambiguous charitable purpose. “Ensuring that artificial general intelligence benefits all of humanity” is not.
The vagueness of OpenAI’s charitable mandate is, to its architects, a feature—it provides flexibility to pursue the company’s evolving commercial and research agenda under a philanthropic umbrella. To governance scholars, it is a vulnerability. Vague mandates are harder to enforce, easier to reinterpret, and more susceptible to capture by the very commercial interests they nominally constrain. As Vox’s analysis of the nonprofit-to-PBC transition noted, the devil is almost always in the enforcement mechanism, not the stated mission.
The Forward View: What Investors and Policymakers Must Demand
The risks embedded in OpenAI’s public benefit corporation structure are not an argument against the structure’s existence. They are an argument for the kind of rigorous, institutionalized oversight that the structure currently lacks.
What would adequate governance look like? At minimum, it would require independent audit of the Foundation’s charitable expenditures by bodies with no commercial relationship to OpenAI. It would require clear, justiciable standards for what constitutes mission-aligned versus mission-diverting Foundation activity. It would require mandatory disclosure of board member relationships—commercial, financial, and social—with OpenAI PBC. And it would require international coordination between U.S. state attorneys general and EU regulatory bodies to prevent jurisdictional arbitrage.
None of these mechanisms currently exist in robust form. The California AG’s 2025 concessions are a beginning, not an architecture.
For AI investors, the governance question is increasingly a financial one. Companies operating under poorly structured philanthropic control have historically underperformed market expectations when governance conflicts surface—as Hershey’s periodic crises have demonstrated. For policymakers in Washington, Brussels, and beyond, the OpenAI model represents either a template for responsible AI development or a cautionary tale in the making. Which it becomes depends almost entirely on decisions made in the next three to five years, before the company’s commercial scale makes course correction prohibitively difficult.
Milton Hershey built something remarkable and something flawed in the same gesture. A century later, those flaws are still being litigated. The architects of OpenAI’s charitable gamble would do well to study that inheritance—not for reassurance, but for warning.
Wall Street Is Betting Against Private Credit — and That Should Worry Everyone
When the architects of the private credit boom begin selling instruments that profit from its distress, the market has entered a new and more dangerous phase.
There is an old rule of thumb in credit markets: the moment the banks that helped build a structure start quietly pricing in its failure, it is time to pay very close attention. That moment arrived on April 13, 2026, when the S&P CDX Financials Index — ticker FINDX — began trading, giving Wall Street its first standardised credit-default swap benchmark explicitly linked to the private credit market. JPMorgan Chase, Bank of America, Barclays, Deutsche Bank, Goldman Sachs, and Morgan Stanley are all distributing the product. These are not peripheral players hedging tail risks. These are the same institutions that have spent a decade co-investing in, lending to, and marketing the very asset class they now offer clients a streamlined mechanism to short.
That is the headline. The deeper story is more unsettling.
The Product Nobody Was Supposed to Need
Credit-default swaps are, at their most basic, financial insurance contracts — the buyer pays a premium; the seller compensates the buyer if a specified borrower defaults. They became infamous in 2008, when an entire shadow banking system imploded partly because CDS had been written so liberally, by parties with no direct exposure to the underlying risk, that protection was illusory rather than real. What is remarkable about the CDX Financials launch is not the instrument itself but what its very existence confesses: private credit has grown so large, so interconnected, and now so stressed that the market has concluded it needs — finally — a public, liquid, standardised mechanism to hedge against its unravelling.
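For readers who have never traded these instruments, a minimal sketch, with invented figures, shows the mechanics from the protection buyer’s side:

```python
# Minimal sketch of single-name CDS cash flows, seen from the protection buyer.
# Simplified on purpose: quarterly premiums, cash settlement at (1 - recovery),
# no accrued-premium or discounting details. All numbers are illustrative.

def cds_buyer_cashflows(notional, spread_bps, quarters,
                        default_quarter=None, recovery_rate=0.40):
    """Quarterly net cash flows for the buyer (negative = premium paid)."""
    quarterly_premium = notional * (spread_bps / 10_000) / 4
    flows = []
    for q in range(1, quarters + 1):
        if q == default_quarter:
            # Credit event: buyer collects notional minus recovery; premiums stop.
            flows.append(notional * (1 - recovery_rate))
            break
        flows.append(-quarterly_premium)
    return flows

# $10M of protection at 350bps for five years; the name defaults in quarter 6:
# five premium payments of $87,500, then a $6M settlement.
print(cds_buyer_cashflows(10_000_000, 350, 20, default_quarter=6))
```

Protection on an equally weighted index like FINDX works analogously, with the exposure spread across the constituent names rather than concentrated in one.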
According to S&P Dow Jones Indices, the new FINDX comprises 25 North American financial entities, including banks, insurers, real estate investment trusts, and business development companies (BDCs). Approximately 12% of the equally weighted index—three of its 25 names—is tied to private credit fund managers: Apollo Global Management, Ares Management, and Blackstone. The index rises in value as credit sentiment toward its constituent entities deteriorates. In practical terms: buy protection on FINDX, and you profit when the private credit ecosystem comes under pressure.
Nicholas Godec, head of fixed income tradables and commodities at S&P Dow Jones Indices, described the launch as “the first instance of CDS linked to BDCs, thereby providing CDS linked to the private credit market.” That phrasing — careful, bureaucratic, almost bloodless — belies the signal embedded in the timing.
The Numbers Behind the Anxiety
To understand why this product exists, you need to understand the scale and velocity of the stress currently moving through private credit. The numbers, as of Q1 2026, are striking.
The Financial Times reported that U.S. private credit fund investors submitted a total of $20.8 billion in redemption requests in the first quarter alone — roughly 7% of the approximately $300 billion in assets held by the relevant non-traded BDC vehicles. This is not a trickle. Carlyle’s flagship Tactical Private Credit Fund (CTAC) received redemption requests equivalent to 15.7% of its assets in Q1, more than three times its 5% quarterly limit. Carlyle, like many of its peers, honoured only the cap and deferred the rest. Blue Owl’s Credit Income Corp saw shareholders request withdrawals equivalent to 21.9% of its shares in the three months to March 31 — an extraordinary figure that prompted Moody’s to revise its outlook on the fund from stable to negative. Blue Owl, Blackstone, KKR, Apollo, and Ares have all faced redemption queues this cycle.
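The gate mechanics are worth making precise. A minimal sketch, assuming a 5% quarterly cap like Carlyle’s, of how requests get prorated and deferred:

```python
# Sketch of a quarterly redemption gate in a non-traded BDC (illustrative).
# When requests exceed the cap, each investor is filled pro rata and the
# unfilled remainder rolls into a queue for future quarters.

def apply_quarterly_gate(nav, requests, gate_pct=0.05):
    """requests maps investor -> dollar redemption request."""
    capacity = nav * gate_pct
    total_requested = sum(requests.values())
    if total_requested <= capacity:
        return requests, {}
    fill_ratio = capacity / total_requested
    filled = {k: v * fill_ratio for k, v in requests.items()}
    deferred = {k: v - filled[k] for k, v in requests.items()}
    return filled, deferred

# Requests worth 15.7% of a $1B NAV against a 5% gate: roughly two-thirds
# of every request is deferred, whoever submitted it.
filled, deferred = apply_quarterly_gate(
    nav=1_000_000_000,
    requests={"institution_a": 100_000_000, "institution_b": 57_000_000},
)
print(f"filled: ${sum(filled.values()):,.0f}  deferred: ${sum(deferred.values()):,.0f}")
```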
Moody’s has since downgraded its outlook on the entire U.S. BDC sector from “stable” to “negative” — a formal acknowledgement that what was once a bull-market darling is now contending with structural liquidity stresses that its semi-liquid product architecture was never fully designed to survive.
Meanwhile, the credit quality of the underlying loans is deteriorating in ways that the sector’s historical marketing materials simply did not anticipate. UBS strategists have projected that private credit default rates could rise by as much as 3 percentage points in 2026, far outpacing the expected 1-percentage-point rise in leveraged loans and high-yield bonds. Morgan Stanley has warned that direct lending default rates could surge as high as 8%, compared with a historical average of 2–2.5%. Payment-in-kind loans — where borrowers pay interest in additional debt rather than cash — are rising, a classic signal of borrowers under duress who are conserving liquidity at the expense of lender economics.
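The PIK dynamic deserves a concrete illustration, because it flatters reported yields while starving lenders of cash. A toy sketch, with an assumed 12% PIK coupon:

```python
# Toy sketch of payment-in-kind (PIK) accrual: unpaid interest is capitalized
# into principal, so the lender's claim compounds while cash income stays zero.
# The 12% coupon is an assumption for illustration.

balance = 100.0
pik_rate = 0.12
for year in range(1, 4):
    balance *= 1 + pik_rate
    print(f"Year {year}: loan balance {balance:.1f}, cash interest received: 0.0")
# After three years the claim has grown ~40%, but every dollar of that "yield"
# exists only on paper until (and unless) the borrower can refinance or repay.
```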
Perhaps most damning: in late 2025, BlackRock’s TCP Capital Corp reported that writedowns on certain portfolio loans reduced its net asset value by 19% in a single quarter.
The AI Dislocation: A Crisis Within the Crisis
No serious analysis of this stress cycle can ignore the role of artificial intelligence in accelerating it. Roughly 20% of BDC portfolio exposure, according to Jefferies research, is concentrated in software businesses — predominantly SaaS companies that private credit firms financed at generous valuations during the zero-interest-rate boom years. The rapid advance of AI tools capable of automating software workflows has sparked a brutal re-evaluation of those companies’ competitive moats, revenue durability, and, ultimately, their debt-service capacity.
Blue Owl, one of the largest direct lenders to the tech-software sector, has faced redemption requests that are — in the words of its own investor communications — reflective of “heightened negative sentiment towards direct lending” driven in part by AI-sector uncertainty. The irony is profound: private credit funds that rushed to finance the digital economy are now discovering that the same technological disruption they helped capitalise is undermining the creditworthiness of their borrowers.
This is not a transient sentiment shock. According to Man Group’s private credit team, private credit loans are originated with the “express purpose of being held to maturity.” That structural illiquidity — the attribute that was once marketed as a yield premium — is now the attribute that makes the sector’s stress harder to contain. When your borrowers are software companies facing existential competitive threats and your investors are retail wealth clients who were sold on liquidity promises, the collision produces exactly what we are now observing: gating, deferred redemptions, and a derivatives market emerging to price what the underlying funds cannot.
What Wall Street Is Really Saying
The CDX Financials launch is not merely a new product. It is a confession.
When the Wall Street Journal first reported the index’s development, analysts initially framed it as a neutral hedging tool — a risk management mechanism that sophisticated market participants had long wanted access to. And in the narrow technical sense, that framing is accurate. Hedge funds with concentrated exposure to BDC equity positions, pension funds with indirect private credit allocations, and banks with syndicated loan books have legitimate demand for an instrument that allows them to offset their exposure.
But consider the posture this represents. JPMorgan, Goldman Sachs, Morgan Stanley, and Barclays built, distributed, and marketed private credit products to institutional and retail clients throughout the 2015–2024 expansion. They collected billions in fees doing so. They celebrated the asset class’s growth — the private credit market has expanded to more than $3 trillion in AUM — as evidence of financial innovation serving real-economy borrowers who couldn’t access public markets. Those same institutions have now co-created a benchmark instrument whose primary utility is to profit, or hedge risk, when that market contracts.
This is not cynicism — it is rational risk management. But it is also a market signal of extraordinary clarity: the largest, best-informed participants in global credit markets have concluded that the probability-weighted downside in private credit is now large enough to justify the cost and complexity of derivative infrastructure. You do not build a CDX index for a market in good health.
Regulatory Fault Lines and the Retail Investor Problem
Perhaps the most underappreciated dimension of this crisis is distributional. Private credit’s expansion over the last decade was partly funded by a deliberate push by asset managers into the wealth management channel — retail and high-net-worth investors who were attracted by the yield premium over public credit and the low apparent volatility of funds that mark their assets infrequently and to model rather than to market.
That low apparent volatility, as analysts at Robert A. Stanger & Co. have pointed out, was partly a function of the valuation methodology rather than the underlying risk. BDCs in the non-listed space can appear stable in their net asset values right up until the moment they are not — and the quarterly redemption gates now being enforced create a first-mover advantage for those who recognise the stress earliest. Institutional investors — the “small but wealthy group” who have been demanding exits — have done exactly that. Retail investors, who typically receive quarterly statements and rely on fund managers’ own assessments of value, are disproportionately likely to be last out.
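A toy sketch makes the smoothing effect visible. Assume the reported mark moves only a quarter of the way toward the market value each period; both the price path and the adjustment speed below are invented for illustration.

```python
# Why model-marked NAVs look less volatile than the market value of the same
# assets. The price path and the 25% adjustment speed are illustrative.
import statistics

market_values = [100, 97, 92, 95, 88, 84, 86, 80]  # assumed true quarterly marks
reported_nav, reported_path = market_values[0], []
for mv in market_values:
    reported_nav += 0.25 * (mv - reported_nav)  # partial adjustment toward market
    reported_path.append(round(reported_nav, 1))

print("reported NAV path:", reported_path)
print("market volatility:", round(statistics.pstdev(market_values), 1),
      "| reported volatility:", round(statistics.pstdev(reported_path), 1))
# The reported series is both smoother and stale: anyone redeeming at the
# reported NAV while the true mark is lower is being paid by whoever stays.
```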
The Securities and Exchange Commission has been examining BDC valuation practices and the structural question of whether semi-liquid products are appropriately matched to the liquidity expectations of retail investors. The CDX Financials launch materially increases the regulatory pressure surface. It is considerably harder to argue that private credit is a stable, low-volatility asset class suitable for retail distribution when the major banks are simultaneously selling derivatives that facilitate bearish bets on its constituent managers.
The regulatory trajectory points toward tighter disclosure requirements on BDC valuation methodologies, stricter rules on redemption queue transparency, and potentially new suitability standards for the sale of semi-liquid alternatives to retail investors. None of these changes will arrive in time to protect those already queuing to exit.
The European and EM Dimension
The stress in U.S. private credit has a global undertow that commentary focused on Wall Street mechanics tends to underweight. European direct lenders — many of them subsidiaries or affiliates of the same U.S. managers now under pressure — have similarly expanded into software, healthcare services, and leveraged buyout financing across France, Germany, the Nordics, and the UK. The Bank for International Settlements has flagged the opacity and rapid growth of private credit in advanced economies as a potential systemic risk vector, precisely because the infrequent and model-dependent valuation of these assets makes cross-border contagion difficult to detect in real time.
Emerging market economies face a different but related challenge. Domestic sovereign and corporate borrowers who were priced out of traditional bank lending and public bond markets during periods of dollar strength and risk-off sentiment found in private credit an alternative source of capital. As U.S. private credit funds come under redemption pressure and face potential portfolio de-risking, the marginal withdrawal of credit availability to EM borrowers represents a secondary shock that will not appear in U.S. financial statistics but will very much appear in the economic data of the borrowing countries.
The CDX Financials, for now, is a North American product focused on North American entities. But if the private credit stress deepens, the transmission mechanism to European and EM markets will operate through the same channel it always does: abrupt, disorderly credit withdrawal by institutions that had presented themselves to borrowers as patient, relationship-oriented capital.
The 2026–2027 Outlook: Three Scenarios
Scenario one: Controlled decompression. The redemption pressure peaks in mid-2026 as Q1 earnings are digested, valuations are reset modestly, and AI sector concerns stabilise. The CDX Financials remains a niche hedging tool with modest trading volumes. Default rates rise but remain below 5%. Fund managers gradually improve their liquidity management frameworks, and the episode is remembered as a stress test that the sector passed — awkwardly, but passed.
Scenario two: Structural repricing. Default rates reach the 6–8% range forecast by Morgan Stanley. Fund managers are forced to sell assets to meet redemptions, creating mark-to-market pressure that triggers further investor withdrawals — a slow-motion version of the bank run dynamic, sketched numerically after these scenarios. The CDX Financials becomes a liquid, actively traded instrument as hedge funds build short theses against specific managers. The SEC intervenes with new rules. The retail wealth channel for private credit permanently contracts, and the asset class re-professionalises toward institutional-only distribution.
Scenario three: Systemic cascade. A rapid confluence of AI-driven borrower defaults, leveraged BDC balance sheets, and sudden insurance company mark-to-market requirements — recall that insurers have become significant private credit allocators — creates a feedback loop that overwhelms the quarterly gate mechanisms. This scenario remains tail-risk rather than base case, but it is materially more probable today than it was eighteen months ago, and the CDX Financials market, whatever its current illiquidity, provides the mechanism through which this scenario’s probability will be priced in real time.
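The feedback loop at the heart of scenario two can be sketched with a toy simulation. Every parameter below is an illustrative assumption, not a calibrated estimate.

```python
# Toy simulation of the redemption/fire-sale loop in scenario two.
# Parameters are invented for illustration; nothing here is calibrated.

nav = 100.0
redemption_rate = 0.08      # initial quarterly redemption demand, 8% of NAV
fire_sale_discount = 0.05   # loss per dollar of assets sold under pressure
sensitivity = 0.5           # added redemption demand per point of NAV markdown

for quarter in range(1, 9):
    sold = nav * redemption_rate            # assets liquidated to meet the queue
    markdown = sold * fire_sale_discount    # discount crystallized on the sale
    nav -= markdown
    redemption_rate += sensitivity * (markdown / nav)  # stress begets more exits
    print(f"Q{quarter}: NAV {nav:.2f}, next-quarter demand {redemption_rate:.1%}")
# The loop compounds gradually rather than explosively -- which is exactly the
# "slow-motion bank run" shape the scenario describes.
```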
The Signal in the Noise
There is a temptation, in moments like this, to reach for the 2008 parallel — the credit-default swaps written on mortgage-backed securities, the opacity, the interconnection, the eventual reckoning. That parallel is not fully appropriate. Private credit, for all its stress, is not leveraged to the degree that pre-crisis structured finance was, and the counterparties on the other side of these loans are corporate borrowers rather than millions of individual homeowners facing income shocks. The system is not on the edge of a cliff.
But the more honest framing is this: private credit grew from approximately $500 billion to more than $3 trillion in a decade, fuelled by zero interest rates, a regulatory environment that pushed lending off bank balance sheets, and an institutional appetite for yield that sometimes outpaced rigour. It attracted retail investors on the promise of bond-like returns with equity-like stability. It financed technology businesses at valuations that assumed a competitive landscape that artificial intelligence is now radically disrupting. And it did all of this in a structure — the non-traded BDC, the evergreen fund — that made liquidity appear more plentiful than it was.
The CDX Financials is what happens when the market runs the numbers on all of that and concludes it wants an exit option. For investors still inside these funds, that signal deserves very careful attention.
Conclusion: What Sophisticated Investors Should Do Now
The launch of private credit derivatives is not, by itself, a crisis. It is a maturation — the belated arrival of price discovery infrastructure into a corner of credit markets that had, until now, avoided the bracing discipline of public market scrutiny. In that sense, the CDX Financials is a healthy development. Transparency, even painful transparency, is preferable to opacity.
But for investors with allocations to non-traded BDCs, evergreen private credit funds, or insurance products with significant private credit exposure, several questions now demand answers that fund managers may be reluctant to provide. What is the true liquidity profile of the underlying loan portfolio? What percentage of the portfolio is in payment-in-kind status? How much of the nominal NAV reflects model-based valuations that have not been stress-tested against the current AI-driven sector disruption? And — most importantly — what is the fund’s plan if redemption requests in Q2 and Q3 2026 do not moderate?
The banks selling CDX Financials protection have already decided how to answer those questions for their own books. Investors would do well to ask the same questions of their own.
Agency in the Age of AI: Why Human Initiative — Not Artificial Agents — Will Define the Next Decade
On February 15, 2026, Sam Altman posted two sentences to X that encapsulated a decade of Silicon Valley ambition in a single breath. OpenAI had acquired OpenClaw, an open-source AI agent framework that could autonomously browse, code, and execute complex multi-step tasks — and its creator, Peter Steinberger, was joining the company to “bring agents to everyone.” The deal was quiet by tech-acquisition standards. No press conference. No billion-dollar number dropped to gasps at a conference. Just a pair of tweets that, read carefully, amount to a civilizational declaration: the age of artificial agents — AI systems that act on your behalf, that do rather than merely say — has arrived.
The question no one in those tweets was asking is the one that ought to keep us up at night. Not what will AI agents do for us? But what will they do to us?
Agency in the age of AI is not, at its core, a technology question. It is a human one. And across law firms, accounting houses, actuarial desks, and the laptops of twenty-four-year-olds trying to build careers in knowledge work, the contours of that question are becoming impossible to ignore.
The Rise of Autonomous Agents — And the Hidden Cost to Human Agency
“Agentic AI” is the industry’s term of the moment, and it deserves a plain-language translation: these are AI systems that do not merely answer questions but complete tasks — booking travel, filing documents, auditing spreadsheets, drafting briefs, managing inboxes — with minimal human instruction and, in many configurations, minimal human oversight. OpenAI’s Frontier platform, launched in February 2026 and described as a home for “AI coworkers,” gives enterprises AI systems with shared context, persistent memory, and permissions to act inside live business workflows.
The promise is intoxicating. The average knowledge worker, Silicon Valley’s pitch goes, will soon command a small army of autonomous agents the way a senior partner commands junior associates. Scale your output. Compress your timelines. Democratize expertise.
What this narrative conspicuously omits is what happens to the junior associates.
The hidden cost of autonomous agents is not primarily economic, though the economic costs are real and arriving faster than most forecasts anticipated. It is something harder to quantify and easier to dismiss: the erosion of the conditions under which human agency develops, deepens, and compounds over a life. The young lawyer who never drafts her first clumsy brief. The accountant who never wrestles with his first gnarly audit. The actuary who never builds intuition through the friction of getting it wrong. Agency — the capacity to act, judge, and take meaningful initiative in the world — is not innate. It is cultivated. And the cultivation requires doing the hard, error-prone, occasionally humiliating work that AI agents are now absorbing at scale.
This is not a Luddite argument. It is a developmental one. And it is urgent.
Why Lawyers, Accountants, and Actuaries Are Questioning Their Futures
The conversation has broken into the open in the corridors of professional services with a candor that would have been unthinkable three years ago. Senior partners at major law firms will tell you, off the record, that they have paused or sharply curtailed junior associate hiring. The work that used to season young talent — contract review, discovery, due diligence — is being absorbed by AI agents with an efficiency that makes the economics of junior staffing almost impossible to justify.
The data corroborates what the corridors are whispering. Goldman Sachs Research reported in April 2026 that AI is erasing roughly 16,000 net U.S. jobs per month — approximately 25,000 displaced by AI substitution against 9,000 new positions created by AI augmentation. The occupations most exposed to substitution, Goldman’s economists found, include accountants and auditors, legal and administrative assistants, credit analysts, and telemarketers: precisely the entry-level and mid-career roles that have historically served as the scaffolding of professional development.
The generational impact is particularly sharp. Goldman Sachs found that unemployment among 20- to 30-year-olds in AI-exposed occupations has risen by nearly three percentage points since the start of 2025 — significantly higher than for older workers in the same fields. Entry-level hiring at the top fifteen technology companies fell 25 percent between 2023 and 2024, and continued declining through 2025. The AI-related share of layoffs discussed on S&P 500 earnings calls grew to just above 15 percent by late 2025, up sharply from the year prior.
The career advice for young professionals navigating the AI age in 2026 used to be: develop technical skills, stay adaptable, embrace tools. That advice, while still valid, has become insufficient. What young professionals now face is a more fundamental disruption: the removal of the proving grounds where professional judgment is forged. You cannot develop the discernment of a seasoned litigator if the briefs are always already written. You cannot build the instincts of a skilled auditor if the anomalies are always already flagged.
The global picture adds further texture. In Southeast Asia, AI agents replacing jobs in BPO (business process outsourcing) — a sector employing millions across the Philippines, India, and Vietnam — are compressing opportunities for a generation that had, through those very jobs, entered the formal economy and begun building transferable skills. In sub-Saharan Africa, where formal professional employment is expanding and could absorb more talent, the risk is that AI-agent adoption by multinationals short-circuits the very job categories through which that transition happens. The AI agents replacing lawyers, accountants, and junior professionals in New York and London do not stay politely within American and European borders.
Pew’s 2025–2026 Data: Americans Demand More Control Over AI
The public has registered its discomfort — clearly, consistently, and in terms that policymakers should find impossible to dismiss.
Pew Research Center’s June 2025 survey of 5,023 U.S. adults found that 50 percent say the increased use of AI in daily life makes them feel more concerned than excited — up from 37 percent in 2021. More than half of respondents (57 percent) rated the societal risks of AI as high, against just 25 percent who say the benefits are similarly high. Majorities reported pessimism about AI’s impact on human creativity (53 percent say it will worsen people’s ability to think creatively) and meaningful relationships (50 percent say it will worsen our capacity to form them).
These are not the views of technophobes. They are the views of citizens watching something happen to their world and struggling to articulate, against the momentum of trillion-dollar valuations and breathless press coverage, what exactly it is they are losing.
The Pew data on control is the most politically significant finding of recent years. Fifty-five percent of U.S. adults say they want more control over how AI is used in their own lives. Among AI experts themselves — people who have built careers in the field — the figure is 57 percent. The demand for human agency in the AI era is not a fringe sentiment or a technophobic reflex. It crosses partisan lines, educational levels, and even the expert-layperson divide. What is remarkable is how little the policy architecture of any major government has responded to it.
In Europe, the EU AI Act has established a framework, but its enforcement mechanisms remain nascent and its treatment of agentic systems is notably underdeveloped for a technology moving at this pace. In the United States, the legislative response has been fragmented, preempted by a political environment in which AI has become entangled with culture-war dynamics that obscure rather than illuminate the actual governance questions. In China, regulatory assertiveness on AI coexists with state-directed deployment that raises its own agency concerns — for the individual citizen, not the system.
The gap between what people want — more control, more say, more human agency in the AI era — and what institutions are delivering is widening. It is into this gap that the next generation of social innovators, philanthropists, and policymakers must step.
Philanthropy’s Critical Role in Shaping AI Guardrails and Opportunity
Here is where the story gets interesting — and where institutional funders, foundations, and philanthropic capital have a genuinely historic role to play that they have, with a handful of exceptions, yet to fully embrace.
The governance of AI — particularly of agentic AI systems acting autonomously in high-stakes domains — cannot be left to the companies building it, to legislators who struggle to define a “large language model” without staff assistance, or to the uncoordinated preferences of individual consumers. The OECD and the World Economic Forum have outlined frameworks, but frameworks without funding are architectural drawings without builders.
The intersection of philanthropy and AI governance has become one of the most consequential and underfunded spaces in public life. The MacArthur Foundation, Ford Foundation, and a handful of tech-originated donors (Omidyar Network, Schmidt Futures) have begun investing in responsible AI research and policy. But the scale of investment remains dramatically misaligned with the scale of the disruption underway. According to the Brookings Institution, the communities most exposed to AI displacement — lower-income workers, first-generation professionals, workers in routine cognitive roles — are precisely those with the least access to reskilling resources, legal literacy about their rights, and political power to shape the governance conversation.
Philanthropic capital can address this at multiple levels. First, funding public dialogue: creating the forums, commissions, and civic processes through which communities can articulate what they want from AI and what they will not accept — the kind of deliberative democracy that corporate AI development timelines do not organically produce. Second, building ethical guardrails: supporting independent technical audits of AI agent systems, especially those deployed in high-stakes contexts like hiring, credit, legal aid, and healthcare. Third, investing aggressively in reskilling: not the corporate upskilling programs that optimize for the needs of existing employers, but the genuinely human-centered education investments that give people the capacity to navigate a changed economy on their own terms. Fourth, and most visibly, creating opportunity for young people — the generation that stands to be most directly affected by the removal of the proving grounds of professional learning.
The opportunity for philanthropic AI governance is not about slowing innovation. It is about ensuring that the benefits of innovation are not captured exclusively by those who already own the infrastructure, while the costs — in disrupted careers, eroded agency, and stunted development — are borne by everyone else.
Reclaiming Agency: What Young People, Leaders, and Funders Must Do Now
The future of human agency in the AI era will not be decided in Palo Alto. It will be decided in classrooms, in courtrooms, in legislative chambers, in the board rooms of foundations, and in the daily choices of individuals about which tasks they hand to machines and which they insist on doing themselves — not because machines cannot do them, but because the doing is the point.
For young professionals — the generation navigating career advice in the AI age of 2026 — the imperative is not to compete with AI agents on their own terms. That is a race designed for machines. The imperative is to cultivate what agents cannot: moral judgment, relational intelligence, contextual wisdom, creative vision, the capacity to care about what you’re doing and why. These are not soft skills. They are the hardest skills. They compound over a lifetime in ways that no model weight or token count does. Protect your learning curve fiercely. Seek out the friction that develops judgment. Resist the temptation to outsource your thinking to systems that are, however impressive, fundamentally indifferent to your growth.
For leaders — in business, government, education, and civil society — the reclamation of agency requires building institutions that are honest about trade-offs. Does AI erode human agency? In its current deployment trajectory: yes, in specific and important ways. The right response is not panic, and it is not denial. It is design. Invest in human-AI collaboration frameworks that genuinely keep humans in the loop, not as a compliance formality but as a developmental reality. Design apprenticeship and mentorship structures that survive the automation of the tasks around which they were traditionally built. Insist on AI impact assessments before deploying agentic systems in professional and educational contexts. Make the question of human development central to every AI deployment decision, not an afterthought.
For funders: this is the decade. The governance architecture being built — or not built — around agentic AI will shape the relationship between human agency and technological systems for a generation. The window for influence is not permanently open. Foundations that move early, with real capital and genuine intellectual seriousness, can help write the rules. Foundations that wait will be left funding the repair.
The global dimension matters here, too. The most consequential AI governance battles of the next decade may not be fought in Washington or Brussels, but in the Global South — in countries where the intersection of demographic youth, expanding educational access, and AI-driven disruption of professional labor markets creates conditions for either extraordinary opportunity or extraordinary waste of human potential. Philanthropic AI governance that ignores Lagos, Jakarta, and São Paulo is not global governance. It is just wealthy-country governance wearing a global mask.
The story Silicon Valley is telling about the age of AI is seductive and, in many of its details, accurate. Autonomous agents will transform professional life. Productivity will rise. Some categories of work will disappear and others will emerge. The arc, the industry insists, bends toward abundance.
What the story omits is the quality of the lives lived along that arc. The lawyer who never argued. The accountant who never judged. The twenty-three-year-old who handed her first decade of professional development to a system that learned everything and taught her nothing.
Agency in the age of AI is not a footnote to the productivity story. It is the story that matters most.
Two tweets launched the age of agentic AI. What we do next — in philanthropy, in policy, in education, in the daily texture of our professional and personal choices — will determine whether this age expands or diminishes what it means to be a capable, purposeful human being.
The question is not what AI agents will do for us. The question is what kind of agents we will choose to become.
Is Anthropic Protecting the Internet — or Its Own Empire?
Anthropic Mythos, the most powerful AI model any lab has ever disclosed, arrived this week draped in the language of altruism. Project Glasswing — the initiative through which a curated circle of Silicon Valley aristocrats gains exclusive access to Mythos — is pitched as an act of civilizational defense. The framing is elegant, the mission is genuinely urgent, and at least part of it is true. But behind the Mythos AI release lies a second story that Dario Amodei’s beautifully worded blog posts conspicuously omit: Mythos is enterprise-only not merely because Anthropic fears hackers, but because releasing it to the open internet would trigger the single greatest act of industrial-scale capability theft in the history of technology. The cybersecurity rationale is real. The economic motive is realer still. Understanding both is how you understand the AI industry in 2026.
What Anthropic Mythos Actually Does — and Why It Terrified Silicon Valley
To appreciate the gatekeeping, you must first reckon with the capability. Mythos is not an incremental model. It occupies an entirely new tier in Anthropic’s architecture — internally designated Copybara — sitting above the public Haiku, Sonnet, and Opus hierarchy that most developers work with. SecurityWeek’s detailed technical breakdown describes it as a step change so pronounced that calling it an “upgrade” is like calling the internet an “improvement” on the fax machine.
The numbers are staggering. Anthropic’s own Frontier Red Team blog reports that Mythos autonomously reproduced known vulnerabilities and generated working proof-of-concept exploits on its very first attempt in 83.1% of cases. Its predecessor, Opus 4.6, managed that feat almost never — near-0% success rates on autonomous exploit development. Engineers with zero formal security training now tell colleagues of waking up to complete, working exploits they’d asked the model to develop overnight, entirely without intervention. One test revealed a 27-year-old bug lurking inside OpenBSD — an operating system historically celebrated for its security — that would allow any attacker to remotely crash any machine running it. Axios reported that Mythos found bugs in every major operating system and every major web browser, and that its Linux kernel analysis produced a chain of vulnerabilities that, strung together autonomously, would hand an attacker complete root control of any Linux system.
Compare that to Opus 4.6, which found roughly 500 zero-days in open-source software — itself a remarkable achievement. Mythos found thousands in a matter of weeks. It then attempted to exploit Firefox’s JavaScript engine and succeeded 181 times, compared to twice for Opus 4.6.
This is also, importantly, what a head-to-head comparison between Claude Mythos and open-source models on cybersecurity tasks looks like at full resolution: no freely available model comes remotely close, and Anthropic knows it. That gap is the entire product.
The Official Narrative: “We’re Protecting the Internet”
The Anthropic enterprise-only AI decision is framed through Project Glasswing as a coordinated defensive effort — an attempt to patch the world’s most critical software before capability equivalents proliferate to hostile actors. Anthropic’s official Glasswing page commits $100 million in usage credits and $4 million in direct donations to open-source security organizations, with founding partners that read like a geopolitical alliance: Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, and Palo Alto Networks. Roughly 40 additional organizations maintaining critical software infrastructure also gain access. The initiative’s name — Glasswing, after a butterfly whose transparency makes it nearly invisible — is a metaphor for software vulnerabilities that hide in plain sight.
The security rationale for why Anthropic limited Mythos is not confected. In September 2025, a Chinese state-sponsored threat actor used earlier Claude models in what SecurityWeek documented as the first confirmed AI-orchestrated cyber espionage campaign — not merely using AI as an advisor but deploying it agentically to execute attacks against roughly 30 organizations. If that was possible with Claude’s then-current models, what becomes possible with a model that autonomously chains Linux kernel exploits at a near-perfect success rate?
Anthropic’s Logan Graham, head of the Frontier Red Team, captured the threat succinctly: imagine this level of capability in the hands of Iran in a hot war, or Russia as it attempts to degrade Ukrainian infrastructure. That is not science fiction. It is the calculus driving the controlled release. Briefings to CISA, the Commerce Department, and the Center for AI Standards and Innovation are real, however conspicuously absent the Pentagon remains from those conversations — a pointed omission given Anthropic’s ongoing legal war with the Defense Department over its blacklisting.
So yes: the security case is genuine. But it is, at most, half the story.
The Distillation Flywheel: Why Frontier Labs Are Really Gating Their Best Models
Here is the economic argument that no TechCrunch brief or Bloomberg data point has assembled cleanly: Anthropic model distillation is an existential threat to the frontier lab business model, and Mythos is as much a response to that threat as it is a cybersecurity initiative.
The mathematics of adversarial distillation are brutally asymmetric. Training a frontier model costs approximately $1 billion in compute. Successfully distilling it into a competitive student model costs an adversary somewhere between $100,000 and $200,000 — a cost advantage of 5,000-to-one or more in favor of the copier. No rate-limiting policy, no terms-of-service clause, and no click-through agreement closes that gap. The only defense is controlling access to the teacher in the first place.
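The asymmetry is simple enough to verify. The figures below are the estimates quoted in this piece, not measured costs:

```python
# Illustrative arithmetic behind the distillation asymmetry described above.
# Both figures are the article's estimates, not measured costs.

frontier_training_cost = 1_000_000_000   # ~$1B of compute for a frontier model
distill_cost_range = (100_000, 200_000)  # adversary's cost to distill a student

for cost in distill_cost_range:
    print(f"Distillation at ${cost:,}: {frontier_training_cost // cost:,}:1 advantage")
# -> 10,000:1 at the low end, 5,000:1 at the high end. No API pricing tier
#    or terms-of-service clause prices away a gap of that magnitude.
```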
Frontier lab distillation blocking is not a new concern, but 2026 has given it terrifying specificity. Anthropic publicly disclosed in February that three Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. MiniMax alone accounted for 13 million of those exchanges; Moonshot AI added 3.4 million; DeepSeek, notably, needed only 150,000 because it was targeting something far more specific: how Claude refuses things — alignment behavior, policy-sensitive responses, the invisible architecture of safety. A stripped copy of a frontier model without its alignment training, deployed at nation-state scale for disinformation or surveillance, is the nightmare scenario that animated Anthropic’s founding. It may now be unfolding in real time.
What does this have to do with Mythos being enterprise-only? Everything. A model that autonomously writes working exploits for every major OS would, if released via standard API access, provide Chinese distillation campaigns with not just conversational capability but offensive cyber capability — the very thing that makes Mythos commercially unique. Releasing Mythos at scale would be, simultaneously, the greatest act of market self-destruction and the greatest gift to adversarial state actors in the history of enterprise software. Enterprise-only access eliminates both risks at once: it monetizes the capability at maximum margin while denying it to the distillation ecosystem.
This is the distillation flywheel in action. Frontier labs gate the highest-capability models behind enterprise contracts; enterprises pay premium rates for exclusive capability access; the revenue funds the next generation of training runs; the new model is again too powerful to release openly. Each rotation of the wheel deepens the competitive moat, raises the enterprise price floor, and tightens the grip of the three dominant labs over the global AI stack.
Geopolitics at the Model Layer: The Three-Lab Alliance and the New AI Cold War
The Mythos security exploits announcement arrived within 24 hours of a Bloomberg-reported development that is arguably more consequential for the global technology order: OpenAI, Anthropic, and Google — three companies that have spent the better part of three years competing to annihilate each other — began sharing adversarial distillation intelligence through the Frontier Model Forum. The cooperation, modeled on how cybersecurity firms exchange threat data, represents the first substantive operational use of the Forum since its 2023 founding.
The breakdown of what each Chinese lab extracted from Claude reveals something remarkable: three entirely different product strategies, fingerprinted through their query patterns. MiniMax vacuumed broadly — generalist capability extraction at scale. Moonshot AI targeted the exact agentic reasoning and computer-use stack that its Kimi product has been marketing since late 2025. DeepSeek, with a comparatively tiny 150,000-exchange footprint, was almost exclusively interested in Claude’s alignment layer — how it handles policy-sensitive queries, how it refuses, how it behaves at the edges. Each lab was essentially reverse-engineering not just a model but a business plan.
The MIT research documented in December 2025 found that GLM-series models identify themselves as Claude approximately half the time when queried through certain paths — behavioral residue of distillation that no fine-tuning has fully scrubbed. US officials estimate the financial toll of this campaign in the billions annually. The Trump administration’s AI Action Plan has already called for a formal inter-industry sharing center, essentially institutionalizing what the labs are now doing informally.
The geopolitical stakes here extend far beyond corporate IP. When DeepSeek released its R1 model in January 2025 — a model widely believed to incorporate distilled knowledge from OpenAI’s infrastructure — it erased nearly $1 trillion from US and European tech stocks in a single trading session. Markets now understand something that policymakers are only beginning to grasp: control over frontier AI model capabilities is a form of strategic leverage, and distillation is a vector for transferring that leverage without a single line of export-controlled chip silicon crossing a border.
Enterprise Contracts and the New AI Treadmill
The economics of Anthropic enterprise-only AI are becoming increasingly clear as 2026 revenue data enters the public domain.
| Metric | February 2026 | April 2026 |
|---|---|---|
| Anthropic Run-Rate Revenue | $14B | $30B+ |
| Enterprise Share of Revenue | ~80% | ~80% |
| Customers Spending $1M+ Annually | 500 | 1,000+ |
| Claude Code Run-Rate Revenue | $2.5B | Growing rapidly |
| Anthropic Valuation | $380B | ~$500B+ (IPO target) |
| OpenAI Run-Rate Revenue | ~$20B | ~$24-25B |
Sources: CNBC, Anthropic Series G announcement, Sacra
Anthropic’s annualized revenue has now surpassed $30 billion — having started 2025 at roughly $1 billion — representing one of the most dramatic B2B revenue trajectories in the history of enterprise software. Sacra estimates that 80% of that revenue flows from business clients, with enterprise API consumption and reserved-capacity contracts forming the structural backbone. Eight of the Fortune 10 are now Claude customers. Four percent of all public GitHub commits are now authored by Claude Code.
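The growth arithmetic implied by the table is worth spelling out. The month count below is an approximation (early January 2025 through April 2026); everything else comes from the figures reported above:

```python
# Implied growth from the figures reported above.
start_2025 = 1.0    # run-rate at the start of 2025, $B
april_2026 = 30.0   # run-rate in April 2026, $B
months = 16         # ~January 2025 through April 2026 (approximate)

multiple = april_2026 / start_2025        # 30x
monthly = multiple ** (1 / months) - 1    # ~24% compounded monthly

enterprise_share = 0.80
enterprise_run_rate = april_2026 * enterprise_share  # ~$24B

print(f"{multiple:.0f}x in {months} months ≈ {monthly:.1%} per month")
print(f"enterprise slice of the run-rate ≈ ${enterprise_run_rate:.0f}B")
```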
What Project Glasswing does, in this context, is elegant: it creates a new category of enterprise relationship — not API access, not subscription, but strategic partnership with a frontier safety lab deploying the world’s most capable unrestricted model. The 40 organizations in the Glasswing program are not merely beta testers. They are, from a revenue architecture standpoint, being trained — habituated to Mythos-class capability before it becomes generally available, embedded in their security workflows, their CI/CD pipelines, their vulnerability management systems. By the time Mythos-class models are released at scale with appropriate safeguards, the switching cost will be prohibitive.
This is the AI treadmill: each generation of frontier capability, released exclusively to enterprise partners first, creates a loyalty layer that commoditized open-source alternatives cannot easily displace. The $100 million in Glasswing credits is not charity. It is customer acquisition at an unprecedented model tier.
The Counter-View: Responsible Deployment Has a Principled Case
It would be intellectually dishonest to leave the distillation-flywheel critique standing without challenge. The counter-argument is real, and it deserves full articulation.
Platformer’s analysis offers the most compelling version of the responsible-rollout defense: Anthropic’s founding premise was that a safety-focused lab should be the first to encounter the most dangerous capabilities, so that it could lead mitigation rather than react to catastrophe. With Mythos, that appears to be exactly what is happening. The company did not race to monetize these cybersecurity capabilities. It briefed government agencies, convened a defensive consortium, committed $4 million to open-source security projects, and staged the rollout behind a coordinated patching effort. The vulnerabilities Mythos found in Firefox, Linux, and OpenBSD are being disclosed and patched before the paper trail of their discovery becomes public — precisely the protocol that responsible security research demands.
Alex Stamos, whose expertise in adversarial security spans decades, offered the optimistic framing: if Mythos represents being “one step past human capabilities,” there is a finite pool of ancient flaws that can now be systematically found and fixed, potentially producing software infrastructure more fundamentally secure than anything achievable through traditional auditing. That is not corporate spin. It is a coherent theory of defensive AI benefit.
The Mythos AI release strategy also reflects a genuinely novel regulatory challenge: the EU AI Act’s next enforcement phase takes effect August 2, 2026, introducing incident-reporting obligations and penalties of up to 3% of global revenue for high-risk AI systems. A general release of Mythos into that environment — without governance infrastructure in place — would be commercially catastrophic as well as potentially harmful. Enterprise-gated release buys time for both the regulatory and technical scaffolding to mature.
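To put that regulatory exposure in perspective, the 3%-of-turnover tier can be computed directly. The sketch below assumes, as a simplification, that run-rate revenue approximates worldwide annual turnover, and uses the whichever-is-higher floor commonly cited in summaries of the Act's penalty structure:

```python
def ai_act_exposure(turnover: float,
                    rate: float = 0.03,
                    floor: float = 15_000_000) -> float:
    """Fine is the higher of a fixed floor or a share of turnover."""
    return max(floor, rate * turnover)

# On a $30B run-rate (assuming, as a simplification, that run-rate
# approximates worldwide annual turnover), exposure is ~$0.9B.
print(f"${ai_act_exposure(30e9) / 1e9:.1f}B")
```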
What Regulators and Open-Source Advocates Must Do Next
The policy implications of Anthropic Mythos extend far beyond one company’s release strategy. They illuminate a structural shift in how frontier AI capability is being distributed — and by whom, and to whom.
For regulators, the Glasswing model raises questions that existing frameworks cannot answer. If a private company now possesses working zero-day exploits for virtually every major software system on earth — as Kelsey Piper pointedly observed — what obligations of disclosure and oversight apply? The fact that Anthropic is briefing CISA and the Center for AI Standards and Innovation is encouraging, but voluntary briefings are not governance. The EU’s AI Act and the US AI Action Plan both need explicit provisions covering what happens when a commercially controlled lab becomes the de facto custodian of the world’s most significant vulnerability database.
For open-source advocates, the distillation dynamic poses an existential dilemma. The same economic logic that drives labs to gate Mythos also drives them to resist open-weights releases of any model that approaches frontier capability. The three-lab alliance against Chinese distillation is, viewed from a certain angle, also an alliance against open-source proliferation of frontier capability — regardless of the nationality of the developer doing the distilling. Open-source foundations, university research labs, and sovereign AI initiatives in Europe, the Middle East, and South Asia should be pressing hard for access frameworks that allow defensive cybersecurity use of frontier capability without being filtered through the commercial relationships of Silicon Valley.
For enterprise decision-makers, the message is unambiguous: the organizations that embed Mythos-class capability into their vulnerability management workflows now will hold a structural security advantage — measured in patch latency and zero-day coverage — over those that wait for open-source equivalents. But that advantage comes with dependency on a single private entity whose political entanglements, from Pentagon disputes to Chinese state-actor confrontations, introduce supply-chain risks that no CISO should ignore.
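That patch-latency advantage is, in principle, a measurable quantity. A minimal sketch of the comparison, with every number invented purely for illustration:

```python
from statistics import mean

# Days from disclosure to deployed patch, per incident. All numbers
# are invented purely for illustration.
glasswing_latency = [2, 3, 1, 4, 2]       # org with Mythos-class tooling (assumed)
baseline_latency = [21, 35, 14, 28, 40]   # org waiting on open equivalents (assumed)

advantage = mean(baseline_latency) - mean(glasswing_latency)
print(f"mean exposure-window reduction: {advantage:.1f} days per vulnerability")
```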
Anthropic may well be protecting the internet. It is certainly protecting its empire. In 2026, those two imperatives have become so entangled that distinguishing them may be the most important work left for anyone who cares about who controls the infrastructure of the digital world.