Could AI’s Leading Men Become as Powerful as Ford or Rockefeller? For Now, They Are Still a Long Way Behind.
The five men reshaping intelligence — Dario Amodei, Demis Hassabis, Elon Musk, Mark Zuckerberg, and Sam Altman — command wealth, attention, and technological leverage that no previous generation of innovators has enjoyed. Yet the distance between their present dominance and the systemic, civilization-bending grip once exercised by John D. Rockefeller or Henry Ford remains vast — and poorly understood.
Imagine a boardroom meeting in 2035. The agenda is simple: who controls the infrastructure of thought itself? A decade earlier, five men launched what many called the most consequential technological disruption since electricity. By 2026, their companies had collectively captured trillions of dollars in market value, reshaped labor markets across three continents, and triggered geopolitical confrontations from Brussels to Beijing. And yet, if you measure their power by the standards history reserves for its true industrial titans — the men who didn’t just build industries but became them — the five AI leading men of our era still have a very long way to go.
That is not a comfortable argument to make. The numbers alone seem to render it absurd. Elon Musk’s net worth now exceeds $811 billion, a figure that surpasses the GDP of Poland. Musk’s February 2026 all-stock merger of SpaceX and xAI created a combined entity valued at $1.25 trillion — a single transaction larger than the entire U.S. defense budget. OpenAI, now valued at approximately $500 billion, counts some 800 million weekly active users of ChatGPT, a number that would have seemed science fiction five years ago. Anthropic — founded by Dario Amodei and his sister Daniela — reached a valuation of $380 billion in early 2026, while Meta has committed to spending $115 to $135 billion in capital expenditure in 2026 alone, with an astonishing $600 billion pledged toward data centers through 2028.
These are not ordinary fortunes. They are structurally new categories of wealth concentration. And still, the Rockefeller comparison fails — and fails instructively.
What Made a Tycoon a Tycoon: The Three Pillars of Historical Power
To understand why AI tycoons remain a long way behind their Gilded Age predecessors, one must first understand what actually made Rockefeller and Ford so uniquely dangerous to the social order of their time. It was not simply their wealth. Adjusted for GDP, Rockefeller’s peak fortune has been estimated at roughly $400 billion in today’s dollars — comfortably surpassed by Musk. What made Standard Oil a civilizational force was something more specific and more structural: the simultaneous control of physical infrastructure, political capture, and cultural monopoly.
Rockefeller didn’t just refine oil; he controlled approximately 91% of United States oil refining capacity by the mid-1880s through ownership of the pipelines, the railroad rebates, and the pricing mechanisms that every competitor had to use to survive. He didn’t lobby Congress — he owned the conversation. Ford, similarly, didn’t just manufacture cars; he built company towns, set wages for an entire economy, and deployed a private security apparatus — the Ford Service Department — to enforce his will on a captive workforce. Both men bent the physical world to their models in ways that left no exit for competitors, workers, or governments.
That is the three-pillar framework that the AI quintet has not yet replicated: physical infrastructure lock-in, political capture, and cultural monopoly. The gap between aspiration and achievement on each of these dimensions is the real story of power in 2026.
Infrastructure: Who Controls the Pipes?
The most important question in any era of technological transformation is not who builds the smartest machine, but who controls the plumbing. Rockefeller’s genius was not chemistry — it was logistics. He understood that the pipeline was more powerful than the refinery.
In the AI economy, the equivalent of the pipeline is the data center, the chip, and the undersea cable. Here the picture for the quintet is mixed at best. Mark Zuckerberg’s Meta is building on the most ambitious scale — two mega-clusters that dwarf any corporate construction project in a generation — but the silicon in those data centers is manufactured almost entirely by NVIDIA, a company none of the five control. Musk’s SpaceX-xAI merger is the most vertically integrated attempt to replicate Rockefeller’s pipeline logic: orbital data centers fed by Starlink satellites, in theory giving xAI the physical substrate to train and deploy models without dependence on third-party cloud providers. But as of 2026, that vision remains largely prospective. xAI’s Grok competes credibly against ChatGPT and Claude, but it does not yet possess the proprietary infrastructure advantage that would make it structurally inescapable.
Sam Altman, for his part, has no direct equity in OpenAI, earning a nominal salary of roughly $65,000 per year. His influence derives almost entirely from his position at the helm of the world’s most recognizable AI brand — a form of power that is real, but brittle. The moment a better or cheaper model displaces GPT, the institutional moat begins to crack. Rockefeller, by contrast, had no such vulnerability: he owned the pipes regardless of whose oil flowed through them.
Dario Amodei’s Anthropic presents a different case. With a $380 billion valuation, enterprise AI revenues reportedly growing at exponential rates, and a model — Claude — that has captured an estimated 40% of enterprise large language model spending in the United States, Anthropic is the most quietly formidable player in the quintet. Amodei has also demonstrated a rare form of institutional courage: in February 2026, he refused a Pentagon demand to remove contractual prohibitions on Claude’s use for mass domestic surveillance, even as the Trump administration labeled Anthropic a “supply-chain risk” and ordered agencies to stop using the model. That is not the behavior of a man who has captured the state. It is the behavior of a man trying not to be captured by it.
Political Power: Proximity Is Not Capture
The AI leading men have achieved unprecedented proximity to political power. Altman donated to Trump’s inaugural fund, sat on San Francisco’s mayoral transition team, and has testified repeatedly before Congress. Musk, as an architect of the Department of Government Efficiency, has arguably achieved more direct influence over federal bureaucracy than any private citizen since Bernard Baruch. Zuckerberg has reoriented Meta’s content moderation in ways that reflect political calculation as much as principled policy.
And yet proximity is not capture. Rockefeller’s Standard Oil didn’t merely lobby regulators — it effectively set the regulatory agenda in oil-producing states for two decades. The steel and railroad barons didn’t just meet with senators; they funded them in ways that made legislative independence a legal fiction.
Today’s AI executives remain subject to forces their predecessors never faced. The European Union’s AI Act imposes binding constraints that no 19th-century robber baron ever encountered. Antitrust scrutiny from both the Department of Justice and the EU threatens the integration strategies of both Google DeepMind and Meta. Anthropic’s standoff with the Pentagon demonstrates that even the most safety-focused AI lab cannot escape the gravitational pull of geopolitical competition. The five men are powerful political actors — but they are actors on a stage with many more directors than Rockefeller ever faced.
The Cognition Economy: A New Kind of Monopoly Risk
Where the AI quintet is converging toward something genuinely Rockefellerian is in what might be called the cognition economy — the emerging marketplace where intelligence itself, not oil or steel, is the resource being extracted, refined, and sold.
Demis Hassabis, the Nobel Prize–winning CEO of Google DeepMind, said at Davos 2026 that today’s AI systems are “nowhere near” human-level AGI, placing the milestone at “five to ten years” away. Amodei, characteristically more bullish, has predicted that AI will reach “Nobel-level” scientific research capability within two years, and has described the coming AI cluster as “a country of geniuses in a data center” running at superhuman speeds. If either is even partially correct, the downstream consequences for labor markets, knowledge production, and institutional power are more profound than anything the Industrial Revolution generated.
The danger is not that one of these five men will own the world’s intelligence outright. It is that the economic logic of AI — massive upfront compute costs, proprietary training data, and compounding capability advantages — tends toward the same concentration dynamics that produced Standard Oil. A model that is marginally better attracts more users; more users generate more data; more data enables further improvement; the loop closes. This is not metaphor. Meta’s Llama 5, released in April 2026, was explicitly designed to commoditize proprietary AI — Zuckerberg’s theory being that if intelligence becomes free, the company that distributes it through 3.5 billion social media users wins by default. That is not so different from Rockefeller’s insight that the real money was never in the oil itself, but in making yourself indispensable to everyone who wanted to transport it.
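The compounding dynamic described above can be made concrete with a deliberately stylized toy model. Every parameter below — the initial quality edge, the strength of user preference for quality, the data-to-improvement lift — is an illustrative assumption, not an empirical estimate:

```python
# A stylized toy model of the data flywheel: users flow toward the better
# model, and a larger user share buys a larger capability improvement next
# period. All parameters are invented for illustration.
def flywheel(quality_a=1.02, quality_b=1.00, periods=12, pull=2.0, lift=0.05):
    """Return lab A's final user share after `periods` rounds of the loop."""
    qa, qb = quality_a, quality_b
    for _ in range(periods):
        share_a = qa**pull / (qa**pull + qb**pull)  # users sort by quality
        qa *= 1 + lift * share_a                    # more users -> more data
        qb *= 1 + lift * (1 - share_a)              # -> faster improvement
    return share_a

# With equal quality the split stays even; with a small initial edge,
# the edge widens rather than mean-reverting.
print(f"Equal start: {flywheel(1.0, 1.0):.1%}")
print(f"2% quality edge after 12 periods: {flywheel():.1%}")
```

In this toy setting the gap grows monotonically: the leader's share this period feeds its quality next period, which is the closed loop the paragraph describes. Real markets add switching costs, pricing, and open-weight commoditization, which is exactly the lever Llama 5 pulls.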
Cultural Monopoly: The Unfinished Frontier
Henry Ford didn’t just build cars. He built a culture. The five-dollar day, the 40-hour workweek — Ford shaped how Americans understood the relationship between labor, leisure, and consumption. His prejudices, published in the Dearborn Independent and later praised by Adolf Hitler, exercised a cultural influence that no modern tech executive has approached, for better or for worse.
The AI quintet has, so far, produced nothing comparable to that kind of cultural ownership. ChatGPT is used by hundreds of millions, but it has not yet redefined the terms of civic life in the way that Ford’s assembly lines redefined time itself. The AI leading men give TED talks and publish essays — Amodei’s “Machines of Loving Grace” and its sequel “The Adolescence of Technology” are genuine intellectual contributions — but they have not yet built the durable cultural institutions that the Carnegies and Fords used to launder their economic power into social legitimacy. The Carnegie libraries are still standing. The Ford Foundation still funds democracy initiatives. What will Sam Altman’s equivalent be? We do not yet know.
This gap may close faster than we expect. If AI agents do begin displacing half of entry-level white-collar jobs — as Amodei and others predict within five years — the resulting social disruption will demand new cultural narratives. The men who shape those narratives will wield a form of power that makes their current wealth look like a down payment.
Why the Gap Matters — And Why It Is Narrowing
The distance between the AI tycoons of 2026 and the historical robber barons is real, but it is not permanent. Three trends are accelerating the convergence.
First, physical infrastructure is being built at unprecedented speed. Meta’s $600 billion data center pledge, Musk’s orbital computing vision, and the arms-race dynamics of semiconductor procurement are creating the structural lock-in that historically defines industrial monopoly. The company that owns the compute wins — not just the model race, but the infrastructure race.
Second, regulatory arbitrage is becoming a competitive strategy. Just as Rockefeller used the legal patchwork of late-19th-century interstate commerce to outmaneuver state-level regulators, AI companies are exploiting the gap between national regulatory frameworks to deploy capabilities that no single jurisdiction can constrain. The Trump administration’s rollback of Biden-era AI safety executive orders has already opened space for more aggressive deployment by American companies.
Third, the feedback loops of AI capability are compounding in ways that no previous technology has. When Anthropic’s own engineers have largely stopped writing code themselves — directing AI-generated code as product managers rather than authors — the productivity advantages of leading AI labs over their competitors begin to resemble Standard Oil’s pipeline advantages over independent refiners. Not yet identical. But structurally rhyming.
The View from 2035: A Question of Institutions
The most important distinction between Ford, Rockefeller, and today’s AI leading men may ultimately be institutional rather than technological. The Gilded Age tycoons operated in a world with weak antitrust frameworks, no administrative state to speak of, and a political economy that had not yet developed the tools to constrain concentrated private power. The Progressive Era — Teddy Roosevelt’s trust-busting, the Sherman Act, the eventual dissolution of Standard Oil — was the institutional response. It took a generation.
We may be at the beginning of a similar reckoning. Whether the five men who currently lead the AI revolution become as powerful as Ford or Rockefeller depends less on their own ambitions — which are extraordinary — than on the speed and coherence of the institutional response. Policymakers who wait for the infrastructure to be fully built before acting will find themselves in the same position as the regulators who confronted Standard Oil in 1911: arriving at the scene of a revolution already completed.
The AI leading men are not, today, as powerful as Rockefeller. But they are building the conditions under which someone very like them could be. That is the moment for executives, investors, and policymakers to pay attention — not when the resemblance is complete, but now, while the architecture is still under construction and the pipes have not yet been welded shut.
The Mythos Meeting: Anthropic’s Dangerous AI and the White House’s Calculated Gamble | 2026
The Amodei–Wiles meeting signals a seismic U.S. AI policy pivot. Why Washington is now courting the Anthropic Mythos model it once tried to destroy.
Imagine the scene: a Friday afternoon in the West Wing, the air carrying the particular weight of decisions that cannot be undecided. Dario Amodei, the quietly intense CEO of Anthropic, sits across from Susie Wiles, the White House Chief of Staff whose political instincts are said to be the closest thing to a gyroscope this administration possesses. Between them, unspoken but omnipresent, is a question that has convulsed Washington’s national-security establishment for weeks: what do you do with an AI so dangerous that even its creators are frightened of it—and so potent that refusing to use it might be the most reckless choice of all?
That meeting, confirmed by Axios, CNN, and the Associated Press, is not merely a diplomatic thaw between a tech company and its government tormentor. It is the moment Washington finally admitted what it has known all along: that frontier AI has outrun every framework, every regulation, and every posture of ideological hostility that American politics could muster. The implications—for U.S. national security, for the global AI arms race, and for the governance of technology at civilizational scale—are seismic.
What Mythos Is, and Why It Terrifies the People Paid to Worry
To understand the Dario Amodei–Susie Wiles meeting and its national security implications, you must first understand what Anthropic’s Claude Mythos Preview actually does. Launched on April 7, 2026, Mythos is not a chatbot upgrade. It is, in the judgment of the cybersecurity community, a watershed event—a model of such extraordinary capability in identifying software vulnerabilities that it reportedly discovered thousands of zero-day flaws across major operating systems and browsers before breakfast.
Anthropic’s co-founder and policy chief Jack Clark, speaking at the Semafor World Economy Conference this week, described Mythos as having capabilities that could pose “severe” fallout for public safety, national security, and the economy, the Washington Times reported. He was not speaking hyperbolically. He was warning. Clark added, in remarks carried by PBS, that Mythos is not a “special model”: “there will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities.”
This is the paradox that has split Washington clean in two. Mythos can map the defensive perimeter of any digital system with an acuity no human team could match. It can find the crack in the levee before the flood. But it can also — in theory, in the wrong hands, with the wrong prompts — hand an adversary the blueprint for that same attack. As CNN reported, the same tool that identifies cybersecurity threats can present a roadmap for hackers to attack companies or the government. One U.S. official, in a phrase that deserves to be carved somewhere permanent, told Axios: “They’re using this Mythos cyber weapon to find friendly ears in the government. They’re succeeding.”
Recognizing this dual-use reality, Anthropic did not release Mythos publicly. Instead, as Zero Hedge detailed, it launched Project Glasswing — a tightly controlled defensive program that grants limited access only to a vetted circle of partners: Amazon, Google, Microsoft, Apple, major banks including JPMorgan Chase, cybersecurity firms, and the Linux Foundation. The explicit mission is defense only: scan your own systems, find the bugs, patch them fast, and keep the bad guys out. Anthropic also pledged up to $100 million in usage credits and $4 million in donations to open-source security groups.
It is, by any reckoning, an extraordinary act of self-regulation from a private company. It is also the act that made the U.S. government desperate to get inside the tent.
The Meeting: What We Know, and What It Really Means
The meeting, first reported by Axios, comes after months of hot tensions between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize potential risks. It marks a breakthrough in Amodei’s effort to resolve the company’s bitter AI fight with the Pentagon.
The White House said the meeting was “introductory,” calling it “productive and constructive.” “We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology,” the White House said in a statement reported by CNN. “The conversation also explored the balance between advancing innovation and ensuring safety.”
The diplomatic language obscures the pressure beneath. Treasury Secretary Scott Bessent joined the meeting, a notable escalation of seniority. “This is a big problem. Everyone’s complaining. There’s all this drama. So this got elevated to Susie to hear Dario out, determine what is bullsh-t and start to plot a way forward,” a Trump adviser told Axios.
Those familiar with the negotiations describe what the White House is actually seeking: according to Axios, the next steps are expected to concern how government departments engage with Anthropic’s new Mythos Preview model. This is not abstract policy discussion. Some government agencies want access, and the White House and Anthropic are discussing the terms under which that might be possible. Two sources told Axios there are ongoing discussions, and agencies may get access to Mythos in the coming weeks.
What Amodei wants in return is equally clear. He has drawn two lines in the sand that have proved non-negotiable: no use of Claude for mass domestic surveillance, and no deployment in fully autonomous weapons systems. Amodei has noted that Anthropic proactively deployed its models to the Department of War and the intelligence community, and was the first frontier AI company to deploy models in the U.S. government’s classified networks and at the National Laboratories (Attack of the Fanboy). The Pentagon’s position — that it needs AI available for “all lawful purposes” without carve-outs — strikes many observers outside the building as, at minimum, an extraordinary demand to make of a private-sector partner.
From Pentagon Blacklist to White House Courtship: The Policy U-Turn
The speed of this reversal deserves its own chapter in any future history of American governance.
In late February, President Trump directed federal agencies to stop using Anthropic’s technology. In early March, CNN reported, the Defense Department formally designated Anthropic a supply-chain risk, effectively blocking its models from use on Pentagon contracts. The designation — previously reserved for companies with ties to foreign adversaries — was applied to a San Francisco AI safety company because it refused to remove ethical guardrails. A federal judge in California, granting Anthropic a preliminary injunction, wrote that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
Yet even as that legal fight raged, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned executives from JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley and urged them to use Anthropic’s new Mythos model to detect cybersecurity vulnerabilities in their systems, The Next Web reported. The left hand of government was blacklisting what the right hand was urgently deploying.
Key officials in the Trump administration see Anthropic and its leaders as woke doomsters, and some relished slapping on the “supply chain risk” designation. But some of those same officials, and many others, also see Anthropic’s tools as best-in-class when it comes to AI for national security purposes. One Defense official told Axios at the height of the Pentagon-Anthropic feud that the only reason the talks were ongoing was: “these guys are that good.”
This is the grotesque comedy—and the cold logic—of American AI policy in 2026. Ideological hostility colliding with operational necessity. The government cannot afford the luxury of its own grievance.
Geopolitical Stakes: China, Europe, and the New AI Arms Race
The Amodei–Wiles meeting cannot be understood outside its broader geopolitical frame. Jack Clark’s comment at Semafor was not idle — it was a countdown. A source close to the negotiations told Axios: “It would be grossly irresponsible for the U.S. government to deprive itself of the technological leaps that the new model presents. It would be a gift to China.”
China’s AI labs—DeepSeek, Zhipu, Baidu’s ERNIE—are advancing at a pace that was unimaginable eighteen months ago. The release of DeepSeek’s R1 model in early 2025 rattled markets and shattered the comfortable assumption that America’s compute advantage translated automatically into a capability lead. Beijing’s military-civil fusion doctrine means that any advance in Chinese commercial AI carries direct implications for the People’s Liberation Army. Anthropic has passed up several hundred million dollars to cut off use of Claude by firms linked to the Chinese Communist Party and shut down CCP-sponsored cyberattacks that attempted to abuse the system. Attack of the Fanboy
Europe, for its part, is watching from a peculiar position: deeply invested in AI safety regulation through the EU AI Act, yet without a frontier model lab of its own capable of matching Anthropic, OpenAI, or Google DeepMind. The UK’s NCSC and regulators are scrambling to assess Mythos’s risk profile. The asymmetry is uncomfortable: American and Chinese labs are racing to build and deploy the most powerful AI systems the world has seen, while Europe writes governance frameworks for systems that are already obsolete by the time the ink dries.
In this context, the U.S. government’s approach to Anthropic’s Mythos Preview and cybersecurity defense is not merely domestic policy. It is a strategic posture in a new kind of arms race—one where the weapons are invisible, the battlefield is software infrastructure, and the most dangerous adversary may be inaction itself.
The Opinion: Washington Must Choose
Let me say plainly what the diplomatic language of this week’s meetings cannot: the United States government does not have a coherent AI strategy. It has a collection of competing institutional impulses—the Pentagon’s maximalism, the intelligence community’s pragmatism, the Treasury’s alarm about financial infrastructure, and the White House’s moment-to-moment political management—loosely tethered by the fiction of a unified executive branch.
The Anthropic Mythos White House access negotiations expose this incoherence in full. A company is simultaneously being sued by one arm of the government and being courted by three others. The same model is being called a national-security threat and a national-security imperative, often by people in the same building. This is not policy. It is cognitive dissonance with a budget.
What Washington must do—and what this meeting, however “introductory,” at least gestures toward—is make a choice. Either frontier AI labs like Anthropic are strategic national assets to be cultivated under a framework of responsible access and negotiated guardrails, or they are private entities whose autonomy makes them inherently adversarial to state power. You cannot hold both positions at once, regardless of how many executive orders you issue.
The Anthropic model—safety-conscious development, controlled deployment through Project Glasswing, categorical refusal of certain military applications—is not naïveté. It is a serious attempt to thread a needle that governments have proven incapable of threading themselves. The Pentagon’s insistence on unrestricted access is not hardheadedness. It is institutional anxiety dressed as operational necessity. Between these poles, there is a deal to be made. But making it requires the kind of institutional self-honesty that bureaucracies resist until the cost of denial becomes catastrophic.
The cost is visible. Civilian agencies like the Departments of Energy and Treasury are responsible for safeguarding critical sectors like the electric grid and financial system, as Axios notes. Those systems are being probed, daily, by adversaries who will not wait for Washington to resolve its internal politics. Every week the impasse continues is a week the electric grid goes unscanned, the financial system goes unpatched, and the advantage shifts.
What Comes Next: For Regulators, Enterprises, and Citizens
The practical near-term architecture of whatever deal emerges from the Mythos negotiations is beginning to take shape. According to Zero Hedge, an internal Office of Management and Budget memo lays out strict protocols for safe access, data handling, and usage limits so that major departments can deploy Mythos against their own sprawling digital estates. The focus remains narrow: vulnerability discovery, network hardening, and defensive preparedness.
For enterprises, the implications of Anthropic’s Mythos model for cybersecurity defense extend well beyond Washington. If Project Glasswing’s 40-plus organizations can use Mythos to discover and patch vulnerabilities faster than adversaries can exploit them, the model for critical infrastructure protection changes fundamentally. Security becomes proactive rather than reactive. The question is whether the access framework can scale—and whether Anthropic can maintain meaningful guardrails as it does.
A real compromise, as Prism News has framed it, would likely mean granting Anthropic broader federal access for cybersecurity and software testing while preserving the safety commitments the company says define the product. For Washington, the tradeoff is stark: use a powerful model to harden government systems, or pressure the company to weaken the very restraints that make its technology acceptable in the first place.
For citizens, this matters in ways that extend far beyond any individual’s awareness of AI policy. The security of the national power grid, the integrity of the financial system, the resilience of government networks—these are not abstract concerns. They are the infrastructure on which daily life depends. The Mythos Preview is not, in the end, a tech industry story. It is a story about who gets to decide how the most powerful tools in human history are deployed, and under what terms.
The Kicker: The Future Is Already in the Room
Here is what the optimists and the catastrophists both miss: the most important fact about this moment is not that Anthropic’s Mythos model exists, nor that the White House is courting it, nor even that China is close behind. The most important fact is that every frontier model released from here forward will carry something like Mythos’s capabilities. The Pandora’s box is already open. The question is not whether to touch what’s inside. The question is whether to pick it up with gloves on—or with bare hands.
The Amodei-Wiles meeting, whatever its immediate outcome, represents the first serious acknowledgment by the American executive branch that the era of AI as an abstract policy problem is over. The technology is here, it is geopolitically consequential, and it will not wait for regulatory consensus. Washington can lead this transition with deliberate guardrails and structured public-private partnership, or it can continue managing it through institutional contradiction and inter-agency feuding until an adversary—human or algorithmic—exploits the gap.
The Friday meeting in the West Wing was quiet. But the decisions made in its aftermath will be anything but.
Wall Street Is Betting Against Private Credit — and That Should Worry Everyone
When the architects of the private credit boom begin selling instruments that profit from its distress, the market has entered a new and more dangerous phase.
There is an old rule of thumb in credit markets: the moment the banks that helped build a structure start quietly pricing in its failure, it is time to pay very close attention. That moment arrived on April 13, 2026, when the S&P CDX Financials Index — ticker FINDX — began trading, giving Wall Street its first standardised credit-default swap benchmark explicitly linked to the private credit market. JPMorgan Chase, Bank of America, Barclays, Deutsche Bank, Goldman Sachs, and Morgan Stanley are all distributing the product. These are not peripheral players hedging tail risks. These are the same institutions that have spent a decade co-investing in, lending to, and marketing the very asset class they now offer clients a streamlined mechanism to short.
That is the headline. The deeper story is more unsettling.
The Product Nobody Was Supposed to Need
Credit-default swaps are, at their most basic, financial insurance contracts — the buyer pays a premium; the seller compensates the buyer if a specified borrower defaults. They became infamous in 2008, when an entire shadow banking system imploded partly because CDS had been written so liberally, by parties with no direct exposure to the underlying risk, that protection was illusory rather than real. What is remarkable about the CDX Financials launch is not the instrument itself but what its very existence confesses: private credit has grown so large, so interconnected, and now so stressed that the market has concluded it needs — finally — a public, liquid, standardised mechanism to hedge against its unravelling.
According to S&P Dow Jones Indices, the new FINDX comprises 25 North American financial entities, including banks, insurers, real estate investment trusts, and business development companies (BDCs). Approximately 12% of the equally weighted index is tied to private credit fund managers — specifically Apollo Global Management, Ares Management, and Blackstone. The index rises in value as credit sentiment toward its constituent entities deteriorates. In practical terms: buy protection on FINDX, and you profit when the private credit ecosystem comes under pressure.
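The index arithmetic is simple to verify, and the economics of buying protection reduce to a spread bet. The sketch below works through both; the notional, spread levels, and risky duration are invented for illustration, and the mark-to-market formula is the standard first-order CDS approximation, not actual FINDX contract terms:

```python
# Equal-weight arithmetic for a 25-name index: each constituent is 1/25 = 4%,
# so three private credit managers account for 12% of the index.
N_CONSTITUENTS = 25
weight = 1 / N_CONSTITUENTS
private_credit_names = ["Apollo Global Management", "Ares Management", "Blackstone"]
private_credit_share = weight * len(private_credit_names)
print(f"Per-name weight: {weight:.0%}, private credit share: {private_credit_share:.0%}")

# A protection buyer pays a running premium and gains when spreads widen.
# First-order mark-to-market: (spread change) x (risky duration) x (notional).
# All three inputs below are hypothetical.
notional = 10_000_000                    # $10m of protection
old_spread, new_spread = 0.0150, 0.0250  # spreads widen 150bp -> 250bp
risky_duration = 4.5                     # assumed risky duration in years
mtm_gain = (new_spread - old_spread) * risky_duration * notional
print(f"Approx. mark-to-market gain to the protection buyer: ${mtm_gain:,.0f}")
```

Under these assumed numbers, a 100-basis-point widening in index spreads hands the protection buyer roughly $450,000 per $10 million of notional, which is the mechanical sense in which FINDX "rises in value as credit sentiment deteriorates."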
Nicholas Godec, head of fixed income tradables and commodities at S&P Dow Jones Indices, described the launch as “the first instance of CDS linked to BDCs, thereby providing CDS linked to the private credit market.” That phrasing — careful, bureaucratic, almost bloodless — belies the signal embedded in the timing.
The Numbers Behind the Anxiety
To understand why this product exists, you need to understand the scale and velocity of the stress currently moving through private credit. The numbers, as of Q1 2026, are striking.
The Financial Times reported that U.S. private credit fund investors submitted a total of $20.8 billion in redemption requests in the first quarter alone — roughly 7% of the approximately $300 billion in assets held by the relevant non-traded BDC vehicles. This is not a trickle. Carlyle’s flagship Tactical Private Credit Fund (CTAC) received redemption requests equivalent to 15.7% of its assets in Q1, more than three times its 5% quarterly limit. Carlyle, like many of its peers, honoured only the cap and deferred the rest. Blue Owl’s Credit Income Corp saw shareholders request withdrawals equivalent to 21.9% of its shares in the three months to March 31 — an extraordinary figure that prompted Moody’s to revise its outlook on the fund from stable to negative. Blue Owl, Blackstone, KKR, Apollo, and Ares have all faced redemption queues this cycle.
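The gate math deserves a moment's attention, because it is what creates the first-mover advantage discussed later in this piece. Assuming the simplest pro-rata mechanism (actual mechanics vary by prospectus), a fund honours requests up to its quarterly cap and defers the rest:

```python
# Sketch of quarterly redemption gating, assuming a simple pro-rata
# fill at the cap. Actual fund mechanics vary by prospectus.

def gate(requested_pct: float, cap_pct: float) -> tuple[float, float, float]:
    """Return (honoured % of NAV, deferred % of NAV, fill ratio
    each redeeming investor receives on their request)."""
    honoured = min(requested_pct, cap_pct)
    deferred = requested_pct - honoured
    fill_ratio = honoured / requested_pct if requested_pct else 1.0
    return honoured, deferred, fill_ratio

# Carlyle's CTAC in Q1 2026: requests of 15.7% against a 5% quarterly cap.
honoured, deferred, fill = gate(15.7, 5.0)
print(f"Honoured {honoured}% of NAV, deferred {deferred:.1f}%; "
      f"each investor receives roughly {fill:.0%} of their request.")
```

At Carlyle's Q1 numbers, each redeeming investor received less than a third of what they asked for, and the deferred 10.7 percent rolls forward into next quarter's queue, where it competes with fresh requests against the same 5 percent cap. That queue dynamic is precisely what rewards whoever asks first.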
Moody’s has since downgraded its outlook on the entire U.S. BDC sector from “stable” to “negative” — a formal acknowledgement that what was once a bull-market darling is now contending with structural liquidity stresses that its semi-liquid product architecture was never fully designed to survive.
Meanwhile, the credit quality of the underlying loans is deteriorating in ways that the sector’s historical marketing materials simply did not anticipate. UBS strategists have projected that private credit default rates could rise by as much as 3 percentage points in 2026, far outpacing the expected 1-percentage-point rise in leveraged loans and high-yield bonds. Morgan Stanley has warned that direct lending default rates could surge as high as 8%, compared with a historical average of 2–2.5%. Payment-in-kind loans — where borrowers pay interest in additional debt rather than cash — are rising, a classic signal of borrowers under duress who are conserving liquidity at the expense of lender economics.
Perhaps most damning: in late 2025, BlackRock’s TCP Capital Corp reported that writedowns on certain portfolio loans reduced its net asset value by 19% in a single quarter.
The AI Dislocation: A Crisis Within the Crisis
No serious analysis of this stress cycle can ignore the role of artificial intelligence in accelerating it. Roughly 20% of BDC portfolio exposure, according to Jefferies research, is concentrated in software businesses — predominantly SaaS companies that private credit firms financed at generous valuations during the zero-interest-rate boom years. The rapid advance of AI tools capable of automating software workflows has sparked a brutal re-evaluation of those companies’ competitive moats, revenue durability, and, ultimately, their debt-service capacity.
Blue Owl, one of the largest direct lenders to the tech-software sector, has faced redemption requests that are — in the words of its own investor communications — reflective of “heightened negative sentiment towards direct lending” driven in part by AI-sector uncertainty. The irony is profound: private credit funds that rushed to finance the digital economy are now discovering that the same technological disruption they helped capitalise is undermining the creditworthiness of their borrowers.
This is not a transient sentiment shock. According to Man Group’s private credit team, private credit loans are originated with the “express purpose of being held to maturity.” That structural illiquidity — the attribute that was once marketed as a yield premium — is now the attribute that makes the sector’s stress harder to contain. When your borrowers are software companies facing existential competitive threats and your investors are retail wealth clients who were sold on liquidity promises, the collision produces exactly what we are now observing: gating, deferred redemptions, and a derivatives market emerging to price what the underlying funds cannot.
What Wall Street Is Really Saying
The CDX Financials launch is not merely a new product. It is a confession.
When the Wall Street Journal first reported the index’s development, analysts initially framed it as a neutral hedging tool — a risk management mechanism that sophisticated market participants had long wanted access to. And in the narrow technical sense, that framing is accurate. Hedge funds with concentrated exposure to BDC equity positions, pension funds with indirect private credit allocations, and banks with syndicated loan books have legitimate demand for an instrument that allows them to offset their exposure.
But consider the posture this represents. JPMorgan, Goldman Sachs, Morgan Stanley, and Barclays built, distributed, and marketed private credit products to institutional and retail clients throughout the 2015–2024 expansion. They collected billions in fees doing so. They celebrated the asset class’s growth — the private credit market has expanded to more than $3 trillion in AUM — as evidence of financial innovation serving real-economy borrowers who couldn’t access public markets. Those same institutions have now co-created a benchmark instrument whose primary utility is to profit, or hedge risk, when that market contracts.
This is not cynicism — it is rational risk management. But it is also a market signal of extraordinary clarity: the largest, best-informed participants in global credit markets have concluded that the probability-weighted downside in private credit is now large enough to justify the cost and complexity of derivative infrastructure. You do not build a CDX index for a market in good health.
Regulatory Fault Lines and the Retail Investor Problem
Perhaps the most underappreciated dimension of this crisis is distributional. Private credit’s expansion over the last decade was partly funded by a deliberate push by asset managers into the wealth management channel — retail and high-net-worth investors who were attracted by the yield premium over public credit and the low apparent volatility of funds that mark their assets infrequently and to model rather than to market.
That low apparent volatility, as analysts at Robert A. Stanger & Co. have pointed out, was partly a function of the valuation methodology rather than the underlying risk. BDCs in the non-listed space can appear stable in their net asset values right up until the moment they are not — and the quarterly redemption gates now being enforced create a first-mover advantage for those who recognise the stress earliest. Institutional investors — the “small but wealthy group” who have been demanding exits — have done exactly that. Retail investors, who typically receive quarterly statements and rely on fund managers’ own assessments of value, are disproportionately likely to be last out.
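The smoothing effect is easy to demonstrate. The toy example below applies appraisal-style smoothing, in which each reported return blends the current mark with the prior reported figure, to a synthetic return series; the figures are invented, and the point is the mechanism, not the magnitudes.

```python
# Toy illustration of how infrequent, model-based marks suppress
# apparent volatility. The return series is synthetic and hypothetical.
import statistics

# A choppy "true" monthly return series, in percent:
true_returns = [1.2, -2.5, 0.8, 3.1, -1.9, 0.4, -3.2, 2.6, -0.7, 1.5]

def smooth(returns: list[float], alpha: float = 0.3) -> list[float]:
    """Appraisal-style smoothing: each reported return is a blend of
    the current mark and the previously reported figure."""
    out, prev = [], 0.0
    for r in returns:
        prev = alpha * r + (1 - alpha) * prev
        out.append(prev)
    return out

reported = smooth(true_returns)
print(f"True volatility:     {statistics.pstdev(true_returns):.2f}%")
print(f"Reported volatility: {statistics.pstdev(reported):.2f}%")
# The reported series looks far calmer than the economic reality it tracks.
```

A fund marked this way reports a fraction of the volatility its portfolio actually experiences, which is exactly why non-traded BDCs could be marketed as low-volatility products right up until the redemption queues formed.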
The Securities and Exchange Commission has been examining BDC valuation practices and the structural question of whether semi-liquid products are appropriately matched to the liquidity expectations of retail investors. The CDX Financials launch materially increases the regulatory pressure surface. It is considerably harder to argue that private credit is a stable, low-volatility asset class suitable for retail distribution when the major banks are simultaneously selling derivatives that facilitate bearish bets on its constituent managers.
The regulatory trajectory points toward tighter disclosure requirements on BDC valuation methodologies, stricter rules on redemption queue transparency, and potentially new suitability standards for the sale of semi-liquid alternatives to retail investors. None of these changes will arrive in time to protect those already queuing to exit.
The European and EM Dimension
The stress in U.S. private credit has a global undertow that commentary focused on Wall Street mechanics tends to underweight. European direct lenders — many of them subsidiaries or affiliates of the same U.S. managers now under pressure — have similarly expanded into software, healthcare services, and leveraged buyout financing across France, Germany, the Nordics, and the UK. The Bank for International Settlements has flagged the opacity and rapid growth of private credit in advanced economies as a potential systemic risk vector, precisely because the infrequent and model-dependent valuation of these assets makes cross-border contagion difficult to detect in real time.
Emerging market economies face a different but related challenge. Domestic sovereign and corporate borrowers who were priced out of traditional bank lending and public bond markets during periods of dollar strength and risk-off sentiment turned to private credit as an alternative source of capital. As U.S. private credit funds come under redemption pressure and face potential portfolio de-risking, the marginal withdrawal of credit availability to EM borrowers represents a secondary shock that will not appear in U.S. financial statistics but will very much appear in the economic data of the borrowing countries.
The CDX Financials, for now, is a North American product focused on North American entities. But if the private credit stress deepens, the transmission mechanism to European and EM markets will operate through the same channel it always does: abrupt, disorderly credit withdrawal by institutions that had presented themselves to borrowers as patient, relationship-oriented capital.
The 2026–2027 Outlook: Three Scenarios
Scenario one: Controlled decompression. The redemption pressure peaks in mid-2026 as Q1 earnings are digested, valuations are reset modestly, and AI sector concerns stabilise. The CDX Financials remains a niche hedging tool with modest trading volumes. Default rates rise but remain below 5%. Fund managers gradually improve their liquidity management frameworks, and the episode is remembered as a stress test that the sector passed — awkwardly, but passed.
Scenario two: Structural repricing. Default rates reach the 6–8% range forecast by Morgan Stanley. Fund managers are forced to sell assets to meet redemptions, creating mark-to-market pressure that triggers further investor withdrawals — a slow-motion version of the bank run dynamic. The CDX Financials becomes a liquid, actively traded instrument as hedge funds build short theses against specific managers. The SEC intervenes with new rules. The retail wealth channel for private credit permanently contracts, and the asset class re-professionalises toward institutional-only distribution.
Scenario three: Systemic cascade. A rapid confluence of AI-driven borrower defaults, leveraged BDC balance sheets, and sudden insurance company mark-to-market requirements — recall that insurers have become significant private credit allocators — creates a feedback loop that overwhelms the quarterly gate mechanisms. This scenario remains tail-risk rather than base case, but it is materially more probable today than it was eighteen months ago, and the CDX Financials market, whatever its current illiquidity, provides the mechanism through which this scenario’s probability will be priced in real time.
The Signal in the Noise
There is a temptation, in moments like this, to reach for the 2008 parallel — the credit-default swaps written on mortgage-backed securities, the opacity, the interconnection, the eventual reckoning. That parallel is not fully appropriate. Private credit, for all its stress, is not leveraged to the degree that pre-crisis structured finance was, and the counterparties on the other side of these loans are corporate borrowers rather than millions of individual homeowners facing income shocks. The system is not on the edge of a cliff.
But the more honest framing is this: private credit grew from approximately $500 billion to more than $3 trillion in a decade, fuelled by zero interest rates, a regulatory environment that pushed lending off bank balance sheets, and an institutional appetite for yield that sometimes outpaced rigour. It attracted retail investors on the promise of bond-like returns with equity-like stability. It financed technology businesses at valuations that assumed a competitive landscape that artificial intelligence is now radically disrupting. And it did all of this in a structure — the non-traded BDC, the evergreen fund — that made liquidity appear more plentiful than it was.
The CDX Financials is what happens when the market runs the numbers on all of that and concludes it wants an exit option. For investors still inside these funds, that signal deserves very careful attention.
Conclusion: What Sophisticated Investors Should Do Now
The launch of private credit derivatives is not, by itself, a crisis. It is a maturation — the belated arrival of price discovery infrastructure into a corner of credit markets that had, until now, avoided the bracing discipline of public market scrutiny. In that sense, the CDX Financials is a healthy development. Transparency, even painful transparency, is preferable to opacity.
But for investors with allocations to non-traded BDCs, evergreen private credit funds, or insurance products with significant private credit exposure, several questions now demand answers that fund managers may be reluctant to provide. What is the true liquidity profile of the underlying loan portfolio? What percentage of the portfolio is in payment-in-kind status? How much of the nominal NAV reflects model-based valuations that have not been stress-tested against the current AI-driven sector disruption? And — most importantly — what is the fund’s plan if redemption requests in Q2 and Q3 2026 do not moderate?
The banks selling CDX Financials protection have already decided how to answer those questions for their own books. Investors would do well to ask the same questions of their own.
Discover more from The Economy
Subscribe to get the latest posts sent to your email.
Agency in the Age of AI: Why Human Initiative — Not Artificial Agents — Will Define the Next Decade
On February 15, 2026, Sam Altman posted two sentences to X that encapsulated a decade of Silicon Valley ambition in a single breath. OpenAI had acquired OpenClaw, an open-source AI agent framework that could autonomously browse, code, and execute complex multi-step tasks — and its creator, Peter Steinberger, was joining the company to “bring agents to everyone.” The deal was quiet by tech-acquisition standards. No press conference. No billion-dollar number dropped to gasps at a conference. Just a pair of tweets that, read carefully, amount to a civilizational declaration: the age of artificial agents — AI systems that act on your behalf, that do rather than merely say — has arrived.
The question no one in those tweets was asking is the one that ought to keep us up at night. Not what will AI agents do for us? But what will they do to us?
Agency in the age of AI is not, at its core, a technology question. It is a human one. And across law firms, accounting houses, actuarial desks, and the laptops of twenty-four-year-olds trying to build careers in knowledge work, the contours of that question are becoming impossible to ignore.
The Rise of Autonomous Agents — And the Hidden Cost to Human Agency
“Agentic AI” is the industry’s term of the moment, and it deserves a plain-language translation: these are AI systems that do not merely answer questions but complete tasks — booking travel, filing documents, auditing spreadsheets, drafting briefs, managing inboxes — with minimal human instruction and, in many configurations, minimal human oversight. OpenAI’s Frontier platform, launched in February 2026 and described as a home for “AI coworkers,” gives enterprises AI systems with shared context, persistent memory, and permissions to act inside live business workflows.
The promise is intoxicating. The average knowledge worker, Silicon Valley’s pitch goes, will soon command a small army of autonomous agents the way a senior partner commands junior associates. Scale your output. Compress your timelines. Democratize expertise.
What this narrative conspicuously omits is what happens to the junior associates.
The hidden cost of autonomous agents is not primarily economic, though the economic costs are real and arriving faster than most forecasts anticipated. It is something harder to quantify and easier to dismiss: the erosion of the conditions under which human agency develops, deepens, and compounds over a life. The young lawyer who never drafts her first clumsy brief. The accountant who never wrestles with his first gnarly audit. The actuary who never builds intuition through the friction of getting it wrong. Agency — the capacity to act, judge, and take meaningful initiative in the world — is not innate. It is cultivated. And the cultivation requires doing the hard, error-prone, occasionally humiliating work that AI agents are now absorbing at scale.
This is not a Luddite argument. It is a developmental one. And it is urgent.
Why Lawyers, Accountants, and Actuaries Are Questioning Their Futures
The conversation has broken into the open in the corridors of professional services with a candor that would have been unthinkable three years ago. Senior partners at major law firms will tell you, off the record, that they have paused or sharply curtailed junior associate hiring. The work that used to season young talent — contract review, discovery, due diligence — is being absorbed by AI agents with an efficiency that makes the economics of junior staffing almost impossible to justify.
The data corroborates what the corridors are whispering. Goldman Sachs Research reported in April 2026 that AI is erasing roughly 16,000 net U.S. jobs per month — approximately 25,000 displaced by AI substitution against 9,000 new positions created by AI augmentation. The occupations most exposed to substitution, Goldman’s economists found, include accountants and auditors, legal and administrative assistants, credit analysts, and telemarketers: precisely the entry-level and mid-career roles that have historically served as the scaffolding of professional development.
The generational impact is particularly sharp. Goldman Sachs found that unemployment among 20- to 30-year-olds in AI-exposed occupations has risen by nearly three percentage points since the start of 2025 — significantly higher than for older workers in the same fields. Entry-level hiring at the top fifteen technology companies fell 25 percent between 2023 and 2024, and continued declining through 2025. The AI-related share of layoffs discussed on S&P 500 earnings calls grew to just above 15 percent by late 2025, up sharply from the year prior.
The career advice for young professionals navigating the AI age in 2026 used to be: develop technical skills, stay adaptable, embrace tools. That advice, while still valid, has become insufficient. What young professionals now face is a more fundamental disruption: the removal of the proving grounds where professional judgment is forged. You cannot develop the discernment of a seasoned litigator if the briefs are always already written. You cannot build the instincts of a skilled auditor if the anomalies are always already flagged.
The global picture adds further texture. In Southeast Asia, AI agents replacing jobs in BPO (business process outsourcing) — a sector employing millions across the Philippines, India, and Vietnam — are compressing opportunities for a generation that had, through those very jobs, entered the formal economy and begun building transferable skills. In sub-Saharan Africa, where formal professional employment is expanding and could absorb more talent, the risk is that AI-agent adoption by multinationals short-circuits the very job categories through which that transition happens. The AI agents replacing lawyers, accountants, and junior professionals in New York and London do not stay politely within American and European borders.
Pew’s 2025–2026 Data: Americans Demand More Control Over AI
The public has registered its discomfort — clearly, consistently, and in terms that policymakers should find impossible to dismiss.
Pew Research Center’s June 2025 survey of 5,023 U.S. adults found that 50 percent say the increased use of AI in daily life makes them feel more concerned than excited — up from 37 percent in 2021. More than half of respondents (57 percent) rated the societal risks of AI as high, against just 25 percent who say the benefits are similarly high. Majorities reported pessimism about AI’s impact on human creativity (53 percent say it will worsen people’s ability to think creatively) and meaningful relationships (50 percent say it will worsen our capacity to form them).
These are not the views of technophobes. They are the views of citizens watching something happen to their world and struggling to articulate, against the momentum of trillion-dollar valuations and breathless press coverage, what exactly it is they are losing.
The Pew data on control is the most politically significant finding of recent years. Fifty-five percent of U.S. adults say they want more control over how AI is used in their own lives. Among AI experts themselves — people who have built careers in the field — the figure is 57 percent. The demand for human agency in the AI era is not a fringe sentiment or a technophobic reflex. It crosses partisan lines, educational levels, and even the expert-layperson divide. What is remarkable is how little the policy architecture of any major government has responded to it.
In Europe, the EU AI Act has established a framework, but its enforcement mechanisms remain nascent and its treatment of agentic systems is notably underdeveloped for a technology moving at this pace. In the United States, the legislative response has been fragmented, hampered by a political environment in which AI has become entangled with culture-war dynamics that obscure rather than illuminate the actual governance questions. In China, regulatory assertiveness on AI coexists with state-directed deployment that raises its own agency concerns — for the individual citizen, not the system.
The gap between what people want — more control, more say, more human agency in the AI era — and what institutions are delivering is widening. It is into this gap that the next generation of social innovators, philanthropists, and policymakers must step.
Philanthropy’s Critical Role in Shaping AI Guardrails and Opportunity
Here is where the story gets interesting — and where institutional funders, foundations, and philanthropic capital have a genuinely historic role to play that they have, with a handful of exceptions, yet to fully embrace.
The governance of AI — particularly of agentic AI systems acting autonomously in high-stakes domains — cannot be left to the companies building it, to legislators who struggle to define a “large language model” without staff assistance, or to the uncoordinated preferences of individual consumers. The OECD and the World Economic Forum have outlined frameworks, but frameworks without funding are architectural drawings without builders.
Philanthropy AI governance has become one of the most consequential and underfunded intersections in public life. The MacArthur Foundation, Ford Foundation, and a handful of tech-originated donors (Omidyar Network, Schmidt Futures) have begun investing in responsible AI research and policy. But the scale of investment remains dramatically misaligned with the scale of the disruption underway. According to the Brookings Institution, the communities most exposed to AI displacement — lower-income workers, first-generation professionals, workers in routine cognitive roles — are precisely those with the least access to reskilling resources, legal literacy about their rights, and political power to shape the governance conversation.
Philanthropic capital can address this at multiple levels. First, funding public dialogue: creating the forums, commissions, and civic processes through which communities can articulate what they want from AI and what they will not accept — the kind of deliberative democracy that corporate AI development timelines do not organically produce. Second, building ethical guardrails: supporting independent technical audits of AI agent systems, especially those deployed in high-stakes contexts like hiring, credit, legal aid, and healthcare. Third, investing aggressively in reskilling: not the corporate upskilling programs that optimize for the needs of existing employers, but the genuinely human-centered education investments that give people the capacity to navigate a changed economy on their own terms. Fourth, and most visibly, creating opportunity for young people — the generation that stands to be most directly affected by the removal of the proving grounds of professional learning.
The philanthropic AI governance opportunity is not about slowing innovation. It is about ensuring that the benefits of innovation are not captured exclusively by those who already own the infrastructure, while the costs — in disrupted careers, eroded agency, and stunted development — are borne by everyone else.
Reclaiming Agency: What Young People, Leaders, and Funders Must Do Now
The future of human agency in the AI era will not be decided in Palo Alto. It will be decided in classrooms, in courtrooms, in legislative chambers, in the boardrooms of foundations, and in the daily choices of individuals about which tasks they hand to machines and which they insist on doing themselves — not because machines cannot do them, but because the doing is the point.
For young professionals — the generation navigating career advice in the AI age of 2026 — the imperative is not to compete with AI agents on their own terms. That is a race designed for machines. The imperative is to cultivate what agents cannot: moral judgment, relational intelligence, contextual wisdom, creative vision, the capacity to care about what you’re doing and why. These are not soft skills. They are the hardest skills. They compound over a lifetime in ways that no model weight or token count does. Protect your learning curve fiercely. Seek out the friction that develops judgment. Resist the temptation to outsource your thinking to systems that are, however impressive, fundamentally indifferent to your growth.
For leaders — in business, government, education, and civil society — the reclamation of agency requires building institutions that are honest about trade-offs. Does AI erode human agency? In its current deployment trajectory: yes, in specific and important ways. The right response is not panic, and it is not denial. It is design. Invest in human-AI collaboration frameworks that genuinely keep humans in the loop, not as a compliance formality but as a developmental reality. Design apprenticeship and mentorship structures that survive the automation of the tasks around which they were traditionally built. Insist on AI impact assessments before deploying agentic systems in professional and educational contexts. Make the question of human development central to every AI deployment decision, not an afterthought.
For funders: this is the decade. The governance architecture being built — or not built — around agentic AI will shape the relationship between human agency and technological systems for a generation. The window for influence is not permanently open. Foundations that move early, with real capital and genuine intellectual seriousness, can help write the rules. Foundations that wait will be left funding the repair.
The global dimension matters here, too. The most consequential AI governance battles of the next decade may not be fought in Washington or Brussels, but in the Global South — in countries where the intersection of demographic youth, expanding educational access, and AI-driven disruption of professional labor markets creates conditions for either extraordinary opportunity or extraordinary waste of human potential. Philanthropic AI governance that ignores Lagos, Jakarta, and São Paulo is not global governance. It is just wealthy-country governance wearing a global mask.
The story Silicon Valley is telling about the age of AI is seductive and, in many of its details, accurate. Autonomous agents will transform professional life. Productivity will rise. Some categories of work will disappear and others will emerge. The arc, the industry insists, bends toward abundance.
What the story omits is the quality of the lives lived along that arc. The lawyer who never argued. The accountant who never judged. The twenty-three-year-old who handed her first decade of professional development to a system that learned everything and taught her nothing.
Agency in the age of AI is not a footnote to the productivity story. It is the story that matters most.
Two tweets launched the age of agentic AI. What we do next — in philanthropy, in policy, in education, in the daily texture of our professional and personal choices — will determine whether this age expands or diminishes what it means to be a capable, purposeful human being.
The question is not what AI agents will do for us. The question is what kind of agents we will choose to become.