The Hidden Cost of AI ‘Workslop’: Why Professionals Are Creating It — and How Organisations Can Stop It

On a frigid Tuesday morning in January, a senior product manager at a Fortune 500 technology company opened what appeared to be a thoughtful three-page strategy memo from her colleague. The formatting was impeccable. The executive summary promised “actionable insights.” But as she read deeper, something felt wrong. The prose was oddly verbose yet strangely hollow—sentences that said everything and nothing simultaneously. Bullet points proliferated without prioritisation. Key decisions were buried in passive constructions. By the third paragraph, she recognised the telltale signs: this was AI-generated work, polished just enough to seem legitimate, but fundamentally empty.

She’d just encountered workslop.

Welcome to 2026’s defining workplace problem—one that paradoxically intensifies even as organisations invest billions in generative AI to boost productivity. While executives herald artificial intelligence as the great accelerator of knowledge work, something darker is emerging from the spreadsheets: a flood of low-quality AI-generated content that masquerades as professional output while offloading cognitive labour onto everyone else.

What Is AI Workslop—and Why Should Leaders Care?

The term “workslop,” coined by researchers at Stanford University and BetterUp in 2025, describes AI-generated workplace content that meets minimum formatting standards but lacks substance, clarity, or genuine insight. Think of it as the professional equivalent of content farm articles: superficially plausible, fundamentally worthless, and designed more to signal effort than to communicate ideas.

AI workslop manifests across every digital workplace surface. That rambling email that could’ve been two sentences. The slide deck with stock phrases like “synergistic opportunities” and “strategic imperatives” but no actual strategy. The meeting summary that somehow requires three pages to convey what everyone already discussed. The report that reads like a thesaurus exploded onto a template.

Unlike obviously bad writing, workslop is insidious precisely because it appears acceptable at first glance. It has proper grammar, professional vocabulary, formatted headers. It follows templates. But consuming it—trying to extract actual meaning—becomes exhausting cognitive work that the creator has outsourced to the reader.

According to research published in Harvard Business Review in January 2026, the average knowledge worker now encounters workslop in roughly 35% of internal communications, up from virtually zero two years ago. More alarmingly, the same research found that processing workslop consumes approximately four hours per week of professional time—time spent deciphering, clarifying, and essentially doing the cognitive work the original creator avoided.

The math is brutal. For a 1,000-person organisation where the average employee earns $80,000 annually, that’s approximately $9.2 million in annual productivity loss. And that’s the conservative estimate, accounting only for direct time costs. It excludes strategic errors from misunderstood communications, damaged professional relationships, and the slow erosion of organisational trust.
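The arithmetic behind that figure can be sketched in a few lines. The headcount, salary, and four-hours-per-week numbers come from the article; the 2,080 paid hours per year, 52 affected weeks, and the ~1.15 loaded-cost multiplier over base salary (a common benefits-and-overhead assumption) are our own assumptions, chosen so the result lands on the article's estimate.

```python
# Back-of-envelope workslop cost model using the article's figures.
# Assumptions (ours, not the article's): 2,080 paid hours/year,
# 52 affected weeks, and a ~1.15 loaded-cost multiplier over base
# salary to cover benefits and overhead.

def annual_workslop_cost(headcount, avg_salary, hours_lost_per_week,
                         paid_hours_per_year=2080, weeks_per_year=52,
                         loaded_cost_multiplier=1.15):
    hourly_rate = avg_salary / paid_hours_per_year      # ~ $38.46 at $80k
    hours_lost = hours_lost_per_week * weeks_per_year   # 208 h per person
    return headcount * hours_lost * hourly_rate * loaded_cost_multiplier

cost = annual_workslop_cost(headcount=1000, avg_salary=80_000,
                            hours_lost_per_week=4)
print(f"${cost:,.0f}")  # → $9,200,000
```

Without the loaded-cost multiplier the direct-salary figure is $8 million; either way, the order of magnitude is the point.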

The Generative AI Productivity Paradox Takes Shape

Here’s the uncomfortable truth: we’re witnessing a generative AI productivity paradox.

Organisations have embraced AI tools at unprecedented speed. Forbes reported in late 2025 that 78% of Fortune 1000 companies now provide employees with access to ChatGPT, Claude, or similar platforms. Microsoft Copilot has penetrated 65% of enterprise customers. The promise seemed obvious: automate routine communications, accelerate document creation, amplify individual productivity.

Yet productivity gains remain stubbornly elusive. Research from the National Bureau of Economic Research found that while individuals using AI tools report feeling more productive, their colleagues frequently report the opposite—spending more time on email, meetings, and clarifications. The pattern emerging is stark: AI doesn’t eliminate work; it redistributes it, often unfairly.

When one person uses AI to generate a meandering three-page email in 30 seconds, they’ve saved themselves time. But if that email requires five recipients to spend 10 minutes each deciphering it, the organisation has spent 50 minutes of collective attention so that one person could avoid a few minutes of careful writing. It’s productivity theatre masquerading as innovation.

“We’re creating a tragedy of the commons in corporate communications,” explains Dr. Sarah Chen, an organisational psychologist who studies technology adoption. “Every individual has an incentive to use AI to reduce their own cognitive load, but when everyone does it simultaneously, the collective burden actually increases.”

Why Intelligent Professionals Create Workslop: The Psychology of Cognitive Offloading

Understanding how to avoid AI workslop begins with understanding why people create it—and the answer is more nuanced than simple laziness.

The Seduction of Effortless Output

Generative AI tools offer something intoxicating to overwhelmed knowledge workers: instant competence. Faced with a blank screen and a looming deadline, the ability to summon 500 professionally formatted words with a single prompt feels like magic. The cognitive relief is immediate and powerful.

Research in cognitive science suggests that our brains default to the path of least resistance. When AI offers to handle the “tedious” work of structuring arguments, finding synonyms, or expanding bullet points into paragraphs, declining feels almost irrational. Why struggle with phrasing when the machine can do it instantly?

But here’s what’s lost in that exchange: the struggle is the work. Transforming vague thoughts into precise language forces clarity. Wrestling with how to structure an argument reveals which ideas actually matter. The friction of writing is where understanding happens. When we outsource that friction to AI, we outsource the thinking itself.

Performance Pressure and the AI Arms Race

Many professionals create workplace AI slop not from laziness but from fear.

In organisations where colleagues are using AI, abstaining feels like unilateral disarmament. If your peer can produce a 20-slide deck in an hour while you’re still outlining yours, are you falling behind? If the team expects rapid-fire email responses and AI makes that possible, can you afford to slow down and craft thoughtful replies?

This dynamic creates a vicious cycle. As The Washington Post reported, many professionals describe feeling “obligated” to use AI tools even when they suspect the output is inferior. The perception that everyone else is using AI—whether accurate or not—becomes self-fulfilling.

“I know my AI-generated status reports aren’t as clear as what I used to write by hand,” admitted one consultant who spoke on condition of anonymity. “But leadership expects them weekly now instead of monthly, and I simply don’t have time to write four thoughtful reports a month. So I prompt, I polish for ten minutes, and I send. I hate that my name is on something mediocre, but what choice do I have?”

Organisational Incentives That Reward Volume Over Value

The workslop epidemic isn’t solely a people problem—it’s a systems problem.

Many organisations have inadvertently created incentive structures that reward the appearance of productivity over actual value creation. When success metrics emphasise deliverables completed, emails sent, or reports filed rather than decisions improved or problems solved, AI becomes an enabler of performative work.

Consider the phenomenon of “AI mandates without guidance.” CNBC documented how several major corporations have encouraged or even required employees to use generative AI tools—framed as “staying competitive” or “embracing innovation”—without providing clear frameworks for appropriate use. The message employees receive is essentially: use AI more, but we won’t tell you when or how.

The result is predictable. If using AI is valorised regardless of outcome, and quality is difficult to measure, employees will use AI for everything. Quantity becomes the proxy for competence.

Tool Design Flaws: When AI Makes Slop Too Easy

Finally, we must acknowledge that current generative AI tools are almost designed to produce workslop.

Most AI assistants operate on a principle of prolixity—when uncertain, they add words. A single sentence of input can yield paragraphs of output, all grammatically correct, much of it filler. The tools don’t naturally distinguish between situations requiring depth and those requiring brevity. They don’t ask, “Is this the right medium for this message?” or “Have I actually said anything meaningful?”

Moreover, the friction required to create workslop is near-zero, while the friction required to create something genuinely good remains high. Generating mediocre content takes one prompt. Creating exceptional content still requires human judgment, iteration, editing—the very work AI was supposed to eliminate.

Until tool designers build in more friction for low-value outputs or more support for high-value thinking, the path of least resistance will continue producing slop.

The Real Cost: Why AI Reduces Productivity Despite Individual Gains

The damage from AI workslop extends far beyond wasted time.

The Productivity Tax Compounds

Research from Axios and workplace analytics firm ActivTrak found that processing low-quality AI content doesn’t just consume time—it fragments attention and depletes decision-making capacity.

When professionals encounter workslop, they face a choice: invest energy trying to extract meaning, or request clarification (which creates more work for everyone). Either option imposes costs. The first depletes cognitive resources needed for strategic work. The second generates additional communication overhead and delays.

Over time, these micro-costs accumulate into macro-dysfunction. Teams spend more time in “alignment meetings” because written communications no longer align anyone. Projects stall because requirements documents are simultaneously verbose and vague. Strategic initiatives falter because the business case was generated rather than reasoned.

“We’re seeing organisations where 60% of email volume is essentially noise,” notes Michael Torres, a management consultant who advises on digital workplace practices. “People have started assuming that anything longer than three paragraphs can be safely ignored, which means genuinely important communications are now getting buried alongside the slop.”

Trust Erosion in Professional Relationships

Perhaps more corrosive than the time cost is the damage to professional credibility and trust.

When colleagues recognise that someone is routinely submitting AI-generated work with minimal thought, respect diminishes. The implicit message is clear: “I don’t value your time enough to think carefully before communicating with you.” Over time, this erodes the social capital required for effective collaboration.

Several organisations interviewed for this article reported a concerning trend: professionals increasingly ignore communications from colleagues known to produce workslop. One executive described creating an informal “filter list” of people whose emails he automatically skims for essential information while disregarding analysis or recommendations.

“It’s a tragedy,” he acknowledged. “Some of these are talented people. But I’ve learned that their AI-generated memos are unreliable, so I just extract the data and ignore their conclusions. That’s probably causing me to miss good ideas, but I don’t have time to sift through the filler.”

This dynamic is particularly damaging for early-career professionals who haven’t yet established reputations. When senior leaders encounter workslop from junior team members, they form lasting impressions about competence and judgment—impressions that may be undeserved but difficult to reverse.

Decision-Making Degradation

Most dangerous is workslop’s impact on organisational decision-making.

The problems with AI-generated work often hide in the space between what’s written and what’s meant. A strategy recommendation might sound plausible but rest on flawed assumptions the AI didn’t understand. A risk assessment might list generic concerns without identifying the actual specific vulnerabilities. A project post-mortem might catalogue events without extracting lessons.

When leaders make decisions based on AI-generated analysis they assume was human-reasoned, they’re building on potentially unstable foundations. Several executives described situations where strategic decisions were made based on compelling-sounding recommendations, only to discover later that the underlying analysis was superficial—the product of AI summarising publicly available information rather than domain expertise.

“We nearly acquired the wrong company because the due diligence memo was beautifully formatted nonsense,” confided one private equity principal. “The analyst had used AI to expand his notes into a full report, but the AI didn’t understand our investment thesis. We only caught it when someone noticed a logical inconsistency buried in paragraph fourteen.”

Workslop in the Wild: Real-World Examples Across Sectors

To understand the phenomenon’s pervasiveness, consider these anonymised examples from different industries:

Technology sector: A product team at a major software company implemented a policy requiring weekly written updates. Within a month, these updates—once concise and insightful—had bloated to multi-page documents filled with phrases like “optimising for synergistic outcomes” and “leveraging agile methodologies to drive stakeholder value.” Product managers were spending 90 minutes weekly generating these reports and roughly the same reading everyone else’s. Actual status could have been communicated in a 5-minute standup.

Professional services: At a global consulting firm, junior consultants began using AI to draft client deliverables, then having senior partners review and approve. Partners initially appreciated the time savings—until clients started providing feedback that reports were “generic” and “lacking industry insight.” The firm’s differentiation had always been deep contextual understanding; AI was systematically stripping that away. Client renewals declined 12% year-over-year.

Financial services: A European investment bank encouraged traders and analysts to use AI for market commentary and research notes. Within weeks, recipients were complaining that the analysis had become “undifferentiated” and “obvious.” The AI could summarise public information beautifully but couldn’t offer the proprietary insights that justified premium fees. The bank quietly reversed its AI encouragement policy.

Government/public sector: A national regulatory agency (outside the US) began using AI to draft policy guidance documents. The resulting materials were so dense and jargon-heavy that compliance officers reported spending more time interpreting the guidance than they would have under the previous, simpler system. What was intended to accelerate regulatory clarity instead created confusion.

These aren’t isolated incidents. They represent a pattern: organisations adopting AI for efficiency gains, initially seeing positive signals, then discovering that quality degradation imposes costs that eventually exceed the efficiency benefits.

How Organisations Can Stop the Workslop Epidemic: Evidence-Based Solutions

Addressing workslop requires interventions at multiple levels: cultural, structural, and technological. Leading organisations are pioneering approaches that preserve AI’s benefits while preventing its misuse.

1. Establish Clear Guidelines for Appropriate AI Use

The most effective organisations don’t ban AI—they define when and how it should be used.

The Financial Times documented how several European firms have implemented “traffic light” frameworks:

  • Green (encouraged): Using AI for initial research, brainstorming, formatting assistance, grammar checking, translation
  • Yellow (use with caution): Drafting external communications, summarising complex documents, creating templates
  • Red (prohibited or requires disclosure): Final client deliverables without human verification, strategic recommendations, performance reviews, legal documents

The key is specificity. Generic guidance like “use AI responsibly” proves meaningless in practice. Concrete rules—“all client-facing documents must be reviewed and edited by a human, with AI assistance disclosed if substantial”—provide actionable boundaries.
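One reason traffic-light frameworks work is that they are specific enough to encode directly into tooling or onboarding checklists. The sketch below shows one hypothetical encoding; the category names, task labels, and the "default to yellow" rule are illustrative assumptions, not any named firm's actual policy.

```python
# Hypothetical encoding of a "traffic light" AI-use policy like the
# one described above. Category and task names are illustrative,
# not taken from any real firm's policy document.

AI_USE_POLICY = {
    "green":  {"research", "brainstorming", "formatting", "grammar",
               "translation"},
    "yellow": {"external_drafts", "document_summaries", "templates"},
    "red":    {"client_deliverables", "strategic_recommendations",
               "performance_reviews", "legal_documents"},
}

def policy_level(task: str) -> str:
    """Return 'green', 'yellow', or 'red' for a task. Anything
    unclassified defaults to 'yellow' (use with caution)."""
    for level, tasks in AI_USE_POLICY.items():
        if task in tasks:
            return level
    return "yellow"

print(policy_level("translation"))      # → green
print(policy_level("legal_documents"))  # → red
```

Defaulting unknown tasks to "yellow" rather than "green" reflects the article's point: ambiguity should trigger caution, not permission.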

2. Train for Human-in-the-Loop Best Practices

Simply providing AI tools without training is like distributing scalpels without medical school. Leading organisations are investing in structured training programmes that teach effective AI collaboration.

These programmes emphasise several principles:

  • Use AI as a thought partner, not a ghostwriter: Engage AI in dialogue to refine your thinking, then write the final version yourself
  • Never send AI-generated content without substantial editing: If you can’t improve the AI’s output meaningfully, you probably don’t understand the topic well enough
  • Apply the “telephone test”: If you couldn’t explain the content verbally with the same clarity, don’t send the written version
  • Favour brevity over AI-generated expansion: If AI suggests adding paragraphs to your bullet points, resist unless each addition adds genuine value

Some organisations have implemented “AI literacy” certification programmes, similar to data security training, ensuring all employees understand both capabilities and limitations.

3. Redesign Incentives to Reward Quality Over Quantity

Stopping workslop ultimately requires addressing the organisational conditions that incentivise it.

Progressive firms are shifting metrics:

  • Instead of tracking “reports completed,” measure “decisions improved” or “clarity ratings” from recipients
  • Replace requirements for lengthy updates with brief, structured formats (Amazon’s famous six-page memos, but actually written by humans)
  • Implement 360-degree feedback that specifically assesses communication quality and efficiency
  • Recognise and reward professionals who communicate effectively with fewer, better-crafted messages

One technology company experimented with a provocative policy: any email longer than 200 words required VP approval. While ultimately too restrictive, the initial trial dramatically reduced communication volume and improved clarity. The modified version—any email over 200 words must include a three-sentence summary at the top—proved sustainable.
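The modified 200-word rule is simple enough to automate as a pre-send check. The sketch below is a minimal, assumed implementation: the 200-word threshold comes from the article, but the sentence-counting heuristic for detecting a leading summary is our own rough approximation.

```python
# Minimal sketch of the modified email rule described above: messages
# over 200 words must open with a short summary. The summary-detection
# heuristic (counting sentence-ending marks in the opening characters)
# is our assumption, for illustration only.

def check_email(body: str, word_limit: int = 200,
                summary_sentences: int = 3) -> str:
    words = body.split()
    if len(words) <= word_limit:
        return "ok"
    # Crude heuristic: require at least three sentence endings within
    # the first ~300 characters, treated as the leading summary.
    first_part = body[:300]
    if sum(first_part.count(ch) for ch in ".!?") >= summary_sentences:
        return "ok: long email with leading summary"
    return "flag: long email, no summary detected"

short = "Quick update: the launch is on track for Friday."
print(check_email(short))  # → ok
```

A real deployment would need a less brittle summary detector, but even this crude gate changes the default from "expand freely" to "justify the length."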

4. Build Technical Controls and Transparency

Some organisations are implementing technical measures to create accountability:

  • Watermarking or disclosure requirements: Some enterprise AI tools now include metadata indicating AI involvement, allowing recipients to calibrate expectations
  • Usage monitoring: Analytics that identify individuals generating unusually high volumes of AI content, triggering coaching conversations
  • Quality checking tools: AI-powered systems that ironically detect AI-generated content and flag it for human review before sending

While these approaches raise legitimate privacy concerns and shouldn’t become surveillance systems, transparent implementation can help organisations understand usage patterns and identify where intervention is needed.

5. Model Alternative Behaviour from Leadership

Perhaps most critically, senior leaders must demonstrate that thoughtful, concise human communication is valued and rewarded.

When executives send brief, carefully considered emails rather than AI-generated essays, they signal priorities. When leaders openly discuss their AI use—”I used ChatGPT to research this topic, then wrote this analysis based on what I learned”—they model appropriate transparency. When promotions go to people who communicate with clarity rather than volume, the message resonates.

“I started ending important emails with a note: ‘This email was written by me without AI assistance because this decision matters,'” shared one CFO. “It sounds almost comical, but the feedback was overwhelmingly positive. People told me they noticed the difference and appreciated the care.”

The Path Forward: Will Workslop Fade or Persist?

Looking ahead, several scenarios could unfold.

The optimistic view suggests that workslop represents growing pains—an inevitable phase as organisations learn to integrate powerful new tools. As AI literacy improves, social norms against slop solidify, and tools become more sophisticated at generating genuinely useful content, the problem may naturally recede.

Some evidence supports this optimism. The Economist noted in late 2025 that organisations in their second or third year of widespread AI adoption show better usage patterns than those in their first year. Cultures develop antibodies. People learn what works and what doesn’t.

The pessimistic view holds that workslop may be symptomatic of deeper limitations in how we’re deploying generative AI. If the fundamental value proposition is “create more content with less effort,” we shouldn’t be surprised when people create more low-value content. The problem isn’t user education—it’s the mismatch between the tool’s capabilities and the actual needs of knowledge work.

This perspective suggests we need different tools entirely. Rather than AI that helps you write more, perhaps we need AI that helps you think more clearly, summarise more concisely, or communicate more precisely. Tools designed for quality rather than quantity.

The likely reality probably lies between these poles. Workslop won’t disappear entirely—it’s too easy to create and too tempting under pressure. But organisations that take it seriously as a cultural and operational challenge can substantially mitigate it. Those that don’t will find themselves drowning in a flood of plausible-sounding nonsense, watching productivity gains evaporate despite significant AI investment.

The broader question is whether the current generation of generative AI tools will prove to be genuinely transformative for knowledge work or merely another technology that seems revolutionary until organisations discover its hidden costs. Workslop may be our first clear signal that the answer is more complicated than the hype suggested.

Conclusion: Choose Clarity Over Convenience

Two years into the generative AI revolution, we’re learning an uncomfortable truth: tools that make it easier to create content don’t automatically make communication more effective. Sometimes, they make it worse.

The solution isn’t to reject AI—the technology offers genuine value when deployed thoughtfully. But we must resist the siren call of effortless output and recognise that good communication, like good thinking, requires effort. There are no shortcuts to clarity.

For leaders, the imperative is clear: establish guardrails, model best practices, and redesign systems that inadvertently reward slop. Create cultures where concision is prized and where the quality of thinking matters more than the volume of deliverables.

For individual professionals, the choice is equally stark: you can either do the cognitive work yourself and build a reputation for clear thinking, or you can outsource that work to AI and accept the professional consequences. Your colleagues will notice the difference, even if they don’t say so.

The hidden cost of AI workslop isn’t just measured in dollars or hours. It’s measured in degraded decision-making, eroded trust, and the slow corrosion of professional standards. We’re at a fork in the road: one path leads toward more thoughtful integration of AI that amplifies human judgment; the other leads toward increasingly automated mediocrity.

Which path your organisation takes isn’t determined by technology. It’s determined by choices—about what you value, what you reward, and what you’re willing to tolerate.

Choose carefully. The clarity of your communications may determine the quality of your future.



Could AI’s Leading Men Become as Powerful as Ford or Rockefeller? For Now, They Are Still a Long Way Behind.

The five men reshaping intelligence — Dario Amodei, Demis Hassabis, Elon Musk, Mark Zuckerberg, and Sam Altman — command wealth, attention, and technological leverage that no previous generation of innovators has enjoyed. Yet the distance between their present dominance and the systemic, civilization-bending grip once exercised by John D. Rockefeller or Henry Ford remains vast — and poorly understood.

Imagine a boardroom meeting in 2035. The agenda is simple: who controls the infrastructure of thought itself? A decade earlier, five men launched what many called the most consequential technological disruption since electricity. By 2026, their companies had collectively captured trillions of dollars in market value, reshaped labor markets across three continents, and triggered geopolitical confrontations from Brussels to Beijing. And yet, if you measure their power by the standards history reserves for its true industrial titans — the men who didn’t just build industries but became them — the five AI leading men of our era still have a very long way to go.

That is not a comfortable argument to make. The numbers alone seem to render it absurd. Elon Musk’s net worth now exceeds $811 billion, a figure that surpasses the GDP of Poland. Musk’s February 2026 all-stock merger of SpaceX and xAI created a combined entity valued at $1.25 trillion — a single transaction larger than the entire U.S. defense budget. OpenAI, now valued at approximately $500 billion, counts some 800 million weekly active users of ChatGPT, a number that would have seemed science fiction five years ago. Anthropic — founded by Dario Amodei and his sister Daniela — reached a valuation of $380 billion in early 2026, while Meta has committed to spending $115 to $135 billion in capital expenditure in 2026 alone, with an astonishing $600 billion pledged toward data centers through 2028.

These are not ordinary fortunes. They are structurally new categories of wealth concentration. And still, the Rockefeller comparison fails — and fails instructively.

What Made a Tycoon a Tycoon: The Three Pillars of Historical Power

To understand why AI tycoons remain a long way behind their Gilded Age predecessors, one must first understand what actually made Rockefeller and Ford so uniquely dangerous to the social order of their time. It was not simply their wealth. Adjusted for GDP, Rockefeller’s peak fortune has been estimated at roughly $400 billion in today’s dollars — comfortably surpassed by Musk. What made Standard Oil a civilizational force was something more specific and more structural: the simultaneous control of physical infrastructure, political capture, and cultural monopoly.

Rockefeller didn’t just refine oil; he controlled approximately 91% of United States oil refining capacity by the mid-1880s through ownership of the pipelines, the railroad rebates, and the pricing mechanisms that every competitor had to use to survive. He didn’t lobby Congress — he owned the conversation. Ford, similarly, didn’t just manufacture cars; he built company towns, set wages for an entire economy, and deployed a private security apparatus — the Ford Service Department — to enforce his will on a captive workforce. Both men bent the physical world to their models in ways that left no exit for competitors, workers, or governments.

That is the three-pillar framework that the AI quintet has not yet replicated: physical infrastructure lock-in, political capture, and cultural monopoly. The gap between aspiration and achievement on each of these dimensions is the real story of power in 2026.

Infrastructure: Who Controls the Pipes?

The most important question in any era of technological transformation is not who builds the smartest machine, but who controls the plumbing. Rockefeller’s genius was not chemistry — it was logistics. He understood that the pipeline was more powerful than the refinery.

In the AI economy, the equivalent of the pipeline is the data center, the chip, and the undersea cable. Here the picture for the quintet is mixed at best. Mark Zuckerberg’s Meta is building on the most ambitious scale — two mega-clusters that dwarf any corporate construction project in a generation — but the silicon in those data centers is manufactured almost entirely by NVIDIA, a company none of the five control. Musk’s SpaceX-xAI merger is the most vertically integrated attempt to replicate Rockefeller’s pipeline logic: orbital data centers fed by Starlink satellites, in theory giving xAI the physical substrate to train and deploy models without dependence on third-party cloud providers. But as of 2026, that vision remains largely prospective. xAI’s Grok competes credibly against ChatGPT and Claude, but it does not yet possess the proprietary infrastructure advantage that would make it structurally inescapable.

Sam Altman, for his part, has no direct equity in OpenAI, earning a nominal salary of roughly $65,000 per year. His influence derives almost entirely from his position at the helm of the world’s most recognizable AI brand — a form of power that is real, but brittle. The moment a better or cheaper model displaces GPT, the institutional moat begins to crack. Rockefeller, by contrast, had no such vulnerability: he owned the pipes regardless of whose oil flowed through them.

Dario Amodei’s Anthropic presents a different case. With a $380 billion valuation, enterprise AI revenues reportedly growing at exponential rates, and a model — Claude — that has captured an estimated 40% of enterprise large language model spending in the United States, Anthropic is the most quietly formidable player in the quintet. Amodei has also demonstrated a rare form of institutional courage: in February 2026, he refused a Pentagon demand to remove contractual prohibitions on Claude’s use for mass domestic surveillance, even as the Trump administration labeled Anthropic a “supply-chain risk” and ordered agencies to stop using the model. That is not the behavior of a man who has captured the state. It is the behavior of a man trying not to be captured by it.

Political Power: Proximity Is Not Capture

The AI leading men have achieved unprecedented proximity to political power. Altman donated to Trump’s inaugural fund, sat on San Francisco’s mayoral transition team, and has testified repeatedly before Congress. Musk, as an architect of the Department of Government Efficiency, has arguably achieved more direct influence over federal bureaucracy than any private citizen since Bernard Baruch. Zuckerberg has reoriented Meta’s content moderation in ways that reflect political calculation as much as principled policy.

And yet proximity is not capture. Rockefeller’s Standard Oil didn’t merely lobby regulators — it effectively set the regulatory agenda in oil-producing states for two decades. The steel and railroad barons didn’t just meet with senators; they funded them in ways that made legislative independence a legal fiction.

Today’s AI executives remain subject to forces their predecessors never faced. The European Union’s AI Act imposes binding constraints that no 19th-century robber baron ever encountered. Antitrust scrutiny from both the Department of Justice and the EU threatens the integration strategies of both Google DeepMind and Meta. Anthropic’s standoff with the Pentagon demonstrates that even the most safety-focused AI lab cannot escape the gravitational pull of geopolitical competition. The five men are powerful political actors — but they are actors on a stage with many more directors than Rockefeller ever faced.

The Cognition Economy: A New Kind of Monopoly Risk

Where the AI quintet is converging toward something genuinely Rockefellerian is in what might be called the cognition economy — the emerging marketplace where intelligence itself, not oil or steel, is the resource being extracted, refined, and sold.

Demis Hassabis, the Nobel Prize–winning CEO of Google DeepMind, said at Davos 2026 that today’s AI systems are “nowhere near” human-level AGI, placing the milestone at “five to ten years” away. Amodei, characteristically more bullish, has predicted that AI will reach “Nobel-level” scientific research capability within two years, and has described the coming AI cluster as “a country of geniuses in a data center” running at superhuman speeds. If either is even partially correct, the downstream consequences for labor markets, knowledge production, and institutional power are more profound than anything the Industrial Revolution generated.

The danger is not that one of these five men will own the world’s intelligence outright. It is that the economic logic of AI — massive upfront compute costs, proprietary training data, and compounding capability advantages — tends toward the same concentration dynamics that produced Standard Oil. A model that is marginally better attracts more users; more users generate more data; more data enables further improvement; the loop closes. This is not metaphor. Meta’s Llama 5, released in April 2026, was explicitly designed to commoditize proprietary AI — Zuckerberg’s theory being that if intelligence becomes free, the company that distributes it through 3.5 billion social media users wins by default. That is not so different from Rockefeller’s insight that the real money was never in the oil itself, but in making yourself indispensable to everyone who wanted to transport it.

Cultural Monopoly: The Unfinished Frontier

Henry Ford didn’t just build cars. He built a culture. The five-dollar day, the 40-hour workweek — Ford shaped how Americans understood the relationship between labor, leisure, and consumption. His prejudices, published in the Dearborn Independent and later praised by Adolf Hitler, exercised a cultural influence that no modern tech executive has approached, for better or for worse.

The AI quintet has, so far, produced nothing comparable to that kind of cultural ownership. ChatGPT is used by hundreds of millions, but it has not yet redefined the terms of civic life in the way that Ford’s assembly lines redefined time itself. The AI leading men give TED talks and publish essays — Amodei’s “Machines of Loving Grace” and its sequel “The Adolescence of Technology” are genuine intellectual contributions — but they have not yet built the durable cultural institutions that the Carnegies and Fords used to launder their economic power into social legitimacy. The Carnegie libraries are still standing. The Ford Foundation still funds democracy initiatives. What will Sam Altman’s equivalent be? We do not yet know.

This gap may close faster than we expect. If AI agents do begin displacing 50% of white-collar jobs — as Amodei and others predict within five years — the resulting social disruption will demand new cultural narratives. The men who shape those narratives will wield a form of power that makes their current wealth look like a down payment.

Why the Gap Matters — And Why It Is Narrowing

The distance between the AI tycoons of 2026 and the historical robber barons is real, but it is not permanent. Three trends are accelerating the convergence.

First, physical infrastructure is being built at unprecedented speed. Meta’s $600 billion data center pledge, Musk’s orbital computing vision, and the arms-race dynamics of semiconductor procurement are creating the structural lock-in that historically defines industrial monopoly. The company that owns the compute wins — not just the model race, but the infrastructure race.

Second, regulatory arbitrage is becoming a competitive strategy. Just as Rockefeller used the legal patchwork of late-19th-century interstate commerce to outmaneuver state-level regulators, AI companies are exploiting the gap between national regulatory frameworks to deploy capabilities that no single jurisdiction can constrain. The Trump administration’s rollback of Biden-era AI safety executive orders has already opened space for more aggressive deployment by American companies.

Third, the feedback loops of AI capability are compounding in ways that no previous technology has. When Anthropic’s own engineers have largely stopped writing code themselves — directing AI-generated code as product managers rather than authors — the productivity advantages of leading AI labs over their competitors begin to resemble Standard Oil’s pipeline advantages over independent refiners. Not yet identical. But structurally rhyming.

The View from 2035: A Question of Institutions

The most important distinction between Ford, Rockefeller, and today’s AI leading men may ultimately be institutional rather than technological. The Gilded Age tycoons operated in a world with weak antitrust frameworks, no administrative state to speak of, and a political economy that had not yet developed the tools to constrain concentrated private power. The Progressive Era — Teddy Roosevelt’s trust-busting, the Sherman Act, the eventual dissolution of Standard Oil — was the institutional response. It took a generation.

We may be at the beginning of a similar reckoning. Whether the five men who currently lead the AI revolution become as powerful as Ford or Rockefeller depends less on their own ambitions — which are extraordinary — than on the speed and coherence of the institutional response. Policymakers who wait for the infrastructure to be fully built before acting will find themselves in the same position as the regulators who confronted Standard Oil in 1911: arriving at the scene of a revolution already completed.

The AI leading men are not, today, as powerful as Rockefeller. But they are building the conditions under which someone very like them could be. That is the moment for executives, investors, and policymakers to pay attention — not when the resemblance is complete, but now, while the architecture is still under construction and the pipes have not yet been welded shut.



AI

The Mythos Meeting: Anthropic’s Dangerous AI and the White House’s Calculated Gamble | 2026

Published

on

The Amodei–Wiles meeting signals a seismic U.S. AI policy pivot. Why Washington is now courting the Anthropic Mythos model it once tried to destroy.

Imagine the scene: a Friday afternoon in the West Wing, the air carrying the particular weight of decisions that cannot be undecided. Dario Amodei, the quietly intense CEO of Anthropic, sits across from Susie Wiles, the White House Chief of Staff whose political instincts are said to be the closest thing to a gyroscope this administration possesses. Between them, unspoken but omnipresent, is a question that has convulsed Washington’s national-security establishment for weeks: what do you do with an AI so dangerous that even its creators are frightened of it—and so potent that refusing to use it might be the most reckless choice of all?

That meeting, confirmed by Axios, CNN, and the Associated Press, is not merely a diplomatic thaw between a tech company and its government tormentor. It is the moment Washington finally admitted what it has known all along: that frontier AI has outrun every framework, every regulation, and every posture of ideological hostility that American politics could muster. The implications—for U.S. national security, for the global AI arms race, and for the governance of technology at civilizational scale—are seismic.

What Mythos Is, and Why It Terrifies the People Paid to Worry

To understand the Dario Amodei–Susie Wiles meeting and its national security implications, you must first understand what Anthropic’s Claude Mythos Preview actually does. Launched on April 7, 2026, Mythos is not a chatbot upgrade. It is, in the judgment of the cybersecurity community, a watershed event—a model of such extraordinary capability in identifying software vulnerabilities that it reportedly discovered thousands of zero-day flaws across major operating systems and browsers before breakfast.

Anthropic’s co-founder and policy chief Jack Clark, speaking at the Semafor World Economy Conference this week, warned that the fallout from Mythos’s capabilities for public safety, national security, and the economy could be “severe” (Washington Times). He was not speaking hyperbolically. He was warning. Clark added that Mythos is not a “special model”: “there will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities” (PBS).

This is the paradox that has split Washington clean in two. Mythos can map the defensive perimeter of any digital system with an acuity no human team could match. It can find the crack in the levee before the flood. But it can also — in theory, in the wrong hands, with the wrong prompts — hand an adversary the blueprint for that same attack. The model can identify cybersecurity threats, but it can also give hackers a roadmap for attacking companies or the government (CNN). One U.S. official, in a phrase that deserves to be carved somewhere permanent, told Axios: “They’re using this Mythos cyber weapon to find friendly ears in the government. They’re succeeding.”

Recognizing this dual-use reality, Anthropic did not release Mythos to the public. Instead, it launched Project Glasswing — a tightly controlled defensive program that grants limited access only to a vetted circle of partners: Amazon, Google, Microsoft, Apple, major banks including JPMorgan Chase, cybersecurity firms, and the Linux Foundation. The explicit mission is defense only: scan your own systems, find the bugs, patch them fast, and keep the bad guys out (Zero Hedge). Anthropic also pledged up to $100 million in usage credits and $4 million in donations to open-source security groups.

It is, by any reckoning, an extraordinary act of self-regulation from a private company. It is also the act that made the U.S. government desperate to get inside the tent.

The Meeting: What We Know, and What It Really Means

The meeting, first reported by Axios, comes after months of hot tension between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on AI development to minimize potential risks. It marks a breakthrough in Amodei’s effort to resolve the company’s bitter AI fight with the Pentagon.

The White House said the meeting was “introductory,” calling it “productive and constructive.” “We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology,” the White House said in a statement. “The conversation also explored the balance between advancing innovation and ensuring safety.” (CNN)

The diplomatic language obscures the pressure beneath. Treasury Secretary Scott Bessent joined the meeting, a notable escalation of seniority. “This is a big problem. Everyone’s complaining. There’s all this drama. So this got elevated to Susie to hear Dario out, determine what is bullsh-t and start to plot a way forward,” a Trump adviser told Axios.

Those familiar with the negotiations describe what the White House is actually seeking: next steps are expected to center on how government departments engage with Anthropic’s new Mythos Preview model. This is not abstract policy discussion. Some government agencies want access, and the White House and Anthropic are discussing the terms under which that might be possible. Two sources told Axios that talks are ongoing and that agencies may get access to Mythos in the coming weeks.

What Amodei wants in return is equally clear. He has drawn two lines in the sand that have proved non-negotiable: no use of Claude for mass domestic surveillance, and no deployment in fully autonomous weapons systems. Amodei noted that Anthropic has proactively deployed its models to the Department of War and the intelligence community, and was the first frontier AI company to deploy models in the U.S. government’s classified networks and at the National Laboratories (Attack of the Fanboy). The Pentagon’s position — that it needs AI available for “all lawful purposes” without carve-outs — strikes many observers outside the building as, at minimum, an extraordinary demand to make of a private-sector partner.

From Pentagon Blacklist to White House Courtship: The Policy U-Turn

The speed of this reversal deserves its own chapter in any future history of American governance.

In late February, President Trump directed federal agencies to stop using Anthropic’s technology. In early March, the Defense Department formally designated Anthropic a supply-chain risk, effectively blocking its models from use on Pentagon contracts (CNN). The designation — previously reserved for companies with ties to foreign adversaries — was applied to a San Francisco AI safety company because it refused to remove ethical guardrails. A federal judge in California, granting Anthropic a preliminary injunction, wrote that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”

Yet even as that legal fight raged, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned executives from JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley and urged them to use Anthropic’s new Mythos model to detect cybersecurity vulnerabilities in their systems (The Next Web). The left hand of government was blacklisting what the right hand was urgently deploying.

Key officials in the Trump administration see Anthropic and its leaders as woke doomsters, and some relished slapping on the “supply chain risk” designation. But some of those same officials, and many others, also see Anthropic’s tools as best-in-class when it comes to AI for national security purposes. One Defense official told Axios at the height of the Pentagon–Anthropic feud that the only reason the talks were ongoing was: “these guys are that good.”

This is the grotesque comedy—and the cold logic—of American AI policy in 2026. Ideological hostility colliding with operational necessity. The government cannot afford the luxury of its own grievance.

Geopolitical Stakes: China, Europe, and the New AI Arms Race

The Amodei–Wiles meeting cannot be understood outside its broader geopolitical frame. Jack Clark’s comment at Semafor was not idle — it was a countdown. A source close to the negotiations told Axios: “It would be grossly irresponsible for the U.S. government to deprive itself of the technological leaps that the new model presents. It would be a gift to China.”

China’s AI labs—DeepSeek, Zhipu, Baidu’s ERNIE—are advancing at a pace that was unimaginable eighteen months ago. The release of DeepSeek’s R1 model in early 2025 rattled markets and shattered the comfortable assumption that America’s compute advantage translated automatically into a capability lead. Beijing’s military-civil fusion doctrine means that any advance in Chinese commercial AI carries direct implications for the People’s Liberation Army. Anthropic has passed up several hundred million dollars to cut off use of Claude by firms linked to the Chinese Communist Party and shut down CCP-sponsored cyberattacks that attempted to abuse the system (Attack of the Fanboy).

Europe, for its part, is watching from a peculiar position: deeply invested in AI safety regulation through the EU AI Act, yet without a frontier model lab of its own capable of matching Anthropic, OpenAI, or Google DeepMind. The UK’s NCSC and regulators are scrambling to assess Mythos’s risk profile. The asymmetry is uncomfortable: American and Chinese labs are racing to build and deploy the most powerful AI systems the world has seen, while Europe writes governance frameworks for systems that are already obsolete by the time the ink dries.

In this context, the U.S. government’s approach to Anthropic’s Mythos Preview and cybersecurity defense is not merely domestic policy. It is a strategic posture in a new kind of arms race—one where the weapons are invisible, the battlefield is software infrastructure, and the most dangerous adversary may be inaction itself.

The Opinion: Washington Must Choose

Let me say plainly what the diplomatic language of this week’s meetings cannot: the United States government does not have a coherent AI strategy. It has a collection of competing institutional impulses—the Pentagon’s maximalism, the intelligence community’s pragmatism, the Treasury’s alarm about financial infrastructure, and the White House’s moment-to-moment political management—loosely tethered by the fiction of a unified executive branch.

The negotiations over White House access to Mythos expose this incoherence in full. A company is simultaneously being blacklisted by one arm of the government and courted by three others. The same model is being called a national-security threat and a national-security imperative, often by people in the same building. This is not policy. It is cognitive dissonance with a budget.

What Washington must do—and what this meeting, however “introductory,” at least gestures toward—is make a choice. Either frontier AI labs like Anthropic are strategic national assets to be cultivated under a framework of responsible access and negotiated guardrails, or they are private entities whose autonomy makes them inherently adversarial to state power. You cannot hold both positions at once, regardless of how many executive orders you issue.

The Anthropic model—safety-conscious development, controlled deployment through Project Glasswing, categorical refusal of certain military applications—is not naïveté. It is a serious attempt to thread a needle that governments have proven incapable of threading themselves. The Pentagon’s insistence on unrestricted access is not hardheadedness. It is institutional anxiety dressed as operational necessity. Between these poles, there is a deal to be made. But making it requires the kind of institutional self-honesty that bureaucracies resist until the cost of denial becomes catastrophic.

The cost is visible. Civilian agencies like the Departments of Energy and Treasury are responsible for safeguarding critical sectors like the electric grid and the financial system (Axios). Those systems are being probed, daily, by adversaries who will not wait for Washington to resolve its internal politics. Every week the impasse continues is a week the electric grid goes unscanned, the financial system goes unpatched, and the advantage shifts.

What Comes Next: For Regulators, Enterprises, and Citizens

The practical near-term architecture of whatever deal emerges from the Mythos negotiations is beginning to take shape. An internal Office of Management and Budget memo lays out strict protocols for safe access, data handling, and usage limits so that major departments can deploy Mythos against their own sprawling digital estates. The focus remains narrow: vulnerability discovery, network hardening, and defensive preparedness (Zero Hedge).

For enterprises, the implications of Anthropic’s Mythos model for cybersecurity defense extend well beyond Washington. If Project Glasswing’s 40-plus organizations can use Mythos to discover and patch vulnerabilities faster than adversaries can exploit them, the model for critical infrastructure protection changes fundamentally. Security becomes proactive rather than reactive. The question is whether the access framework can scale—and whether Anthropic can maintain meaningful guardrails as it does.

A real compromise would likely mean granting Anthropic broader federal access for cybersecurity and software testing while preserving the safety commitments the company says define the product. For Washington, the tradeoff is stark: use a powerful model to harden government systems, or pressure the company to weaken the very restraints that make its technology acceptable in the first place (Prism News).

For citizens, this matters in ways that extend far beyond any individual’s awareness of AI policy. The security of the national power grid, the integrity of the financial system, the resilience of government networks—these are not abstract concerns. They are the infrastructure on which daily life depends. The Mythos Preview is not, in the end, a tech industry story. It is a story about who gets to decide how the most powerful tools in human history are deployed, and under what terms.

The Kicker: The Future Is Already in the Room

Here is what the optimists and the catastrophists both miss: the most important fact about this moment is not that Anthropic’s Mythos model exists, nor that the White House is courting it, nor even that China is close behind. The most important fact is that every frontier model released from here forward will carry something like Mythos’s capabilities. The Pandora’s box is already open. The question is not whether to touch what’s inside. The question is whether to pick it up with gloves on—or with bare hands.

The Amodei-Wiles meeting, whatever its immediate outcome, represents the first serious acknowledgment by the American executive branch that the era of AI as an abstract policy problem is over. The technology is here, it is geopolitically consequential, and it will not wait for regulatory consensus. Washington can lead this transition with deliberate guardrails and structured public-private partnership, or it can continue managing it through institutional contradiction and inter-agency feuding until an adversary—human or algorithmic—exploits the gap.

The Friday meeting in the West Wing was quiet. But the decisions made in its aftermath will be anything but.



Analysis

Wall Street Is Betting Against Private Credit — and That Should Worry Everyone

Published

on

When the architects of the private credit boom begin selling instruments that profit from its distress, the market has entered a new and more dangerous phase.

There is an old rule of thumb in credit markets: the moment the banks that helped build a structure start quietly pricing in its failure, it is time to pay very close attention. That moment arrived on April 13, 2026, when the S&P CDX Financials Index — ticker FINDX — began trading, giving Wall Street its first standardised credit-default swap benchmark explicitly linked to the private credit market. JPMorgan Chase, Bank of America, Barclays, Deutsche Bank, Goldman Sachs, and Morgan Stanley are all distributing the product. These are not peripheral players hedging tail risks. These are the same institutions that have spent a decade co-investing in, lending to, and marketing the very asset class they now offer clients a streamlined mechanism to short.

That is the headline. The deeper story is more unsettling.

The Product Nobody Was Supposed to Need

Credit-default swaps are, at their most basic, financial insurance contracts — the buyer pays a premium; the seller compensates the buyer if a specified borrower defaults. They became infamous in 2008, when an entire shadow banking system imploded partly because CDS had been written so liberally, by parties with no direct exposure to the underlying risk, that protection was illusory rather than real. What is remarkable about the CDX Financials launch is not the instrument itself but what its very existence confesses: private credit has grown so large, so interconnected, and now so stressed that the market has concluded it needs — finally — a public, liquid, standardised mechanism to hedge against its unravelling.

According to S&P Dow Jones Indices, the new FINDX comprises 25 North American financial entities, including banks, insurers, real estate investment trusts, and business development companies (BDCs). Approximately 12% of the equally weighted index is tied to private credit fund managers — specifically Apollo Global Management, Ares Management, and Blackstone. The index rises in value as credit sentiment toward its constituent entities deteriorates. In practical terms: buy protection on FINDX, and you profit when the private credit ecosystem comes under pressure.
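The protection-buyer mechanics can be sketched with a toy mark-to-market calculation. The numbers below (notional, entry spread, risky duration) are illustrative assumptions, not FINDX terms; real index CDS pricing follows the ISDA standard model.

```python
# Toy mark-to-market for a CDS index protection buyer.
# Standard first-order approximation: P&L ≈ spread change × risky duration × notional.
# All inputs below are hypothetical, for illustration only.

def protection_buyer_pnl(notional: float, entry_spread_bp: float,
                         current_spread_bp: float, risky_duration: float) -> float:
    """Approximate P&L for a buyer of index protection as spreads move."""
    spread_change = (current_spread_bp - entry_spread_bp) / 10_000  # bp -> decimal
    return spread_change * risky_duration * notional

# Buy $10m of protection at an assumed 300bp; sentiment deteriorates to 450bp.
pnl = protection_buyer_pnl(10_000_000, 300, 450, risky_duration=4.2)
print(f"Protection buyer gain: ${pnl:,.0f}")  # ~$630,000 on these assumed inputs
```

The sign convention is the point: a protection buyer gains as spreads widen, which is why FINDX rises in value as credit sentiment toward its constituents deteriorates.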

Nicholas Godec, head of fixed income tradables and commodities at S&P Dow Jones Indices, described the launch as “the first instance of CDS linked to BDCs, thereby providing CDS linked to the private credit market.” That phrasing — careful, bureaucratic, almost bloodless — obscures the signal embedded in the timing.

The Numbers Behind the Anxiety

To understand why this product exists, you need to understand the scale and velocity of the stress currently moving through private credit. The numbers, as of Q1 2026, are striking.

The Financial Times reported that U.S. private credit fund investors submitted a total of $20.8 billion in redemption requests in the first quarter alone — roughly 7% of the approximately $300 billion in assets held by the relevant non-traded BDC vehicles. This is not a trickle. Carlyle’s flagship Tactical Private Credit Fund (CTAC) received redemption requests equivalent to 15.7% of its assets in Q1, more than three times its 5% quarterly limit. Carlyle, like many of its peers, honoured only the cap and deferred the rest. Blue Owl’s Credit Income Corp saw shareholders request withdrawals equivalent to 21.9% of its shares in the three months to March 31 — an extraordinary figure that prompted Moody’s to revise its outlook on the fund from stable to negative. Blue Owl, Blackstone, KKR, Apollo, and Ares have all faced redemption queues this cycle.
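The gate arithmetic behind those figures is worth making explicit. A minimal sketch, assuming a simple pro-rata fill against a quarterly cap (actual BDC gate mechanics vary by fund document):

```python
# Sketch of a quarterly redemption gate with pro-rata fills.
# Illustrative only; real funds define gates in their offering documents.

def apply_gate(requested_pct: float, cap_pct: float = 5.0):
    """Return (filled_pct, deferred_pct, fill_ratio) for one quarter.

    requested_pct: redemption requests as a % of NAV
    cap_pct:       quarterly redemption cap as a % of NAV
    """
    filled = min(requested_pct, cap_pct)
    deferred = requested_pct - filled
    fill_ratio = filled / requested_pct if requested_pct else 1.0
    return filled, deferred, fill_ratio

# A CTAC-style quarter: 15.7% of NAV requested against a 5% cap.
filled, deferred, ratio = apply_gate(15.7)
print(f"filled {filled}% | deferred {deferred:.1f}% | each holder gets {ratio:.0%}")
# Roughly: 5% filled, 10.7% deferred, about a third of each request honoured.
```

At a 15.7% request rate against a 5% cap, roughly two-thirds of requested capital rolls into the next quarter's queue, which is how redemption backlogs compound even when no single quarter breaches the cap by much.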

Moody’s has since downgraded its outlook on the entire U.S. BDC sector from “stable” to “negative” — a formal acknowledgement that what was once a bull-market darling is now contending with structural liquidity stresses that its semi-liquid product architecture was never fully designed to survive.

Meanwhile, the credit quality of the underlying loans is deteriorating in ways that the sector’s historical marketing materials simply did not anticipate. UBS strategists have projected that private credit default rates could rise by as much as 3 percentage points in 2026, far outpacing the expected 1-percentage-point rise in leveraged loans and high-yield bonds. Morgan Stanley has warned that direct lending default rates could surge as high as 8%, compared with a historical average of 2–2.5%. Payment-in-kind loans — where borrowers pay interest in additional debt rather than cash — are rising, a classic signal of borrowers under duress who are conserving liquidity at the expense of lender economics.

Perhaps most damning: in late 2025, BlackRock’s TCP Capital Corp reported that writedowns on certain portfolio loans reduced its net asset value by 19% in a single quarter.

The AI Dislocation: A Crisis Within the Crisis

No serious analysis of this stress cycle can ignore the role of artificial intelligence in accelerating it. Roughly 20% of BDC portfolio exposure, according to Jefferies research, is concentrated in software businesses — predominantly SaaS companies that private credit firms financed at generous valuations during the zero-interest-rate boom years. The rapid advance of AI tools capable of automating software workflows has sparked a brutal re-evaluation of those companies’ competitive moats, revenue durability, and, ultimately, their debt-service capacity.

Blue Owl, one of the largest direct lenders to the tech-software sector, has faced redemption requests that are — in the words of its own investor communications — reflective of “heightened negative sentiment towards direct lending” driven in part by AI-sector uncertainty. The irony is profound: private credit funds that rushed to finance the digital economy are now discovering that the same technological disruption they helped capitalise is undermining the creditworthiness of their borrowers.

This is not a transient sentiment shock. According to Man Group’s private credit team, private credit loans are originated with the “express purpose of being held to maturity.” That structural illiquidity — the attribute that was once marketed as a yield premium — is now the attribute that makes the sector’s stress harder to contain. When your borrowers are software companies facing existential competitive threats and your investors are retail wealth clients who were sold on liquidity promises, the collision produces exactly what we are now observing: gating, deferred redemptions, and a derivatives market emerging to price what the underlying funds cannot.

What Wall Street Is Really Saying

The CDX Financials launch is not merely a new product. It is a confession.

When the Wall Street Journal first reported the index’s development, analysts initially framed it as a neutral hedging tool — a risk management mechanism that sophisticated market participants had long wanted access to. And in the narrow technical sense, that framing is accurate. Hedge funds with concentrated exposure to BDC equity positions, pension funds with indirect private credit allocations, and banks with syndicated loan books have legitimate demand for an instrument that allows them to offset their exposure.

But consider the posture this represents. JPMorgan, Goldman Sachs, Morgan Stanley, and Barclays built, distributed, and marketed private credit products to institutional and retail clients throughout the 2015–2024 expansion. They collected billions in fees doing so. They celebrated the asset class’s growth — the private credit market has expanded to more than $3 trillion in AUM — as evidence of financial innovation serving real-economy borrowers who couldn’t access public markets. Those same institutions have now co-created a benchmark instrument whose primary utility is to profit, or hedge risk, when that market contracts.

This is not cynicism — it is rational risk management. But it is also a market signal of extraordinary clarity: the largest, best-informed participants in global credit markets have concluded that the probability-weighted downside in private credit is now large enough to justify the cost and complexity of derivative infrastructure. You do not build a CDX index for a market in good health.

Regulatory Fault Lines and the Retail Investor Problem

Perhaps the most underappreciated dimension of this crisis is distributional. Private credit’s expansion over the last decade was partly funded by a deliberate push by asset managers into the wealth management channel — retail and high-net-worth investors who were attracted by the yield premium over public credit and the low apparent volatility of funds that mark their assets infrequently and to model rather than to market.

That low apparent volatility, as analysts at Robert A. Stanger & Co. have pointed out, was partly a function of the valuation methodology rather than the underlying risk. BDCs in the non-listed space can appear stable in their net asset values right up until the moment they are not — and the quarterly redemption gates now being enforced create a first-mover advantage for those who recognise the stress earliest. Institutional investors — the “small but wealthy group” who have been demanding exits — have done exactly that. Retail investors, who typically receive quarterly statements and rely on fund managers’ own assessments of value, are disproportionately likely to be last out.

The Securities and Exchange Commission has been examining BDC valuation practices and the structural question of whether semi-liquid products are appropriately matched to the liquidity expectations of retail investors. The CDX Financials launch materially increases the regulatory pressure surface. It is considerably harder to argue that private credit is a stable, low-volatility asset class suitable for retail distribution when the major banks are simultaneously selling derivatives that facilitate bearish bets on its constituent managers.

The regulatory trajectory points toward tighter disclosure requirements on BDC valuation methodologies, stricter rules on redemption queue transparency, and potentially new suitability standards for the sale of semi-liquid alternatives to retail investors. None of these changes will arrive in time to protect those already queuing to exit.

The European and EM Dimension

The stress in U.S. private credit has a global undertow that commentary focused on Wall Street mechanics tends to underweight. European direct lenders — many of them subsidiaries or affiliates of the same U.S. managers now under pressure — have similarly expanded into software, healthcare services, and leveraged buyout financing across France, Germany, the Nordics, and the UK. The Bank for International Settlements has flagged the opacity and rapid growth of private credit in advanced economies as a potential systemic risk vector, precisely because the infrequent and model-dependent valuation of these assets makes cross-border contagion difficult to detect in real time.

Emerging market economies face a different but related challenge. Domestic sovereign and corporate borrowers who were priced out of traditional bank lending and public bond markets during periods of dollar strength and risk-off sentiment turned to private credit as an alternative source of capital. As U.S. private credit funds come under redemption pressure and face potential portfolio de-risking, the marginal withdrawal of credit availability to EM borrowers represents a secondary shock that will not appear in U.S. financial statistics but will very much appear in the economic data of the borrowing countries.

The CDX Financials, for now, is a North American product focused on North American entities. But if the private credit stress deepens, the transmission mechanism to European and EM markets will operate through the same channel it always does: abrupt, disorderly credit withdrawal by institutions that had presented themselves to borrowers as patient, relationship-oriented capital.

The 2026–2027 Outlook: Three Scenarios

Scenario one: Controlled decompression. The redemption pressure peaks in mid-2026 as Q1 earnings are digested, valuations are reset modestly, and AI sector concerns stabilise. The CDX Financials remains a niche hedging tool with modest trading volumes. Default rates rise but remain below 5%. Fund managers gradually improve their liquidity management frameworks, and the episode is remembered as a stress test that the sector passed — awkwardly, but passed.

Scenario two: Structural repricing. Default rates reach the 6–8% range forecast by Morgan Stanley. Fund managers are forced to sell assets to meet redemptions, creating mark-to-market pressure that triggers further investor withdrawals — a slow-motion version of the bank run dynamic. The CDX Financials becomes a liquid, actively traded instrument as hedge funds build short theses against specific managers. The SEC intervenes with new rules. The retail wealth channel for private credit permanently contracts, and the asset class re-professionalises toward institutional-only distribution.

Scenario three: Systemic cascade. A rapid confluence of AI-driven borrower defaults, leveraged BDC balance sheets, and sudden insurance company mark-to-market requirements — recall that insurers have become significant private credit allocators — creates a feedback loop that overwhelms the quarterly gate mechanisms. This scenario remains tail-risk rather than base case, but it is materially more probable today than it was eighteen months ago, and the CDX Financials market, whatever its current illiquidity, provides the mechanism through which this scenario’s probability will be priced in real time.

The Signal in the Noise

There is a temptation, in moments like this, to reach for the 2008 parallel — the credit-default swaps written on mortgage-backed securities, the opacity, the interconnection, the eventual reckoning. That parallel is not fully appropriate. Private credit, for all its stress, is not leveraged to the degree that pre-crisis structured finance was, and the counterparties on the other side of these loans are corporate borrowers rather than millions of individual homeowners facing income shocks. The system is not on the edge of a cliff.

But the more honest framing is this: private credit grew from approximately $500 billion to more than $3 trillion in a decade, fuelled by zero interest rates, a regulatory environment that pushed lending off bank balance sheets, and an institutional appetite for yield that sometimes outpaced rigour. It attracted retail investors on the promise of bond-like returns with equity-like stability. It financed technology businesses at valuations that assumed a competitive landscape that artificial intelligence is now radically disrupting. And it did all of this in a structure — the non-traded BDC, the evergreen fund — that made liquidity appear more plentiful than it was.
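For scale, the growth figures quoted above imply a compound annual growth rate of roughly 19–20%. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on the article's growth figures:
# roughly $0.5tn to $3tn of private credit assets over ten years.
start_tn, end_tn, years = 0.5, 3.0, 10
cagr = (end_tn / start_tn) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # roughly 19.6% a year
```

A sixfold expansion in a decade, sustained at nearly 20% a year, is the kind of growth rate that historically outruns underwriting discipline.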

The CDX Financials is what happens when the market runs the numbers on all of that and concludes it wants an exit option. For investors still inside these funds, that signal deserves very careful attention.

Conclusion: What Sophisticated Investors Should Do Now

The launch of private credit derivatives is not, by itself, a crisis. It is a maturation — the belated arrival of price discovery infrastructure into a corner of credit markets that had, until now, avoided the bracing discipline of public market scrutiny. In that sense, the CDX Financials is a healthy development. Transparency, even painful transparency, is preferable to opacity.

But for investors with allocations to non-traded BDCs, evergreen private credit funds, or insurance products with significant private credit exposure, several questions now demand answers that fund managers may be reluctant to provide. What is the true liquidity profile of the underlying loan portfolio? What percentage of the portfolio is in payment-in-kind status? How much of the nominal NAV reflects model-based valuations that have not been stress-tested against the current AI-driven sector disruption? And — most importantly — what is the fund’s plan if redemption requests in Q2 and Q3 2026 do not moderate?

The banks selling CDX Financials protection have already decided how to answer those questions for their own books. Investors would do well to ask the same questions of their own.



Copyright © 2025 The Economy, Inc. All rights reserved.
