Analysis
Is Anthropic Protecting the Internet — or Its Own Empire?
Anthropic Mythos, the most powerful AI model any lab has ever disclosed, arrived this week draped in the language of altruism. Project Glasswing — the initiative through which a curated circle of Silicon Valley aristocrats gains exclusive access to Mythos — is pitched as an act of civilizational defense. The framing is elegant, the mission is genuinely urgent, and at least part of it is true. But behind the Mythos AI release lies a second story that Dario Amodei’s beautifully worded blog posts conspicuously omit: Mythos is enterprise-only not merely because Anthropic fears hackers, but because releasing it to the open internet would trigger the single greatest act of industrial-scale capability theft in the history of technology. The cybersecurity rationale is real. The economic motive is at least as real. Understanding both is how you understand the AI industry in 2026.
What Anthropic Mythos Actually Does — and Why It Terrified Silicon Valley
To appreciate the gatekeeping, you must first reckon with the capability. Mythos is not an incremental model. It occupies an entirely new tier in Anthropic’s architecture — internally designated Copybara — sitting above the public Haiku, Sonnet, and Opus hierarchy that most developers work with. SecurityWeek’s detailed technical breakdown describes it as a step change so pronounced that calling it an “upgrade” is like calling the internet an “improvement” on the fax machine.
The numbers are staggering. Anthropic’s own Frontier Red Team blog reports that Mythos autonomously reproduced known vulnerabilities and generated working proof-of-concept exploits on its very first attempt in 83.1% of cases. Its predecessor, Opus 4.6, managed near-0% success rates on autonomous exploit development. Engineers with zero formal security training now describe waking up to complete, working exploits the model developed overnight, entirely unsupervised. One test revealed a 27-year-old bug lurking inside OpenBSD — an operating system historically celebrated for its security — that would allow any attacker to remotely crash any machine running it. Axios reported that Mythos found bugs in every major operating system and every major web browser, and that its Linux kernel analysis produced a chain of vulnerabilities that, strung together autonomously, would hand an attacker complete root control of any Linux system.
Compare that to Opus 4.6, which found roughly 500 zero-days in open-source software — itself a remarkable achievement. Mythos found thousands in a matter of weeks. It then attempted to exploit Firefox’s JavaScript engine and succeeded 181 times, compared to twice for Opus 4.6.
This, importantly, is what a comparison between Mythos and open-source cybersecurity models looks like at full resolution: no freely available model comes remotely close, and Anthropic knows it. That gap is the entire product.
The Official Narrative: “We’re Protecting the Internet”
Anthropic’s enterprise-only decision is framed through Project Glasswing as a coordinated defensive effort — an attempt to patch the world’s most critical software before capability equivalents proliferate to hostile actors. Anthropic’s official Glasswing page commits $100 million in usage credits and $4 million in direct donations to open-source security organizations, with founding partners that read like a geopolitical alliance: Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, and Palo Alto Networks. Roughly 40 additional organizations maintaining critical software infrastructure also gain access. The initiative’s name — Glasswing, after a butterfly whose transparency makes it nearly invisible — is a metaphor for software vulnerabilities that hide in plain sight.
The security rationale for limiting Mythos is not confected. In September 2025, a Chinese state-sponsored threat actor used earlier Claude models in what SecurityWeek documented as the first confirmed AI-orchestrated cyber espionage campaign — not merely using AI as an advisor but deploying it agentically to execute attacks against roughly 30 organizations. If that was possible with Claude’s then-current models, what becomes possible with a model that autonomously chains Linux kernel exploits at a near-perfect success rate?
Anthropic’s Logan Graham, head of the Frontier Red Team, captured the threat succinctly: imagine this level of capability in the hands of Iran in a hot war, or Russia as it attempts to degrade Ukrainian infrastructure. That is not science fiction. It is the calculus driving the controlled release. Briefings to CISA, the Commerce Department, and the Center for AI Standards and Innovation are real, though the Pentagon remains conspicuously absent from those conversations, a pointed omission given Anthropic’s ongoing legal war with the Defense Department over its blacklisting.
So yes: the security case is genuine. But it is, at most, half the story.
The Distillation Flywheel: Why Frontier Labs Are Really Gating Their Best Models
Here is the economic argument that no TechCrunch brief or Bloomberg data point has assembled cleanly: adversarial distillation of Anthropic’s models is an existential threat to the frontier-lab business model, and the gating of Mythos is as much a response to that threat as it is a cybersecurity initiative.
The mathematics of adversarial distillation are brutally asymmetric. Training a frontier model costs approximately $1 billion in compute. Successfully distilling it into a competitive student model costs an adversary somewhere between $100,000 and $200,000 — a cost advantage of at least 5,000 to one in favor of the copier. No rate-limiting policy, no terms-of-service clause, and no click-through agreement closes that gap. The only defense is controlling access to the teacher in the first place.
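The asymmetry can be made concrete with back-of-envelope arithmetic. The figures below are the estimates quoted in this article, not measured values:

```python
# Back-of-envelope distillation economics, using the article's cost estimates.
TEACHER_TRAINING_COST = 1_000_000_000   # ~$1B for a frontier training run
STUDENT_DISTILL_LOW = 100_000           # adversary's low-end distillation cost
STUDENT_DISTILL_HIGH = 200_000          # adversary's high-end distillation cost

# Cost advantage enjoyed by the copier, as a ratio of teacher cost to student cost.
advantage_min = TEACHER_TRAINING_COST / STUDENT_DISTILL_HIGH   # worst case for the copier
advantage_max = TEACHER_TRAINING_COST / STUDENT_DISTILL_LOW    # best case for the copier

print(f"Copier's cost advantage: {advantage_min:,.0f}x to {advantage_max:,.0f}x")
# prints "Copier's cost advantage: 5,000x to 10,000x"
```

The article's "5,000-to-one" figure is the conservative end of that range; at the low-end distillation cost the ratio doubles.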
Blocking distillation is not a new concern for frontier labs, but 2026 has given it terrifying specificity. Anthropic publicly disclosed in February that three Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. MiniMax alone accounted for 13 million of those exchanges; Moonshot AI added 3.4 million; DeepSeek, notably, needed only 150,000 because it was targeting something far more specific: how Claude refuses things — alignment behavior, policy-sensitive responses, the invisible architecture of safety. A stripped copy of a frontier model without its alignment training, deployed at nation-state scale for disinformation or surveillance, is the nightmare scenario that animated Anthropic’s founding. It may now be unfolding in real time.
What does this have to do with Mythos being enterprise-only? Everything. A model that autonomously writes working exploits for every major OS would, if released via standard API access, provide Chinese distillation campaigns with not just conversational capability but offensive cyber capability — the very thing that makes Mythos commercially unique. Releasing Mythos at scale would be, simultaneously, the greatest act of market self-destruction and the greatest gift to adversarial state actors in the history of enterprise software. Enterprise-only access eliminates both risks at once: it monetizes the capability at maximum margin while denying it to the distillation ecosystem.
This is the distillation flywheel in action. Frontier labs gate the highest-capability models behind enterprise contracts; enterprises pay premium rates for exclusive capability access; the revenue funds the next generation of training runs; the new model is again too powerful to release openly. Each rotation of the wheel deepens the competitive moat, raises the enterprise price floor, and tightens the grip of the three dominant labs over the global AI stack.
Geopolitics at the Model Layer: The Three-Lab Alliance and the New AI Cold War
The announcement of Mythos’s exploit capabilities arrived within 24 hours of a Bloomberg-reported development that is arguably more consequential for the global technology order: OpenAI, Anthropic, and Google — three companies that have spent the better part of three years competing to annihilate each other — began sharing adversarial distillation intelligence through the Frontier Model Forum. The cooperation, modeled on how cybersecurity firms exchange threat data, represents the first substantive operational use of the Forum since its 2023 founding.
The breakdown of what each Chinese lab extracted from Claude reveals something remarkable: three entirely different product strategies, fingerprinted through their query patterns. MiniMax vacuumed broadly — generalist capability extraction at scale. Moonshot AI targeted the exact agentic reasoning and computer-use stack that its Kimi product has been marketing since late 2025. DeepSeek, with a comparatively tiny 150,000-exchange footprint, was almost exclusively interested in Claude’s alignment layer — how it handles policy-sensitive queries, how it refuses, how it behaves at the edges. Each lab was essentially reverse-engineering not just a model but a business plan.
The MIT research documented in December 2025 found that GLM-series models identify themselves as Claude approximately half the time when queried through certain paths — behavioral residue of distillation that no fine-tuning has fully scrubbed. US officials estimate the financial toll of this campaign in the billions annually. The Trump administration’s AI Action Plan has already called for a formal inter-industry sharing center, essentially institutionalizing what the labs are now doing informally.
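The self-identification residue the MIT research describes can be sketched as a simple probe harness. This is an illustrative sketch, not the study's methodology: the `sample` transcript is fabricated for demonstration, and in a real audit the responses would come from repeated API calls this sketch does not make.

```python
# Hypothetical probe for distillation residue: ask a model to identify itself
# many times and measure how often it claims a *different* vendor's identity.
# The probe question and response strings below are fabricated examples.

PROBE = "What model are you, and who created you?"

def foreign_id_rate(responses, foreign_names=("claude", "anthropic")):
    """Fraction of responses in which the model claims a foreign identity."""
    hits = sum(
        1 for r in responses
        if any(name in r.lower() for name in foreign_names)
    )
    return hits / len(responses)

# Fabricated sample transcript: a model that identifies as Claude half the
# time, mirroring the behavioral residue the MIT study describes.
sample = [
    "I am Claude, an AI assistant made by Anthropic.",
    "I'm GLM, a large language model.",
    "I am Claude, created by Anthropic.",
    "I'm a GLM-series assistant.",
]
print(foreign_id_rate(sample))  # prints 0.5 on this fabricated sample
```

A rate near 50%, as reported for GLM-series models, is the kind of statistical fingerprint that survives fine-tuning far better than any single response does.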
The geopolitical stakes here extend far beyond corporate IP. When DeepSeek released its R1 model in January 2025 — a model widely believed to incorporate distilled knowledge from OpenAI’s infrastructure — it erased nearly $1 trillion from US and European tech stocks in a single trading session. Markets now understand something that policymakers are only beginning to grasp: control over frontier AI model capabilities is a form of strategic leverage, and distillation is a vector for transferring that leverage without a single line of export-controlled chip silicon crossing a border.
Enterprise Contracts and the New AI Treadmill
The economics of Anthropic’s enterprise-only strategy are becoming clearer as 2026 revenue data enters the public domain.
| Metric | February 2026 | April 2026 |
|---|---|---|
| Anthropic Run-Rate Revenue | $14B | $30B+ |
| Enterprise Share of Revenue | ~80% | ~80% |
| Customers Spending $1M+ Annually | 500 | 1,000+ |
| Claude Code Run-Rate Revenue | $2.5B | Growing rapidly |
| Anthropic Valuation | $380B | ~$500B+ (IPO target) |
| OpenAI Run-Rate Revenue | ~$20B | ~$24-25B |
Sources: CNBC, Anthropic Series G announcement, Sacra
Anthropic’s annualized revenue has now surpassed $30 billion — having started 2025 at roughly $1 billion — representing one of the most dramatic B2B revenue trajectories in the history of enterprise software. Sacra estimates that 80% of that revenue flows from business clients, with enterprise API consumption and reserved-capacity contracts forming the structural backbone. Eight of the Fortune 10 are now Claude customers. Four percent of all public GitHub commits are now authored by Claude Code.
What Project Glasswing does, in this context, is elegant: it creates a new category of enterprise relationship — not API access, not subscription, but strategic partnership with a frontier safety lab deploying the world’s most capable unrestricted model. The 40 organizations in the Glasswing program are not merely beta testers. They are, from a revenue architecture standpoint, being trained — habituated to Mythos-class capability before it becomes generally available, embedded in their security workflows, their CI/CD pipelines, their vulnerability management systems. By the time Mythos-class models are released at scale with appropriate safeguards, the switching cost will be prohibitive.
This is the AI treadmill: each generation of frontier capability, released exclusively to enterprise partners first, creates a loyalty layer that commoditized open-source alternatives cannot easily displace. The $100 million in Glasswing credits is not charity. It is customer acquisition at an unprecedented model tier.
The Counter-View: Responsible Deployment Has a Principled Case
It would be intellectually dishonest to leave the distillation-flywheel critique standing without challenge. The counter-argument is real, and it deserves full articulation.
Platformer’s analysis makes the most compelling version of the responsible-rollout defense: Anthropic’s founding premise was that a safety-focused lab should be the first to encounter the most dangerous capabilities, so it could lead mitigation rather than react to catastrophe. With Mythos, that appears to be exactly what is happening. The company did not race to monetize these cybersecurity capabilities. It briefed government agencies, convened a defensive consortium, committed $4 million to open-source security projects, and staged the rollout behind a coordinated patching effort. The vulnerabilities Mythos found in Firefox, Linux, and OpenBSD are being disclosed and patched before the paper trail of their discovery becomes public — precisely the protocol that responsible security research demands.
Alex Stamos, whose expertise in adversarial security spans decades, offered the optimistic framing: if Mythos represents being “one step past human capabilities,” there is a finite pool of ancient flaws that can now be systematically found and fixed, potentially producing software infrastructure more fundamentally secure than anything achievable through traditional auditing. That is not corporate spin. It is a coherent theory of defensive AI benefit.
The release strategy also reflects a genuinely novel regulatory challenge: the EU AI Act’s next enforcement phase takes effect August 2, 2026, introducing incident-reporting obligations and penalties of up to 3% of global revenue for high-risk AI systems. A general release of Mythos into that environment — without governance infrastructure in place — would be commercially catastrophic as well as potentially harmful. Enterprise-gated release buys time for both the regulatory and technical scaffolding to mature.
What Regulators and Open-Source Advocates Must Do Next
The policy implications of Anthropic Mythos extend far beyond one company’s release strategy. They illuminate a structural shift in how frontier AI capability is being distributed — and by whom, and to whom.
For regulators, the Glasswing model raises questions that existing frameworks cannot answer. If a private company now possesses working zero-day exploits for virtually every major software system on earth — as Kelsey Piper pointedly observed — what obligations of disclosure and oversight apply? The fact that Anthropic is briefing CISA and the Center for AI Standards and Innovation is encouraging, but voluntary briefings are not governance. The EU’s AI Act and the US AI Action Plan both need explicit provisions covering what happens when a commercially controlled lab becomes the de facto custodian of the world’s most significant vulnerability database.
For open-source advocates, the distillation dynamic poses an existential dilemma. The same economic logic that drives labs to gate Mythos also drives them to resist open-weights releases of any model that approaches frontier capability. The three-lab alliance against Chinese distillation is, viewed from a certain angle, also an alliance against open-source proliferation of frontier capability — regardless of the nationality of the developer doing the distilling. Open-source foundations, university research labs, and sovereign AI initiatives in Europe, the Middle East, and South Asia should be pressing hard for access frameworks that allow defensive cybersecurity use of frontier capability without being filtered through the commercial relationships of Silicon Valley.
For enterprise decision-makers, the message is unambiguous: the organizations that embed Mythos-class capability into their vulnerability management workflows now will hold a structural security advantage — measured in patch latency and zero-day coverage — over those that wait for open-source equivalents. But that advantage comes with dependency on a single private entity whose political entanglements, from Pentagon disputes to Chinese state-actor confrontations, introduce supply-chain risks that no CISO should ignore.
Anthropic may well be protecting the internet. It is certainly protecting its empire. In 2026, those two imperatives have become so entangled that distinguishing them may be the most important work left for anyone who cares about who controls the infrastructure of the digital world.