Gwynne Shotwell’s Moonshot: How SpaceX Plans to Build AI Data Centers in Orbit and Manufacture Satellites on the Lunar Surface
The woman behind history’s most valuable private company is steering a $1.25-trillion enterprise toward a future where artificial intelligence lives in space — and is built on the Moon.
On a Friday morning in February, inside a building roughly the size of sixteen football fields, the air smells of stainless steel and ambition. Eighteen Starship spacecraft line the gleaming white floor of SpaceX’s Starfactory in Starbase, Texas — some nothing more than enormous cylindrical barrels, nearly 30 feet across, awaiting their destinies. Others stand fully assembled, tapered nosecones already fitted, ready to be lifted atop their towering first-stage boosters to form a rocket that, at 40 stories, dwarfs every launch vehicle in history. Walking a high catwalk above this cathedral of engineering, surveying the controlled chaos below, is Gwynne Shotwell — President and COO of SpaceX, nearly 24 years into her tenure, and now the operational commander of what has quietly become the most consequential company on Earth.
“By 2028,” she says, casting her gaze across the factory floor, “these should be long gone. They better have flown by then.”
That sentence carries more weight than it might seem. Because buried inside it — inside every weld seam and stainless-steel barrel on that factory floor — is a plan to reshape not just how humanity reaches space, but what humanity does once it gets there. Shotwell and SpaceX are not simply building rockets. They are constructing the physical infrastructure for a new civilization’s computing backbone: artificial intelligence data centers in orbit, satellite manufacturing plants on the Moon, and a trillion-dollar company preparing to go public in what will likely be the largest IPO in capital markets history.
The Gwynne Shotwell AI Moon strategy is no longer a vision statement. It is an engineering program.
From Employee No. 7 to the World’s Most Valuable Company
Shotwell joined SpaceX in 2002 as its seventh employee, having persuaded a young Elon Musk over a cocktail-party conversation that his fledgling rocket venture desperately needed someone to sell it to the world. She was right then, and she has been right about most things since. Over more than two decades, she transformed SpaceX from an eccentric California startup that nearly went bankrupt in 2008 into a $1.25-trillion enterprise that dominates commercial launch, operates the world’s largest satellite constellation, and holds multi-billion-dollar contracts with both NASA and the U.S. Department of Defense.
The metrics alone are staggering. SpaceX’s Falcon 9 has now completed more than 630 successful launches, including a record 165 flights in 2025 alone. Starlink, the satellite internet service Shotwell championed from early ideation, now serves over 9.2 million active subscribers globally and generated more than $10 billion in revenue last year. The company reported approximately $16 billion in total revenue for 2025 and, according to Reuters, profit approaching $8 billion — numbers that would place it comfortably among the most profitable technology companies in the world, if it were public.
As of February 2026, it is becoming something larger. On February 2, SpaceX announced a landmark merger with xAI, Elon Musk’s artificial intelligence company, in an all-stock deal that valued the combined entity at $1.25 trillion — the largest private merger in recorded history. With a targeted IPO valuation now approaching $1.75 trillion, SpaceX is preparing to file its S-1 prospectus for a June 2026 listing that analysts expect to raise more than $75 billion, shattering Saudi Aramco’s $29.4 billion record from 2019.
Shotwell’s role is expanding accordingly. “It will morph over time,” she told TIME, “which is how my role has always gone.”
That is a characteristically understated way of describing what amounts to the operational merger of the world’s most powerful launch infrastructure with one of the most capable AI research programs on the planet. NASA Administrator Bill Nelson once said of Musk: “One of the most important decisions he made is he picked a president named Gwynne Shotwell. She runs SpaceX. She is excellent.” The coming years will test that excellence at a scale no executive in aerospace has ever faced.
The Convergence: Why SpaceX Needed xAI, and Vice Versa
To understand why Musk structured this merger — and why Shotwell is now driving its integration — you need to understand what AI actually needs, and what AI actually costs.
Global data center electricity consumption is projected to exceed 1,000 terawatt-hours in 2026, nearly double what it was just four years ago. A January 2026 report by Bloom Energy projects that U.S. data centers’ combined power demand will nearly double between 2025 and 2028, from 80 to 150 gigawatts — roughly equivalent to adding Spain’s entire electricity consumption in just three years. Goldman Sachs projects that data center power consumption will push core inflation up by 0.1 percent in both 2026 and 2027, as capacity market prices in key grid regions spike tenfold. Water is equally strained: AI data centers consume billions of gallons annually for cooling, concentrated precisely in the driest American regions, where solar power is abundant.
This is not a minor inefficiency. It is a civilizational bottleneck.
Musk identified it publicly at the World Economic Forum in Davos in January: “The lowest-cost place to put AI will be in space, and that will be true within two years, maybe three at the latest.” Over the past three weeks, SpaceX has filed plans with the FCC for what amounts to a million-satellite data-center network. Shotwell confirmed in her TIME interview that she is “surprised it got little news” — an observation that speaks to how dramatically the mainstream press has underestimated the technical and economic substance of this plan.
The physics of orbital computing are compelling. According to a Starcloud whitepaper referenced by the World Economic Forum, a solar array in a dawn-dusk sun-synchronous orbit can generate over five times the energy of an equivalent array on Earth, achieving a capacity factor above 95 percent compared to just 24 percent for terrestrial solar farms. Cooling — the other existential constraint on data centers — changes character entirely: the background temperature of deep space is about −270 degrees Celsius (roughly 3 kelvin), so waste heat can be radiated away passively, eliminating energy-intensive chillers and fresh-water cooling systems. According to IEEE Spectrum analysis, one architecture envisions a 240-kilowatt satellite housing two GPU racks with 144 processors, networked across 4,300 satellites to deliver a gigawatt of computing power.
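A rough Stefan-Boltzmann calculation shows what "passive" cooling implies for hardware sizing. The sketch below estimates the radiator area needed to reject the 240 kilowatts cited above; the radiator temperature, emissivity, and two-sided panel are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope radiator sizing for a 240 kW orbital compute satellite.
# Assumptions (illustrative, not from the article): radiator surface at
# 300 K, emissivity 0.9, radiating from both faces, with the ~3 K sky
# treated as a perfect heat sink.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_w, temp_k=300.0, emissivity=0.9, faces=2):
    """Panel area needed to passively reject `heat_w` watts to deep space."""
    flux_per_face = emissivity * SIGMA * temp_k**4  # W per m^2 per face
    return heat_w / (flux_per_face * faces)

area = radiator_area_m2(240_000)
print(f"Radiator area for 240 kW: {area:.0f} m^2")  # ~290 m^2
```

The answer, on the order of 300 square meters, explains the "large metallic panels" these designs call for: passive cooling costs no energy, but it costs area and therefore mass.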
For SpaceX, the logic is circular in the most profitable possible way. Shotwell put it plainly: “Starlink basically created this incredible demand for Falcon 9, and the AI satellites will do the same for Starship launches.” The more AI satellites SpaceX needs to launch, the more Starships must fly. The more Starships fly, the cheaper and more reliable each flight becomes. The cheaper each flight becomes, the more economically rational it is to move computing infrastructure to orbit. It is a flywheel that no other company on Earth has the launch capacity to spin.
The Technical Architecture: What a SpaceX Orbital Data Center Actually Looks Like
The FCC filing for up to one million AI satellites is not a placeholder. It reflects a specific engineering vision that has been taking shape inside both SpaceX and xAI since at least mid-2025.
The satellites themselves are conceptually distinct from Starlink’s existing broadband mesh. Rather than routing internet traffic between ground stations and end users, these AI satellites would function as distributed compute nodes — effectively, server farms in orbit. Each would carry specialized processing hardware, draw on continuous solar generation, and radiate waste heat passively into deep space through large metallic panels. Their orbital positioning would be optimized not primarily for latency to ground users, but for inter-satellite laser communication links that minimize the lag between compute nodes.
The merger with xAI provides the software layer: Grok’s large language models, reasoning engines, and inference systems would run natively on this distributed space-based architecture. The integration of Starlink’s global satellite mesh with xAI’s language models is explicitly designed to move massive compute workloads into space to exploit continuous solar energy and natural radiative cooling. This reframes the entire competitive landscape for SpaceX. The company would no longer be competing with Boeing or Lockheed Martin for launch contracts. It would be competing with — and potentially undercutting — Microsoft Azure, Amazon Web Services, and Google Cloud, while being the only provider on Earth that controls launch vehicles, satellite hardware, and the AI models running on top of them.
The Lunar Gambit: Mass Drivers, Mining, and Manufacturing on the Moon
If the orbital AI constellation sounds audacious, the lunar vision that follows is genuinely unprecedented in the history of industrial planning.
Shotwell’s preferred scenario — which she describes as achievable “ideally in five years” — involves constructing a manufacturing base on the lunar surface capable of producing AI satellites from materials mined on the Moon. The gravitational physics are the core argument: lunar escape velocity is about 2.4 kilometers per second against Earth’s 11.2, and because kinetic energy scales with the square of velocity, launching a payload off the Moon requires roughly one-twentieth the energy per kilogram. Mass drivers — electromagnetic catapults that accelerate cargo along a track before releasing it into space — would serve as the primary launch mechanism, since the Moon’s lack of atmosphere eliminates aerodynamic drag entirely. The combination of locally sourced materials, in-situ manufacturing, and electromagnetic launch could reduce the effective cost of deploying each AI satellite by an order of magnitude compared to Earth-based production and Starship-based launch.
“If we’re building these satellites on the Moon with elements and materials from the Moon,” Shotwell told TIME, “it would be much faster and cheaper to launch them.”
This is not science fiction. The Moon’s regolith contains silicon, aluminum, iron, titanium, and oxygen in exploitable concentrations. Semiconductor fabrication from lunar silicon is technically challenging but not physically impossible. The governance question — who regulates a private lunar manufacturing base, and under what legal framework — remains genuinely unresolved; Shotwell acknowledged as much in her TIME interview. “It’s a great question,” she said of how a lunar city might be governed, “and I don’t know the answer.”
That honesty is telling. SpaceX is moving faster than the regulatory frameworks designed to constrain it, which is both its greatest competitive advantage and its most significant long-term liability.
The Artemis Alignment: Moon First, Mars Later
The lunar manufacturing vision intersects with a more immediate program: NASA’s Artemis initiative to return humans to the Moon. SpaceX’s Starship is the designated Human Landing System (HLS) for Artemis IV, currently targeting a crewed touchdown in early 2028. “It’s a hard problem and the whole architecture is complex,” Shotwell said, “but we’re gunning for 2028.”
Standing on the Starfactory catwalk and gesturing at the assembled vehicles below, she added: “By 2028, these should be long gone. They better have flown by then.”
The strategic logic of prioritizing the Moon over Mars — a subtle but significant shift from SpaceX’s founding narrative — is now explicit. Musk himself has described the near-term focus as a “self-growing city on the Moon” achievable within a decade, while Shotwell carefully insists the Mars vision has not been abandoned. What has changed is sequencing: the Moon offers both a near-term demonstration platform for SpaceX’s infrastructure capabilities and a potential manufacturing base that could dramatically accelerate the Mars timeline.
The geopolitical dimension of this sequencing deserves underscoring. China’s lunar ambitions are advancing on a parallel track: the China National Space Administration has targeted a crewed lunar landing by 2030 and has announced its intention to establish a permanent lunar research station by 2035. The industrial and strategic implications of whichever nation — or private entity — first establishes durable manufacturing infrastructure on the Moon are difficult to overstate. Control of the Moon’s resources, particularly water ice at the poles that could be converted to rocket propellant, could determine the economics of deep space access for decades.
Starship: The Machine That Makes It Possible
None of this is achievable without Starship — and Starship, in 2026, is finally becoming real.
Eleven uncrewed Starships have been launched since 2023, each producing 16.7 million pounds of thrust from its 33 first-stage engines — more than double the ground-shaking power of the Apollo-era Saturn V. The Super Heavy booster’s catch system — whereby the launch tower’s mechanical arms literally catch the returning booster mid-air — has now been demonstrated successfully, representing arguably the most dramatic reusability achievement in aerospace history.
| Vehicle | First Stage Thrust | Payload to LEO | Reusability |
|---|---|---|---|
| SpaceX Starship | 16.7 million lb (33 engines) | ~150 tonnes (target) | Full stack reusable |
| Saturn V | ~7.9 million lb (5 engines) | 130 tonnes | Expendable |
| SpaceX Falcon 9 | ~1.7 million lb (9 engines) | 22.8 tonnes | Booster reusable |
| United Launch Alliance Vulcan | ~1.1 million lb (2 BE-4 engines, core stage) | 27.2 tonnes | Expendable |
Starship’s payload capacity and full reusability are what make the orbital AI constellation economically conceivable. A single Starship mission can deliver dozens of satellites simultaneously; with rapid reuse, the marginal cost per kilogram continues to fall toward targets that would have seemed fantastical a decade ago. The same dynamic Shotwell credits for Falcon 9’s reliability gains — Starlink’s internal launch demand — applies equally to what AI satellite demand will do for Starship: the production pressure of one million AI satellites is not a bug in the plan. It is the reliability engine.
Challenges, Risks, and the Skeptics’ Case
To engage seriously with this vision requires engaging seriously with its obstacles.
Launch economics at scale: Even with SpaceX driving down costs, launching hardware into orbit still runs roughly $1,500 per kilogram. A functional AI satellite with meaningful compute density — two GPU racks plus the solar array and radiators to support them, as in the IEEE architecture — would plausibly weigh a tonne or more. At current prices, scaling to one million satellites is a proposition measured in trillions of dollars before manufacturing costs are counted.
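The arithmetic behind that claim is worth making explicit. The per-satellite mass here is an assumption for illustration; only the $1,500-per-kilogram figure and the one-million-satellite fleet size come from the text above.

```python
# Illustrative launch-cost arithmetic. SAT_MASS_KG is an assumption;
# the cost per kilogram and fleet size are the figures cited above.
COST_PER_KG = 1_500      # USD per kg to orbit, rough current figure
SAT_MASS_KG = 1_000      # assumption: ~1 tonne per compute satellite
FLEET_SIZE = 1_000_000   # the FCC filing's upper bound

total_usd = COST_PER_KG * SAT_MASS_KG * FLEET_SIZE
print(f"Launch costs alone: ${total_usd / 1e12:.1f} trillion")  # $1.5 trillion
```

At $1.5 trillion in launch costs for a one-tonne satellite, every halving of either launch price or satellite mass moves the total by hundreds of billions of dollars, which is why both the Starship cost curve and lunar manufacturing matter to the business case.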
Latency: Signals traveling to low Earth orbit and back introduce delays of roughly 20-40 milliseconds — manageable for most workloads, but potentially problematic for real-time inference applications. For geostationary orbit, round-trip latency approaches 240 milliseconds, which is genuinely prohibitive for many AI use cases.
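The latency figures above can be sanity-checked against the speed of light. This sketch computes only the physics-limited propagation floor for a satellite directly overhead; real-world latency adds routing, queuing, and processing, which is why observed LEO latency sits at 20-40 milliseconds rather than at the floor. The 550-kilometer altitude is an assumption typical of LEO broadband shells.

```python
# Minimum round-trip propagation delay to a satellite directly overhead,
# ignoring routing, queuing, and processing (real latencies are higher).
C = 299_792_458  # speed of light in vacuum, m/s

def round_trip_ms(altitude_km):
    """Straight up-and-down round trip at the speed of light, in ms."""
    return 2 * altitude_km * 1_000 / C * 1_000

print(f"LEO (550 km assumed): {round_trip_ms(550):.1f} ms")  # ~3.7 ms floor
print(f"GEO (35,786 km):      {round_trip_ms(35_786):.0f} ms")  # ~239 ms
```

The calculation also confirms the asymmetry in the text: LEO's propagation floor is negligible for most workloads, while GEO's ~240 milliseconds is imposed by geometry and cannot be engineered away.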
Radiation hardening: Consumer-grade semiconductors degrade rapidly in orbit’s radiation environment. Radiation-hardened components cost significantly more and typically lag terrestrial chips by several generations in computational efficiency.
Space traffic: Shotwell acknowledged the debris concern in her TIME interview, comparing 30,000 satellites to 30,000 cars — sparse if positions are known and communicated. But 1 million satellites is an order of magnitude beyond anything currently in orbit, and regulators at the FCC, ITU, and equivalent bodies in other countries will scrutinize collision-avoidance architecture rigorously.
Governance and geopolitics: A private lunar manufacturing base operated by a U.S. company raises profound questions under the Outer Space Treaty of 1967, which prohibits national appropriation of the Moon but is silent on private resource extraction. The legal framework is evolving, and SpaceX’s first-mover advantage may crystallize before international consensus does — which is precisely what competitors in Beijing are calculating.
The skeptics within the technical community are not wrong to raise these objections. Fortune’s reporting found that while Musk and some bulls argue space-based AI could become cost-effective within a few years, many experts say meaningful scale remains decades away. One COO of a terrestrial data center company put it bluntly: “Putting the servers in orbit is a stupid idea.” But that same Fortune piece noted the counterpoint that carries more historical weight: “You shouldn’t bet against Elon.” In 2002, putting a reusable rocket on a pad in Texas seemed equally stupid. In 2026, it is the global standard for commercial launch.
The IPO and the Economic Stakes
When SpaceX goes public — likely in June 2026, at a valuation that may reach $1.75 trillion — investors will not simply be buying a rocket company. They will be buying a thesis about where computation goes next.
SpaceX generated approximately $16 billion in revenue in 2025 with EBITDA of roughly $7.5 billion, with analysts projecting $23.8 billion in 2026 revenue. The Starlink business unit, with its 9.2 million paying subscribers and near-monopoly on high-performance satellite broadband in dozens of markets, is already functioning as a cash-generative telecommunications utility. The xAI integration adds an AI product layer — Grok and the inference infrastructure behind it — and, more importantly, the strategic rationale for deploying that compute into orbit.
The IPO structure is expected to include dual-class shares, maintaining Musk’s voting control while accessing public capital. Retail investors are reportedly being allocated up to 30 percent of shares — three times the Wall Street standard — a decision that reflects both populist branding and practical recognition that the SpaceX story resonates most powerfully with individuals who have watched it unfold in real time.
For the broader space economy, the public offering has catalytic implications. Morgan Stanley has estimated the total space economy could reach $1 trillion annually by 2040; SpaceX’s IPO will function as a pricing signal for every space-adjacent startup, satellite operator, and launch services competitor in the world.
Future Scenarios: Three Trajectories for the SpaceX AI Moon Strategy
Scenario A — Compressed timeline (2028–2031): Starship achieves full reusability and high cadence by 2028, enabling Artemis IV crewed Moon landing and initial Starlink V3/AI satellite deployment. Lunar base groundbreaking by 2030, first in-situ manufactured AI satellites launched from the Moon by 2031. Combined SpaceX entity becomes the world’s most valuable company by market capitalization, displacing Apple or Nvidia.
Scenario B — Extended timeline (2031–2036): Technical setbacks in Starship development — orbital refueling complexity, heat shield durability, booster cadence — push timelines out by three to five years. AI constellation reaches 100,000 satellites by 2032, lunar manufacturing by 2035. SpaceX remains dominant but faces meaningful competition from Amazon’s Project Kuiper and Blue Origin’s New Glenn.
Scenario C — Regulatory disruption: International coordination on space traffic and lunar governance hardens into binding treaty obligations that constrain private resource extraction and orbital congestion. A major collision event in low Earth orbit triggers FCC and ITU responses that throttle the AI satellite constellation before it reaches scale. SpaceX pivots toward terrestrial AI infrastructure, leveraging xAI’s software capabilities rather than its orbital ambitions.
Most analysts consider Scenario B the base case. Scenario A, as SpaceX’s history suggests, cannot be dismissed. Scenario C is the risk that neither Shotwell nor any investor in SpaceX’s IPO fully prices in.
FAQ: SpaceX AI on the Moon and Orbital Data Centers
What exactly are SpaceX’s AI satellites? SpaceX has filed with the FCC for licensing to operate up to one million AI satellites in orbit. These are not traditional communications satellites — they are designed to function as distributed computing nodes, essentially data centers in space. Each satellite would generate power from solar arrays, run AI inference workloads, and radiate waste heat passively into the cold of space. They are designed to circumvent the energy and cooling crises that are constraining terrestrial AI infrastructure.
Why is SpaceX planning to manufacture satellites on the Moon? The Moon’s gravitational pull is approximately one-sixth of Earth’s. Launching a satellite from the lunar surface requires dramatically less energy than lifting an equivalent payload from Earth. If satellites can be built from materials mined on the Moon — silicon for semiconductors, aluminum and titanium for structures, oxygen for propellant — and launched via electromagnetic mass drivers, the cost per satellite could fall by an order of magnitude compared to Earth-based production.
What is the SpaceX-xAI merger and why does it matter? In February 2026, SpaceX completed an all-stock acquisition of xAI, Elon Musk’s AI company, in a deal valued at $1.25 trillion — the largest private merger in history. The combination links SpaceX’s launch vehicles and satellite infrastructure with xAI’s Grok language models and AI research. The stated goal is to build space-based AI infrastructure: orbital data centers powered by the SpaceX launch system and running xAI software.
When will humans return to the Moon, and what role does SpaceX play? SpaceX’s Starship is the designated Human Landing System for NASA’s Artemis IV mission, targeting a crewed lunar landing in early 2028. Shotwell has publicly committed to this timeline, saying of the 18 Starships currently in production at Starbase: “They better have flown by then.”
Is Gwynne Shotwell the most important person in the space industry? She is arguably the most consequential. While Elon Musk provides the strategic vision and the public narrative, Shotwell has been the operational architect of SpaceX for nearly 24 years — building the commercial manifest, managing regulatory relationships across five federal agencies and dozens of governments, scaling Starlink from concept to 9 million subscribers, and now integrating xAI into a $1.75-trillion pre-IPO enterprise. NASA’s own administrator has called her “excellent.” The industry does not disagree.
The Next Industrial Revolution Will Be Launched from Texas
In the long sweep of economic history, there are moments when the physical location of industrial production shifts so fundamentally that the old maps become useless. The textile mills moved from cottage to factory. Steel moved from forge to blast furnace. Computing moved from mainframe to server farm. Each transition concentrated wealth, reshaped geopolitics, and rendered the previous infrastructure obsolete within a generation.
What Gwynne Shotwell is building — methodically, incrementally, from a factory floor in South Texas — is the infrastructure for a transition of equivalent magnitude. If the AI satellites fly, if the orbital data centers come online, if the lunar manufacturing base is established before Beijing’s equivalent program achieves the same, then the question of where artificial intelligence lives — where it is powered, where it is cooled, where it is built — will have been answered by a woman from a small town in northern Illinois who once convinced a young engineer that his rocket company needed someone to sell it to the world.
She was right then. The next two decades will reveal whether she is right about everything else. The odds, surveyed from a catwalk above eighteen half-built Starships on a Texas factory floor, look better than anyone outside that building has yet fully understood.
Is Anthropic Protecting the Internet — or Its Own Empire?
Anthropic Mythos, the most powerful AI model any lab has ever disclosed, arrived this week draped in the language of altruism. Project Glasswing — the initiative through which a curated circle of Silicon Valley aristocrats gains exclusive access to Mythos — is pitched as an act of civilizational defense. The framing is elegant, the mission is genuinely urgent, and at least part of it is true. But behind the Mythos AI release lies a second story that Dario Amodei’s beautifully worded blog posts conspicuously omit: Mythos is enterprise-only not merely because Anthropic fears hackers, but because releasing it to the open internet would trigger the single greatest act of industrial-scale capability theft in the history of technology. The cybersecurity rationale is real. The economic motive is realer still. Understanding both is how you understand the AI industry in 2026.
What Anthropic Mythos Actually Does — and Why It Terrified Silicon Valley
To appreciate the gatekeeping, you must first reckon with the capability. Mythos is not an incremental model. It occupies an entirely new tier in Anthropic’s architecture — internally designated Copybara — sitting above the public Haiku, Sonnet, and Opus hierarchy that most developers work with. SecurityWeek’s detailed technical breakdown describes it as a step change so pronounced that calling it an “upgrade” is like calling the internet an “improvement” on the fax machine.
The numbers are staggering. Anthropic’s own Frontier Red Team blog reports that Mythos autonomously reproduced known vulnerabilities and generated working proof-of-concept exploits on its very first attempt in 83.1% of cases. Its predecessor, Opus 4.6, managed that feat almost never — near-0% success rates on autonomous exploit development. Engineers with zero formal security training now tell colleagues of waking up to complete, working exploits they’d asked the model to develop overnight, entirely without intervention. One test revealed a 27-year-old bug lurking inside OpenBSD — an operating system historically celebrated for its security — that would allow any attacker to remotely crash any machine running it. Axios reported that Mythos found bugs in every major operating system and every major web browser, and that its Linux kernel analysis produced a chain of vulnerabilities that, strung together autonomously, would hand an attacker complete root control of any Linux system.
Compare that to Opus 4.6, which found roughly 500 zero-days in open-source software — itself a remarkable achievement. Mythos found thousands in a matter of weeks. It then attempted to exploit Firefox’s JavaScript engine and succeeded 181 times, compared to twice for Opus 4.6.
This is also, importantly, what a Claude Mythos vs open source cybersecurity comparison looks like at full resolution: no freely available model comes remotely close, and Anthropic knows it. That gap is the entire product.
The Official Narrative: “We’re Protecting the Internet”
The Anthropic enterprise-only AI decision is framed through Project Glasswing as a coordinated defensive effort — an attempt to patch the world’s most critical software before capability equivalents proliferate to hostile actors. Anthropic’s official Glasswing page commits $100 million in usage credits and $4 million in direct donations to open-source security organizations, with founding partners that read like a geopolitical alliance: Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, and Palo Alto Networks. Roughly 40 additional organizations maintaining critical software infrastructure also gain access. The initiative’s name — Glasswing, after a butterfly whose transparency makes it nearly invisible — is a metaphor for software vulnerabilities that hide in plain sight.
The security rationale for why Anthropic limited Mythos is not confected. In September 2025, a Chinese state-sponsored threat actor used earlier Claude models in what SecurityWeek documented as the first confirmed AI-orchestrated cyber espionage campaign — not merely using AI as an advisor but deploying it agentically to execute attacks against roughly 30 organizations. If that was possible with Claude’s then-current models, what becomes possible with a model that autonomously chains Linux kernel exploits at a near-perfect success rate?
Anthropic’s Logan Graham, head of the Frontier Red Team, captured the threat succinctly: imagine this level of capability in the hands of Iran in a hot war, or Russia as it attempts to degrade Ukrainian infrastructure. That is not science fiction. It is the calculus driving the controlled release. Briefings to CISA, the Commerce Department, and the Center for AI Standards and Innovation are real, however conspicuously absent the Pentagon remains from those conversations — a pointed omission given Anthropic’s ongoing legal war with the Defense Department over its blacklisting.
So yes: the security case is genuine. But it is, at most, half the story.
The Distillation Flywheel: Why Frontier Labs Are Really Gating Their Best Models
Here is the economic argument that no TechCrunch brief or Bloomberg data point has assembled cleanly: Anthropic model distillation is an existential threat to the frontier lab business model, and Mythos is as much a response to that threat as it is a cybersecurity initiative.
The mathematics of adversarial distillation are brutally asymmetric. Training a frontier model costs approximately $1 billion in compute. Successfully distilling it into a competitive student model costs an adversary somewhere between $100,000 and $200,000 — a cost advantage of 5,000-to-one or more in the copier’s favor. No rate-limiting policy, no terms-of-service clause, and no click-through agreement closes that gap. The only defense is controlling access to the teacher in the first place.
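The asymmetry is stark enough to state as a calculation. The figures below are the article's estimates, not measured values.

```python
# The teacher/student cost asymmetry described above, made explicit.
# All dollar figures are the article's estimates.
FRONTIER_TRAINING_USD = 1_000_000_000          # ~$1B to train a frontier model
DISTILL_LOW_USD, DISTILL_HIGH_USD = 100_000, 200_000  # adversary's distillation cost

ratios = (FRONTIER_TRAINING_USD // DISTILL_HIGH_USD,
          FRONTIER_TRAINING_USD // DISTILL_LOW_USD)
print(f"Copier's cost advantage: {ratios[0]:,}x to {ratios[1]:,}x")  # 5,000x to 10,000x
```

At three to four orders of magnitude, the ratio dwarfs any plausible deterrent priced into API terms of service, which is the structural reason access control, not policy, is the binding defense.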
Frontier lab distillation blocking is not a new concern, but 2026 has given it terrifying specificity. Anthropic publicly disclosed in February that three Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. MiniMax alone accounted for 13 million of those exchanges; Moonshot AI added 3.4 million; DeepSeek, notably, needed only 150,000 because it was targeting something far more specific: how Claude refuses things — alignment behavior, policy-sensitive responses, the invisible architecture of safety. A stripped copy of a frontier model without its alignment training, deployed at nation-state scale for disinformation or surveillance, is the nightmare scenario that animated Anthropic’s founding. It may now be unfolding in real time.
What does this have to do with Mythos being enterprise-only? Everything. A model that autonomously writes working exploits for every major OS would, if released via standard API access, provide Chinese distillation campaigns with not just conversational capability but offensive cyber capability — the very thing that makes Mythos commercially unique. Releasing Mythos at scale would be, simultaneously, the greatest act of market self-destruction and the greatest gift to adversarial state actors in the history of enterprise software. Enterprise-only access eliminates both risks at once: it monetizes the capability at maximum margin while denying it to the distillation ecosystem.
This is the distillation flywheel in action. Frontier labs gate the highest-capability models behind enterprise contracts; enterprises pay premium rates for exclusive capability access; the revenue funds the next generation of training runs; the new model is again too powerful to release openly. Each rotation of the wheel deepens the competitive moat, raises the enterprise price floor, and tightens the grip of the three dominant labs over the global AI stack.
Geopolitics at the Model Layer: The Three-Lab Alliance and the New AI Cold War
The Mythos security exploits announcement arrived within 24 hours of a Bloomberg-reported development that is arguably more consequential for the global technology order: OpenAI, Anthropic, and Google — three companies that have spent the better part of three years competing to annihilate each other — began sharing adversarial distillation intelligence through the Frontier Model Forum. The cooperation, modeled on how cybersecurity firms exchange threat data, represents the first substantive operational use of the Forum since its 2023 founding.
The breakdown of what each Chinese lab extracted from Claude reveals something remarkable: three entirely different product strategies, fingerprinted through their query patterns. MiniMax vacuumed broadly — generalist capability extraction at scale. Moonshot AI targeted the exact agentic reasoning and computer-use stack that its Kimi product has been marketing since late 2025. DeepSeek, with a comparatively tiny 150,000-exchange footprint, was almost exclusively interested in Claude’s alignment layer — how it handles policy-sensitive queries, how it refuses, how it behaves at the edges. Each lab was essentially reverse-engineering not just a model but a business plan.
The MIT research documented in December 2025 found that GLM-series models identify themselves as Claude approximately half the time when queried through certain paths — behavioral residue of distillation that no fine-tuning has fully scrubbed. US officials estimate the financial toll of this campaign in the billions annually. The Trump administration’s AI Action Plan has already called for a formal inter-industry sharing center, essentially institutionalizing what the labs are now doing informally.
The geopolitical stakes here extend far beyond corporate IP. When DeepSeek released its R1 model in January 2025 — a model widely believed to incorporate distilled knowledge from OpenAI’s infrastructure — it erased nearly $1 trillion from US and European tech stocks in a single trading session. Markets now understand something that policymakers are only beginning to grasp: control over frontier AI model capabilities is a form of strategic leverage, and distillation is a vector for transferring that leverage without a single line of export-controlled chip silicon crossing a border.
Enterprise Contracts and the New AI Treadmill
The economics of Anthropic enterprise-only AI are becoming increasingly clear as 2026 revenue data enters the public domain.
| Metric | February 2026 | April 2026 |
|---|---|---|
| Anthropic Run-Rate Revenue | $14B | $30B+ |
| Enterprise Share of Revenue | ~80% | ~80% |
| Customers Spending $1M+ Annually | 500 | 1,000+ |
| Claude Code Run-Rate Revenue | $2.5B | Growing rapidly |
| Anthropic Valuation | $380B | ~$500B+ (IPO target) |
| OpenAI Run-Rate Revenue | ~$20B | ~$24-25B |
Sources: CNBC, Anthropic Series G announcement, Sacra
Anthropic’s annualized revenue has now surpassed $30 billion — having started 2025 at roughly $1 billion — representing one of the most dramatic B2B revenue trajectories in the history of enterprise software. Sacra estimates that 80% of that revenue flows from business clients, with enterprise API consumption and reserved-capacity contracts forming the structural backbone. Eight of the Fortune 10 are now Claude customers. Four percent of all public GitHub commits are now authored by Claude Code.
What Project Glasswing does, in this context, is elegant: it creates a new category of enterprise relationship — not API access, not subscription, but strategic partnership with a frontier safety lab deploying the world’s most capable unrestricted model. The 40 organizations in the Glasswing program are not merely beta testers. They are, from a revenue architecture standpoint, being trained — habituated to Mythos-class capability before it becomes generally available, embedded in their security workflows, their CI/CD pipelines, their vulnerability management systems. By the time Mythos-class models are released at scale with appropriate safeguards, the switching cost will be prohibitive.
This is the AI treadmill: each generation of frontier capability, released exclusively to enterprise partners first, creates a loyalty layer that commoditized open-source alternatives cannot easily displace. The $100 million in Glasswing credits is not charity. It is customer acquisition at an unprecedented model tier.
The Counter-View: Responsible Deployment Has a Principled Case
It would be intellectually dishonest to leave the distillation-flywheel critique standing without challenge. The counter-argument is real, and it deserves full articulation.
Platformer’s analysis makes the most compelling version of the responsible-rollout defense: Anthropic’s founding premise was that a safety-focused lab should be the first to encounter the most dangerous capabilities, so it could lead mitigation rather than react to catastrophe. With Mythos, that appears to be exactly what is happening. The company did not race to monetize these cybersecurity capabilities. It briefed government agencies, convened a defensive consortium, committed $4 million to open-source security projects, and staged rollout behind a coordinated patching effort. The vulnerabilities Mythos found in Firefox, Linux, and OpenBSD are being disclosed and patched before the paper trail of their discovery becomes public — precisely the protocol that responsible security research demands.
Alex Stamos, whose expertise in adversarial security spans decades, offered the optimistic framing: if Mythos represents being “one step past human capabilities,” there is a finite pool of ancient flaws that can now be systematically found and fixed, potentially producing software infrastructure more fundamentally secure than anything achievable through traditional auditing. That is not corporate spin. It is a coherent theory of defensive AI benefit.
The Mythos AI release strategy also reflects a genuinely novel regulatory challenge: the EU AI Act’s next enforcement phase takes effect August 2, 2026, introducing incident-reporting obligations and penalties of up to 3% of global revenue for high-risk AI systems. A general release of Mythos into that environment — without governance infrastructure in place — would be commercially catastrophic as well as potentially harmful. Enterprise-gated release buys time for both the regulatory and technical scaffolding to mature.
What Regulators and Open-Source Advocates Must Do Next
The policy implications of Anthropic Mythos extend far beyond one company’s release strategy. They illuminate a structural shift in how frontier AI capability is being distributed — and by whom, and to whom.
For regulators, the Glasswing model raises questions that existing frameworks cannot answer. If a private company now possesses working zero-day exploits for virtually every major software system on earth — as Kelsey Piper pointedly observed — what obligations of disclosure and oversight apply? The fact that Anthropic is briefing CISA and the Center for AI Standards and Innovation is encouraging, but voluntary briefings are not governance. The EU’s AI Act and the US AI Action Plan both need explicit provisions covering what happens when a commercially controlled lab becomes the de facto custodian of the world’s most significant vulnerability database.
For open-source advocates, the distillation dynamic poses an existential dilemma. The same economic logic that drives labs to gate Mythos also drives them to resist open-weights releases of any model that approaches frontier capability. The three-lab alliance against Chinese distillation is, viewed from a certain angle, also an alliance against open-source proliferation of frontier capability — regardless of the nationality of the developer doing the distilling. Open-source foundations, university research labs, and sovereign AI initiatives in Europe, the Middle East, and South Asia should be pressing hard for access frameworks that allow defensive cybersecurity use of frontier capability without being filtered through the commercial relationships of Silicon Valley.
For enterprise decision-makers, the message is unambiguous: the organizations that embed Mythos-class capability into their vulnerability management workflows now will hold a structural security advantage — measured in patch latency and zero-day coverage — over those that wait for open-source equivalents. But that advantage comes with dependency on a single private entity whose political entanglements, from Pentagon disputes to Chinese state-actor confrontations, introduce supply-chain risks that no CISO should ignore.
Anthropic may well be protecting the internet. It is certainly protecting its empire. In 2026, those two imperatives have become so entangled that distinguishing them may be the most important work left for anyone who cares about who controls the infrastructure of the digital world.
Discover more from The Economy
Subscribe to get the latest posts sent to your email.
Anthropic Rolls Out Its Most Powerful Cyber AI Model — Days After Leaking Its Own Source Code
The launch of Claude Mythos Preview and Project Glasswing, mere days after Anthropic accidentally exposed 512,000 lines of its core product’s source code to the world, is either the most audacious act of strategic redirection in Silicon Valley history — or the most revealing window yet into the contradictions at the heart of frontier AI development.
There is a particular species of Silicon Valley irony that only manifests at the very frontier of technological ambition. On March 31st, 2026, an Anthropic employee made a mistake so elementary it would embarrass a first-year computer science undergraduate: a debug source map file was accidentally bundled into a public software release, pointing to a cloud-hosted archive of the company’s most commercially prized product — the source code of Claude Code, its flagship agentic coding assistant. Within hours, 512,000 lines of proprietary TypeScript code, across 1,906 files, were mirrored, forked, and torrent-distributed across the internet, never to be recalled. The repository on GitHub was forked more than 41,500 times before Anthropic could blink. Then, seven days later, Anthropic announced the most capable AI model it has ever built — a cybersecurity behemoth called Claude Mythos Preview — and launched Project Glasswing, a sweeping initiative to secure the world’s critical digital infrastructure. The company publicly described it as a watershed for global security. A watching world could be forgiven for raising an eyebrow.
History rarely serves up irony quite this rich. The firm that accidentally handed a blueprint of its proprietary agent harness to thousands of developers, threat actors, and competitors — the firm that inadvertently revealed the internal codename of its most powerful unreleased model buried in that same code — emerged days later as the standard-bearer for a new era of AI-powered cyber defence. It is, depending on your interpretation, either a masterclass in narrative control or a deeply unsettling indicator of the structural tensions now embedded in the development of frontier AI.
I. A Double Embarrassment: The Anatomy of the Leak
The facts of the Anthropic source code leak are simultaneously mundane and extraordinary. On the morning of March 31st, 2026, Anthropic pushed version 2.1.88 of its @anthropic-ai/claude-code package to the npm public registry. Buried inside was a 59.8-megabyte JavaScript source map file — a developer debugging tool that, when followed to its reference URL on Anthropic’s own Cloudflare R2 storage bucket, yielded a downloadable zip archive of the complete, unobfuscated TypeScript source for Claude Code.
Security researcher Chaofan Shou, an intern at Solayer Labs, spotted the exposure at 4:23 AM Eastern and posted a direct download link on X. It was, as The Register reported, “a mistake as bad as leaving a map file in a publish configuration” — a single misconfigured .npmignore entry. A known bug in Bun, the JavaScript runtime Anthropic had acquired in late 2025, had been causing source maps to ship in production builds for twenty days before the incident. Nobody caught it.
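The failure mode is mechanical enough that a few lines of CI tooling can catch it. A minimal sketch in TypeScript — the sample bundle contents and bucket URL below are illustrative stand-ins, not Anthropic’s actual build output:

```typescript
// Prepublish gate sketch: scan a compiled bundle for source map
// references before the package ships. The bundle and URL here are
// illustrative, not Anthropic's actual build.

// A compiled bundle typically carries its map reference on the last line:
const bundle = [
  '"use strict";',
  'console.log("hello");',
  "//# sourceMappingURL=https://example-bucket.r2.dev/cli.js.map",
].join("\n");

// Extract every sourceMappingURL reference from the bundle's contents.
function findSourceMapRefs(source: string): string[] {
  const pattern = /\/\/[#@] sourceMappingURL=(\S+)/g;
  return [...source.matchAll(pattern)].map((m) => m[1]);
}

const refs = findSourceMapRefs(bundle);
if (refs.length > 0) {
  // In a real CI pipeline this would fail the publish step, not just log.
  console.log(`refusing to publish: found ${refs.length} source map reference(s)`);
}
```

A check like this at publish time — or an explicit `files` allowlist in package.json rather than a deny-list `.npmignore` — would have stopped the bundle at the door regardless of the Bun bug.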
This was, in fact, the second major accidental disclosure of the month. Days earlier, Fortune had reported on a separate leak of nearly 3,000 files from a misconfigured content management system — including a draft blog post describing a forthcoming model described internally as “by far the most powerful AI model” Anthropic had ever developed. That model’s codename: Mythos. Also, apparently: Capybara.
The March–April 2026 Anthropic Disclosure Timeline
| Date | Event |
|---|---|
| ~Late March 2026 | Fortune reports on ~3,000 leaked CMS files; first public confirmation of the Mythos model’s existence and capabilities. |
| March 31, 2026 | Claude Code v2.1.88 ships to npm with embedded source map; 512,000 lines of TypeScript exposed within hours. GitHub repository forked 41,500+ times. |
| March 31 – April 6 | Anthropic issues DMCA takedowns; threat actors seed trojanized forks with backdoors and cryptominers. Axios supply-chain attack occurs simultaneously. |
| April 7, 2026 | Anthropic officially announces Claude Mythos Preview and Project Glasswing. Partners include Apple, Microsoft, Google, Amazon, JPMorgan Chase, and others. |
What the leaked source revealed was considerable: 44 hidden feature flags for unshipped capabilities, a sophisticated three-layer memory architecture, the internal orchestration logic for autonomous “daemon mode” background agents, and — critically — confirmation that a model called Capybara was actively being readied for launch. The VentureBeat analysis noted that Claude Code had achieved an annualised recurring revenue run rate of $2.5 billion by March 2026, making the intellectual property exposure a genuinely material event for a company preparing to go public.
II. Claude Mythos Preview and Project Glasswing: A Technical Step-Change
To understand why the timing of the Mythos announcement matters, one must first grasp the scale of what Anthropic is claiming. Claude Mythos Preview is not a marginal improvement on its predecessors. It occupies, in Anthropic’s internal taxonomy, a fourth tier entirely above the existing Haiku–Sonnet–Opus range — a tier the company internally designates “Capybara.” According to SecurityWeek, it represents “not an incremental improvement but a step change in performance.”
The headline claim is breathtaking in its scope. In the weeks prior to the public announcement, Anthropic ran Mythos against real open-source codebases and, according to its own Project Glasswing announcement, the model identified thousands of zero-day vulnerabilities — flaws previously unknown to software maintainers — across every major operating system and every major web browser. The oldest vulnerability it uncovered was a 27-year-old bug in OpenBSD, a system famous for its security record. A 16-year-old flaw in video processing software survived five million automated test attempts before Mythos found it in a matter of hours. The model autonomously chained together a series of Linux kernel vulnerabilities into a privilege escalation exploit — the kind of attack chain that would previously have required a sophisticated, nation-state-grade human research team.
A single AI agent could scan for vulnerabilities and potentially take advantage of them faster and more persistently than hundreds of human hackers — and similar capabilities will be available across the industry in as little as six months.
The Axios reporting on the rollout puts the dual-use risk with uncomfortable clarity: Mythos is “extremely autonomous” and possesses the reasoning capabilities of an advanced security researcher, capable of finding “tens of thousands of vulnerabilities” that even elite human bug hunters would miss. This is precisely why Anthropic chose not to release it publicly. Instead, Project Glasswing gives curated preview access to 40-plus organisations responsible for critical software infrastructure — including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks — backed by up to $100 million in usage credits and $4 million in direct donations to open-source security organisations including the Apache Software Foundation and OpenSSF.
The model is not cybersecurity-specific. CNBC noted that Mythos’s cyber prowess is a downstream consequence of its exceptional general-purpose coding and reasoning capabilities — a distinction with profound regulatory implications. You cannot restrict a model trained to think brilliantly about code from thinking brilliantly about vulnerabilities in that code.
III. The Deeper Meaning: Irony, Competence, and the New Security Paradigm
The central paradox demands direct engagement: Anthropic, a company whose founding proposition is responsible AI development, leaked its own product’s source code through a packaging error so elementary it required no sophistication to exploit. It then, within the same news cycle, announced an AI model so powerful its own CEO fears its public release — and positioned itself as the primary steward of global cyber defence. One is entitled to hold both thoughts simultaneously.
And yet the strategic coherence of the Mythos launch, viewed against the backdrop of the leak, is hard to dismiss entirely. Anthropic did not choose the timing. The Mythos project had been in development and partner testing for weeks before the Claude Code source code escaped its containment. But the company, having already suffered the reputational bruise of one accidental exposure too many, had an imperative to seize the narrative — to move from embarrassed leaker to principled guardian, rapidly. The result is a masterclass in what crisis communications professionals call “agenda replacement.”
The deeper issue, however, is structural and it transcends any single company. The Axios assessment is stark: Mythos is “the first AI model that officials believe is capable of bringing down a Fortune 100 company, crippling swaths of the internet or penetrating vital national defense systems.” Meanwhile, the head of Anthropic’s frontier red team, Logan Graham, told multiple outlets that comparable capabilities will be in the hands of the broader AI industry within six to eighteen months — from every nation with frontier ambitions, not just the United States. The window for getting ahead of this threat is not a decade. It is, at most, a year.
What the Mythos launch crystallises is a principle that the cybersecurity community has long understood but that corporate AI leaders and policymakers have been reluctant to internalise: the same model property that makes an AI system valuable for defence makes it catastrophically useful for offence. The technical writeup on Anthropic’s red team blog makes this explicit. Mythos can “reverse-engineer exploits on closed-source software” and turn known-but-unpatched vulnerabilities into working exploits. Gadi Evron, founder of AI security firm Knostic, told CNN that “attack capabilities are available to attackers and defenders both, and defenders must use them if they’re to keep up.” There is no asymmetry available — only the question of who moves first.
IV. The Geopolitical and Regulatory Reckoning
The implications of Anthropic Mythos extend well beyond corporate strategy. The U.S.-China AI competition has already entered the domain of active cyber operations. A Chinese state-sponsored group, as Fortune reported, used an earlier Claude model to target approximately 30 organisations in a coordinated espionage campaign before Anthropic detected and curtailed the activity. If a Claude model that predates Mythos by several capability generations was sufficient to mount a significant intelligence operation, the implications of Mythos-class capability in hostile hands are genuinely alarming.
A source briefed on Mythos told Axios: “An enemy could reach out and touch us in a way they can’t or won’t with kinetic operations. For most Americans, a conventional conflict is ‘over there.’ With a cyberattack, it’s right here.” This framing matters. The doctrine of nuclear deterrence rested partly on the difficulty of acquisition. The doctrine of cyber deterrence in the Mythos era rests on nothing — the marginal cost of deploying AI-accelerated attack capability approaches zero for any state or non-state actor with API access to a comparable model.
Anthropic’s relationship with Washington is, to put it diplomatically, complicated. The company is simultaneously briefing the Cybersecurity and Infrastructure Security Agency, the Commerce Department, and senior officials across the federal government on Mythos’s capabilities — while locked in active litigation with the Pentagon, which has labelled Anthropic a supply-chain risk following the company’s refusal to permit autonomous targeting or battlefield surveillance applications. The AI safety firm that declined to arm American drones is now, in the same breath, offering American critical infrastructure a first-mover advantage against AI-powered adversaries. The philosophical coherence of this position is defensible; its political navigation will be considerably harder.
For regulators, the Mythos announcement poses a question for which existing frameworks have no satisfying answer. The EU AI Act’s tiered risk classifications were not designed for a model that is simultaneously a breakthrough productivity tool, a national security asset, and a potential weapon of mass cyber-disruption. The Project Glasswing model — voluntary, industry-led, access-gated — is a plausible short-term mechanism. It is not a durable regulatory framework. And as Logan Graham made clear, the window before other frontier labs — and the Chinese state — reach comparable capability is measured in months, not years.
V. Verdict: A Reckoning Dressed as a Launch
Editorial Assessment
The Mythos announcement is not primarily a product launch. It is a reckoning — one that Anthropic has had the narrative dexterity to package as a strategic initiative rather than a confession. The source code leak was, at the level of operational security, an embarrassment of the first order. But it was also, unintentionally, a proof of concept for the vulnerability landscape that Mythos was built to address. Anthropic’s own systems failed a test far simpler than any that Mythos could conceivably pose to a determined adversary.
That irony is not merely cosmetic. It is instructive. No organisation — not even a frontier AI lab whose entire value proposition rests on the responsible management of powerful systems — is immune to the mundane failure modes of human error, toolchain misconfiguration, and the accumulated technical debt of moving too fast. The question is not whether Anthropic can be trusted with Mythos. The question is whether any institution, in any country, is structurally capable of managing the governance of AI capabilities that are advancing faster than the legal and regulatory architectures designed to contain them.
Dario Amodei framed the Project Glasswing rollout as an opportunity to “create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities.” This is not rhetorical excess. It is, technically, accurate: the same capability that can unearth a 27-year-old OpenBSD bug or chain Linux kernel flaws into a privilege escalation exploit can, in the hands of defenders, systematically eliminate such vulnerabilities from the world’s most important software. The question is not whether this technology is transformative. It is whether the institutional infrastructure required to ensure that transformation benefits defenders more than attackers can be assembled in the time available.
Six months. Eighteen at the outside. That is the horizon Logan Graham has placed on the proliferation of Mythos-class capabilities across the industry. The global financial cost of cybercrime already runs to an estimated $500 billion annually, a figure that was compiled before any model approached Mythos’s level of autonomous vulnerability discovery. Policymakers in Washington, Brussels, and Beijing who are not currently treating this as an emergency are, as one source briefed on Mythos told Axios with commendable directness, “not remotely ready.”
Anthropic rolled out its most powerful cyber AI model days after leaking its own source code. The irony is real. So is the threat. And so, potentially, is the opportunity — if the institutions responsible for governing it can move at the speed the technology demands, rather than the speed at which governments customarily prefer to operate. History suggests that gap will be considerable. The Mythos timeline suggests that gap may, for once, be decisive.
Perplexity’s $450M Pivot Changes Everything
Perplexity’s ARR surged past $450M in March 2026 after a 50% monthly jump, driven by its AI agent “Computer.” Here’s what this pivot means for Google, OpenAI, and the future of the internet.
How a search upstart quietly rewired the economics of AI — and why the rest of Silicon Valley should be paying very close attention
There is a phrase that haunts every incumbent technology company: silent pivot. Not the public declaration of reinvention, draped in keynote slides and press releases, but the quiet moment when a company stops doing the thing you thought it did — and starts doing the thing that will eventually eat you alive.
Perplexity AI has just executed one of those pivots. And the numbers suggest it is working with a speed that should alarm everyone from Mountain View to Redmond.
Perplexity’s estimated annual recurring revenue rose to more than $450 million in March, after the launch of a new agent tool and a shift to usage-based pricing (Investing.com). That figure represents a 50% jump in a single month — a rate of acceleration that, even in an industry accustomed to hyperbolic growth curves, demands serious analytical attention. This is not a company finding its feet in a niche. This is a company stepping onto a stage it intends to own.
From Answers to Actions: What “Computer” Actually Changes
To understand why this revenue surge matters, you need to understand what Perplexity has actually built — and why it is architecturally different from everything that came before it.
On February 25, 2026, Perplexity launched “Computer,” a multi-model AI agent that coordinates 19 different AI models to complete complex, multi-step workflows entirely in the background. This is not another chat tool that produces quick answers — it is a full-blown agentic AI system, a digital worker that takes a user’s goal, breaks it into steps, spins up specialized sub-agents, and keeps running until the job is done (Build Fast with AI, Medium).
The strategic architecture here is genuinely novel. Computer functions as what Perplexity describes as “a general-purpose digital worker” — a system that accepts a high-level objective, decomposes it into subtasks, and delegates those subtasks to whichever AI model is best suited for each one (VentureBeat). Anthropic’s Claude Opus 4.6 serves as the core reasoning engine. Google’s Gemini handles deep research. OpenAI’s GPT-5.2 manages long-context recall. Each sub-task routes to the best available model, automatically.
This is not a feature. It is a philosophy — and the philosophy has a name: model-agnostic orchestration. Perplexity is betting that no single AI provider will dominate every cognitive capability, and that the company best positioned to win the next decade is the one that can route across all of them intelligently.
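The routing idea is simple enough to sketch. A minimal, hypothetical version in TypeScript — the task categories and the routing table are assumptions for illustration, not Perplexity’s actual dispatch logic:

```typescript
// Model-agnostic orchestration, sketched: decompose a goal into typed
// subtasks, then route each one to whichever model suits it best.
// Task kinds and model names are illustrative assumptions.

type TaskKind = "reasoning" | "research" | "long_context";

interface Subtask {
  kind: TaskKind;
  prompt: string;
}

// The routing table embodies the philosophy: no single provider wins
// every cognitive capability, so the orchestrator stays neutral.
const routingTable: Record<TaskKind, string> = {
  reasoning: "claude-opus-4.6",
  research: "gemini-deep-research",
  long_context: "gpt-5.2",
};

function route(task: Subtask): string {
  return routingTable[task.kind];
}

// A high-level goal, already decomposed into subtasks.
const goal: Subtask[] = [
  { kind: "research", prompt: "gather the quarterly filings" },
  { kind: "long_context", prompt: "summarize the 400-page report" },
  { kind: "reasoning", prompt: "draft the investment memo" },
];

for (const task of goal) {
  console.log(`${task.prompt} -> ${route(task)}`);
}
```

The design choice worth noticing is that the routing table, not any single model, is the product: swapping a provider is a one-line change, which is precisely the vendor-neutrality the company is selling.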
The bet appears to be paying off. Perplexity’s own internal data supports this thesis: the company’s enterprise usage shifted dramatically over the past year, from 90% of queries routing to just two models in January 2025, to no single model commanding more than 25% of usage by December 2025 (VentureBeat).
The Pricing Revolution Hidden Inside the Revenue Story
It would be tempting to read the $450 million ARR headline as a simple user-growth story. It is not. The more consequential development is what Perplexity has done to its pricing architecture — and the implications that has for the entire AI industry’s business model.
The $200 monthly Max tier includes the Computer agent itself, 10,000 monthly credits, unlimited Pro searches, access to advanced models including GPT-5.2 and Claude Opus 4.6, Sora 2 Pro video generation, the Comet AI browser, and unlimited Labs usage (SentiSight.ai). At the enterprise tier, the price rises to $325 per seat per month.
This is usage-based pricing in its most sophisticated form — not a flat subscription for access, but a credit system that scales revenue with the actual work performed. The economic logic is powerful: the more value an agent delivers, the more credits it consumes, and the more the customer pays. Revenue becomes proportional to outcomes, not to logins.
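The arithmetic of that credit model is worth making concrete. In this back-of-envelope sketch, only the $200 base price and 10,000 included credits come from the reported Max tier; the overage rate is a hypothetical assumption:

```typescript
// Credit-based billing sketch: revenue scales with work performed.
// Base price and included credits are from the reported Max tier;
// the 3-cents-per-credit overage rate is a made-up illustration.

const BASE_PRICE_USD = 200;
const INCLUDED_CREDITS = 10_000;
const OVERAGE_CENTS_PER_CREDIT = 3; // hypothetical

function monthlyBill(creditsUsed: number): number {
  const overage = Math.max(0, creditsUsed - INCLUDED_CREDITS);
  return BASE_PRICE_USD + (overage * OVERAGE_CENTS_PER_CREDIT) / 100;
}

// A light user pays the flat base; a heavy agent workload pays for outcomes.
console.log(monthlyBill(4_000));  // 200
console.log(monthlyBill(25_000)); // 650
```

Under this structure, a seat that does ten times the work produces a multiple of the revenue: the coupling of price to outcomes that flat subscriptions never had.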
This represents a fundamental rupture with the advertising model that has funded the internet for three decades. Google monetizes attention. Perplexity is building a business that monetizes completion — the successful execution of a task. These are not subtle variants of the same model. They are philosophically opposed.
Perplexity has significantly expanded its pricing structure in 2026, with the platform now spanning five subscription tiers — Free, Pro, Max, Enterprise Pro, and Enterprise Max — alongside a developer API ecosystem that includes the Sonar API, Search API, and the newer Agentic Research API (Finout). The Agentic Research API, in particular, positions Perplexity not just as a consumer product but as foundational AI infrastructure for any developer who wants to build on top of agent-grade search.
The Google Problem, Sharpened
Search incumbency has always been more durable than technologists predicted, for a simple reason: the switching cost for a behavior performed forty times a day is enormous. Perplexity, in its original form as an “answer engine,” was trying to change a habit. Now it is trying to eliminate a category.
When a Perplexity agent builds you a Bloomberg Terminal-style financial dashboard from scratch, or automates a full content production workflow over three days without requiring a single manual search query, the question of whether it is “better than Google” becomes irrelevant. The agent is doing something Google was never designed to do. It is not competing for your search box. It is competing for your workday.
Perplexity now has more than 100 million monthly active users from its search and agent tools, including tens of thousands of enterprise clients (Investing.com). That enterprise penetration is the telling number. Consumer search habits die slowly; enterprise procurement cycles move when ROI is demonstrable. The fact that enterprise customers are already embedding Perplexity’s agents into production workflows suggests the value proposition has moved well beyond novelty.
More than 100 enterprise customers contacted Perplexity over a single weekend demanding access after early demonstrations went viral: users showed the agent building Bloomberg Terminal-style financial dashboards, replacing six-figure marketing tool stacks in a single weekend, and automating workflows that previously required dedicated teams (VentureBeat).
That is not a product demo going viral. That is product-market fit, documented in real time.
Competitive Positioning: Where Perplexity Sits in the New AI Stack
The $450 million ARR figure needs to be read against the broader competitive landscape — and here, the picture becomes more interesting, and more dangerous for Perplexity’s rivals.
OpenAI’s Operator and Anthropic’s Claude Cowork both represent agent-layer ambitions from the model providers themselves. Microsoft Copilot brings enterprise distribution at a scale Perplexity cannot match organically. Google’s own agentic ambitions are embedded across its entire product surface. Against this array of well-resourced competitors, Perplexity’s advantages are specific and worth understanding precisely.
First: model neutrality. Neither OpenAI nor Google will ever build a genuine orchestration layer that routes work to a competitor’s model. Perplexity has no such constraint. Its Computer agent already orchestrates Claude, GPT, Gemini, Grok, and others simultaneously. For enterprises that want best-of-breed reasoning rather than vendor lock-in, that neutrality is structurally valuable.
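To make the orchestration argument concrete, here is a minimal sketch of what a model-neutral routing layer looks like in principle. The task types, provider rankings, and function names are illustrative assumptions, not Perplexity’s actual routing logic:

```python
# Hypothetical model-neutral router: send each subtask to the best-ranked
# provider that is currently available. A vendor lock-in architecture cannot
# do this; a neutral orchestration layer can. All names here are invented.

ROUTING_TABLE = {
    # task_type: ranked (provider, model) preferences
    "deep_reasoning":  [("anthropic", "claude"), ("openai", "gpt")],
    "code_generation": [("openai", "gpt"), ("anthropic", "claude")],
    "realtime_search": [("xai", "grok"), ("google", "gemini")],
}

def route(task_type: str, available: set[str]) -> tuple[str, str]:
    """Pick the highest-ranked provider for this task that is available."""
    for provider, model in ROUTING_TABLE.get(task_type, []):
        if provider in available:
            return provider, model
    raise LookupError(f"no provider available for {task_type!r}")

# An outage at one vendor degrades gracefully instead of blocking the agent:
print(route("deep_reasoning", {"openai", "google"}))  # ('openai', 'gpt')
```

The structural point is in the fallback: because no single vendor sits above the routing table, a neutral orchestrator can always substitute the next-best model, which is exactly the flexibility a model provider’s own agent cannot offer against itself.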
Second: search heritage. By May 2025, Perplexity was already serving about 30 million monthly users and processing 780 million queries a month — more than 20% month-over-month growth — feeding a data flywheel that sharpens search relevance and agent targeting. Every query is a training signal. An agent that understands how real professionals actually search has a compounding advantage over agents parachuted in from a model laboratory.
Third: distribution velocity. Sacra projected Perplexity would reach $656 million in ARR by the end of 2026 — a target that now looks not just achievable but potentially conservative, given the March surge to $450 million. The question is no longer whether Perplexity can scale. It is whether it can maintain pricing power as competitors intensify.
The Publisher Dimension: A Redistribution of Value Worth Watching
One underreported dimension of the Perplexity story is its relationship with the media and publishing ecosystem — a relationship that has been contentious, but is evolving in ways that may prove prescient.
Publishers have, with some justification, worried that AI search engines extract the value of their journalism without adequately compensating them. Perplexity has responded with a revenue-sharing program and formal content partnerships, signaling an intent to build an ecosystem rather than simply scrape one.
Perplexity announced a $42.5 million fund to share AI search revenue with publishers, reflecting an investment in ecosystem partnerships. If agentic AI becomes the dominant interface through which people consume information and execute tasks, the entity that controls the citation layer — the sourcing infrastructure of AI outputs — will hold extraordinary leverage. Perplexity is positioning itself to be that entity.
This is an audacious bet. It may also be a necessary one. A sustainable AI search economy requires content creators to keep creating. A company that figures out how to share value equitably with its content suppliers will have a structural advantage over one that treats the web as a free resource.
The Risks That the Revenue Surge Cannot Hide
Intellectual honesty demands acknowledging what the $450 million figure does not tell us.
The credit-based pricing model, while economically elegant, introduces revenue variability that flat subscriptions do not. Perplexity has not published a per-task credit conversion table — there is no page that says a research task costs X credits — making budgeting difficult for heavy users. At the enterprise level, opacity in pricing is a trust problem. CFOs who cannot model their AI spend will negotiate hard caps or find vendors who offer predictability.
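The budgeting problem is easy to see in miniature. Since Perplexity has published no per-task credit table, the costs below are placeholder assumptions; the point is the shape of the forecast a CFO is being asked to make, not real prices:

```python
# Assumed credit costs per task type -- illustrative only; Perplexity has
# not published real figures, which is precisely the budgeting problem.
ASSUMED_CREDITS = {"quick_search": 1, "research_task": 25, "agent_workflow": 200}

def monthly_credit_forecast(usage: dict[str, int]) -> int:
    """Estimate monthly credit burn from projected task counts per type."""
    return sum(ASSUMED_CREDITS[task] * count for task, count in usage.items())

# A team's projected month: the heavy agent workflows dominate the bill even
# though they are a tiny fraction of total task volume.
team = {"quick_search": 2000, "research_task": 120, "agent_workflow": 15}
print(monthly_credit_forecast(team))  # 2000*1 + 120*25 + 15*200 = 8000
```

Note that under these assumed prices, 15 agent workflows cost as much as 3,000 quick searches: small changes in agent adoption swing the bill dramatically, which is exactly the variability a flat subscription hides.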
There is also the trust question that underlies Perplexity’s entire enterprise push. The company is three years old and asking chief information security officers to route sensitive Snowflake data, legal contracts, and proprietary business intelligence through its platform. In highly regulated industries — finance, healthcare, law — that ask may be a bridge too far in 2026, regardless of the technology’s capability.
And then there is the litigation risk. Amazon filed suit against Perplexity on November 4, 2025, over the startup’s agentic shopping features in the Comet browser, arguing that automated agents must identify themselves and comply with site rules. As agents begin operating across the open web at scale, the legal frameworks governing their behavior are still being written. The company moving fastest is also the one most exposed to adverse precedent.
The Bigger Question: Is This the Moment AI Agents Become the New Interface?
Strip away the funding rounds, the valuation multiples, and the competitive posturing, and the Perplexity story is really about a single hypothesis: that the next dominant interface for human-computer interaction will not be a search box, a browser, or a chat window. It will be a goal.
You describe an outcome. The agent handles everything else.
A February 2026 survey by CrewAI found that 100% of surveyed enterprises plan to expand their use of agentic AI this year, with 65% already using AI agents in production and organizations reporting they have automated an average of 31% of their workflows. Fortune Business Insights projects the global agentic AI market will grow from $9.14 billion in 2026 to $139 billion by 2034.
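It is worth translating that projection into a growth rate, because the headline figures obscure just how aggressive it is. A quick back-of-the-envelope check:

```python
# Implied compound annual growth rate (CAGR) of the Fortune Business Insights
# projection cited above: $9.14B in 2026 to $139B in 2034.
start, end, years = 9.14, 139.0, 2034 - 2026
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # implied CAGR: 40.5%
```

A sustained 40%+ annual growth rate for eight straight years would be extraordinary for any market, which is reason enough for the skepticism in the next paragraph.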
Those numbers should not be taken as gospel — market projection firms have a well-documented tendency to extrapolate peak enthusiasm into hockey-stick curves. But the directional signal is clear. Enterprises are not experimenting with agents. They are deploying them.
Perplexity’s 50% monthly revenue jump is, on one reading, a company hitting a product-market fit inflection point. On a larger reading, it is a leading indicator of an industry-wide shift in how organizations will structure cognitive work. When knowledge workers stop searching and start delegating, the companies that built the infrastructure for that delegation will be worth considerably more than their current valuations suggest.
A Quotable Close
The history of technology is punctuated by moments when a product category collapses into a feature — and a feature expands into a platform. The search box was a feature of the browser. The browser became a platform for the web. The web became the substrate for the cloud.
Aravind Srinivas is betting that the agent layer will perform the same architectural alchemy: absorbing search, absorbing browsers, absorbing the application stack above them, and emerging as the new interface through which people and organizations interact with information, services, and each other.
A 50% monthly revenue jump to $450 million is not proof that he is right. But it is the most compelling evidence yet that the bet is live — and that the clock, for every company that still depends on attention as its primary product, has started.
The next billion-dollar question in technology is not “who builds the best AI model?” It is “who builds the best layer between the human and all the models?” Perplexity, right now, has the most credible answer.
Discover more from The Economy