China’s Cheap AI Is Designed to Hook the World on Its Tech
How China’s low-cost AI models—10 to 20 times cheaper than US equivalents—are quietly building global tech dependence, reshaping the AI race, and challenging American dominance.
In late February 2026, ByteDance unveiled Seedance 2.0, a video-generation model so capable—and so strikingly inexpensive—that it sent tremors through Silicon Valley boardrooms. The timing was no accident. Within days, Anthropic filed a legal complaint alleging that a Chinese national had systematically harvested outputs from Claude to train a rival model, a practice known in the industry as “distillation.” The accusation crystallized what many AI executives had quietly been saying for months: China is not simply competing in artificial intelligence. It is running a fundamentally different play.
The strategy is elegant in its ruthlessness. While American frontier labs—OpenAI, Google DeepMind, Anthropic—compete on the technological frontier, racing to build the most powerful and most expensive models imaginable, China’s leading AI developers are racing in the opposite direction. They are making AI astonishingly cheap, broadly accessible, and deeply entangled in the infrastructure of developing economies. Understanding how cheap AI tools from China compare to American frontier models is not merely a technology question. It is a question about who writes the rules of the next era of the global economy.
| Metric | Figure |
|---|---|
| Chinese AI global market share, late 2025 | 15% (up from 1% in 2023) |
| Cost advantage vs. US equivalents | Up to 20× cheaper |
| Alibaba AI investment commitment through 2027 | $53 billion |
The Sputnik Moment That Changed Everything
When DeepSeek released its R1 reasoning model in January 2025, the reaction in Washington was somewhere between bewilderment and alarm. US officials, accustomed to treating American AI supremacy as a structural given, struggled to explain how a Chinese startup—operating under heavy export restrictions that denied it access to Nvidia’s most advanced chips—had produced a model that matched, or in certain benchmarks exceeded, OpenAI’s o1. Reuters (2025) described the release as “a wake-up call for the US tech industry.”
The label that stuck was borrowed from Cold War history. Investors, policymakers, and researchers began calling DeepSeek’s R1 “a Sputnik moment”—a demonstration that the adversary had capabilities that had been systematically underestimated. The reaction was visceral: Nvidia lost nearly $600 billion in market capitalization in a single trading session. But the deeper implication was not about one model or one company. It was about a method.
“The real disruption isn’t that China built a good model. It’s that China built a cheap model—and cheap changes everything about adoption curves, lock-in, and geopolitical leverage.”
— Senior analyst, Brookings Institution Center for Technology Innovation
DeepSeek’s R1 was trained at an estimated cost of under $6 million, a fraction of what OpenAI reportedly spent on GPT-4. The model was open-sourced, triggering an avalanche of derivative models across Southeast Asia, Latin America, and sub-Saharan Africa. The impact of low-cost Chinese AI on US dominance had moved from hypothetical to measurable. By the fourth quarter of 2025, Chinese AI models had captured approximately 15% of global market share, up from roughly 1% just two years earlier, according to estimates cited by CNBC (2025).
Five Models and Counting: The Pace Accelerates
DeepSeek was only the opening act. Within weeks, five additional significant Chinese AI models had shipped—a pace that surprised even close observers of China’s technology sector. ByteDance’s Doubao and the Seedance family of multimodal models, Alibaba’s Qwen series, Baidu’s ERNIE updates, and Tencent’s Hunyuan collectively constitute what The Economist (2025) termed China’s “AI tigers.”
American labs have pushed back hard. Anthropic’s legal complaint over distillation practices reflects a broader industry concern: that Chinese developers are not merely competing on engineering talent but systematically harvesting the intellectual output of Western models to accelerate their own. The accusation is significant because distillation—training a smaller, cheaper model on the outputs of a larger one—is not illegal in most jurisdictions, but it sits in a legal and ethical gray zone that could reshape how frontier AI outputs are licensed and protected. Chatham House (2025) has observed that the practice “blurs the line between legitimate benchmarking and intellectual property extraction at scale.”
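Mechanically, distillation is disarmingly simple, which is part of why it is so hard to police. Below is a minimal sketch of the output-harvesting pattern at issue, with the teacher client stubbed rather than wired to any real API and every prompt invented for illustration:

```python
import json

# Stand-in for a frontier "teacher" model's API. In a real harvesting
# campaign this would be millions of paid API calls; it is stubbed here
# so the sketch runs self-contained.
def query_teacher(prompt: str) -> str:
    return f"[teacher completion for: {prompt}]"

prompts = [
    "Explain TCP slow start in two sentences.",
    "Draft a force majeure clause for a supply contract.",
    "Summarize the EU AI Act's risk tiers.",
]

# Step 1: harvest (prompt, completion) pairs from the teacher.
dataset = [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

# Step 2: persist as JSONL, the format most supervised fine-tuning
# pipelines accept as training data.
with open("distill_train.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")

# Step 3 (not shown): fine-tune a smaller, cheaper "student" model on
# this file. Scaled from three prompts to millions of exchanges, this
# is the pattern the distillation complaints describe.
```

Nothing in that loop is technically exotic. The controversy is entirely about scale, consent, and what a teacher model’s terms of service can realistically prohibit.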
UBS Picks Its Winners
Not all Chinese models are created equal, and sophisticated institutional actors are drawing distinctions. Analysts at UBS, in a widely circulated note from early 2026, indicated a preference for certain Chinese models—specifically Alibaba’s Qwen and ByteDance’s Doubao—over DeepSeek for enterprise deployments, citing more consistent performance on structured reasoning tasks and better compliance tooling for regulated industries. The note was striking precisely because it came from a global financial institution with every incentive to avoid geopolitical controversy. The risks of dependence on Chinese AI platforms, apparently, are acceptable to some of the world’s most sophisticated institutional investors when the price differential is this large.
Key Strategic Insights
- China’s cost advantage is structural, not temporary. Chinese models are priced 10 to 20 times cheaper per API call, and the gap reflects architectural innovation, lower energy costs, and in some cases state subsidy—making it durable over time (a worked pricing example follows this list).
- Emerging markets are the primary battleground. In Indonesia, Nigeria, Brazil, and Vietnam, Chinese AI tools have penetrated developer ecosystems faster than US equivalents because local startups and governments simply cannot afford American pricing.
- Open-sourcing is a deliberate geopolitical instrument. By releasing models under permissive licenses, Chinese developers seed global ecosystems with their architectures, creating dependency on Chinese tooling, Chinese fine-tuning expertise, and Chinese cloud infrastructure.
- The distillation controversy signals a new phase. As US labs tighten access and output monitoring, the cat-and-mouse dynamics of knowledge extraction will intensify, potentially reshaping how AI models are licensed globally.
- Hardware self-reliance is advancing faster than anticipated. Cambricon’s revenue surged over 200% in 2025 as domestic chip demand spiked, while Baidu’s Kunlun AI chips are now deployed across major Chinese data centers at scale.
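To make the pricing bullet concrete, here is a toy comparison built on the relative multipliers from the table in the next section. The $10-per-million-token baseline and the 500-million-token monthly workload are illustrative assumptions, not quoted rates:

```python
# Toy illustration of the 10-20x pricing gap. Baseline price and
# workload are assumed round numbers; multipliers mirror the table below.
BASELINE_USD_PER_M_TOKENS = 10.00   # assumed US frontier-model rate
MONTHLY_TOKENS_M = 500              # assumed workload: 500M tokens/month

relative_cost = {
    "GPT-4o (baseline)": 1.00,
    "Claude 3.5":        0.90,
    "Gemini Ultra":      0.85,
    "DeepSeek R1":       0.075,  # midpoint of the 0.05-0.10x range
    "Qwen 2.5":          0.07,
    "Doubao":            0.06,
    "ERNIE 4.0":         0.08,
}

for model, mult in relative_cost.items():
    monthly = BASELINE_USD_PER_M_TOKENS * mult * MONTHLY_TOKENS_M
    print(f"{model:<18} ${monthly:>7,.0f}/month")
```

On these assumptions, a workload that costs $5,000 a month on the US baseline lands between $300 and $400 a month on the Chinese models: the difference between a budget line and a rounding error for a startup in Lagos or Jakarta.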
The Comparison Table: US vs. Chinese AI
| Model | Origin | Relative API Cost | Global Reach Strategy | Open Source? | Hardware Dependency |
|---|---|---|---|---|---|
| OpenAI GPT-4o | 🇺🇸 US | Baseline (1×) | Enterprise, developer API; premium pricing | No | Nvidia (Azure) |
| Anthropic Claude 3.5 | 🇺🇸 US | ~0.9× | Safety-focused enterprise; selective access | No | Nvidia (AWS, GCP) |
| Google Gemini Ultra | 🇺🇸 US | ~0.85× | Google ecosystem integration; enterprise cloud | Partial (Gemma) | Google TPUs |
| DeepSeek R1 | 🇨🇳 CN | ~0.05–0.10× | Global open-source seeding; developer ecosystems | Yes | Nvidia H800 / domestic chips |
| Alibaba Qwen 2.5 | 🇨🇳 CN | ~0.07× | Emerging markets via Alibaba Cloud; multilingual | Yes | Alibaba custom silicon |
| ByteDance Doubao / Seedance | 🇨🇳 CN | ~0.06× | Consumer apps; TikTok ecosystem integration | Partial | Mixed (domestic + Nvidia) |
| Baidu ERNIE 4.0 | 🇨🇳 CN | ~0.08× | Government contracts; domestic enterprise | No | Baidu Kunlun chips |
Winning the Hardware War From Behind
No analysis of how China’s cheap AI is creating global tech dependence is complete without confronting the chip question. The Biden and Trump administrations’ export controls—restricting Nvidia’s H100, A100, and subsequent architectures from reaching Chinese buyers—were designed to create a permanent computational ceiling. The assumption was that frontier AI requires frontier silicon, and frontier silicon would remain American. That assumption is under sustained pressure.
Huawei’s Atlas 950 AI training cluster, unveiled in late 2025, represents the most credible challenge yet to Nvidia’s dominance in the Chinese market. Built around Huawei’s Ascend 910C processor, the cluster offers training performance that analysts at the Financial Times (2025) described as “approaching, though not yet matching, Nvidia’s H100 at scale.” More telling is the trajectory. Cambricon Technologies, China’s leading AI chip specialist, reported revenue growth exceeding 200% in fiscal 2025 as domestic AI developers pivoted aggressively to domestic silicon under regulatory pressure and patriotic procurement directives.
Baidu’s Kunlun chip line, meanwhile, is now powering a significant share of the company’s own inference workloads—reducing dependence on imported hardware at the exact moment when US export restrictions are tightening. China’s AI strategy for becoming an economic superpower is not predicated on surpassing American chip technology in the near term. It is predicated on becoming self-sufficient enough to sustain its cost advantage while US competitors remain anchored to expensive, constrained silicon supply chains. Brookings (2025) has noted that “China’s domestic chip ecosystem has advanced by at least two to three years relative to projections made in 2022.”
The Emerging Market Gambit
Silicon Valley’s pricing model was always implicitly designed for Silicon Valley’s clients: well-capitalized Western enterprises with robust cloud budgets and tolerance for compliance complexity. The rest of the world—which is to say, most of the world—was an afterthought. Chinese AI developers recognized this gap and moved into it with precision.
In Vietnam, government agencies have begun piloting Alibaba’s Qwen models for document processing and citizen services, drawn by price points that make comparable US offerings economically untenable for a developing-economy public sector. In Nigeria, startup accelerators report that the majority of AI-native companies in their cohorts are building on Chinese model APIs—not out of ideological preference but because the economics are simply not comparable. Indonesian developers have contributed tens of thousands of fine-tuned model variants to open-source repositories built on DeepSeek and Qwen foundations, creating exactly the kind of community lock-in that platform companies spend billions trying to manufacture.
The implications for tech sovereignty are profound and troubling. As Chatham House (2025) argues, when a country’s critical AI infrastructure is built on a foreign model’s weights, architecture, and increasingly its cloud services, the notion of digital sovereignty becomes largely theoretical. Data flows toward Chinese servers. Fine-tuning expertise clusters around Chinese tooling ecosystems. Regulatory leverage accrues to Beijing.
“Ubiquity is more powerful than superiority. The question is not which AI is best—it is which AI is everywhere.”
Alibaba’s $53 Billion Signal
If there was any residual doubt about the strategic ambition behind China’s AI push, Alibaba’s announcement of a $53 billion AI investment commitment through 2027 should have resolved it. The scale dwarfs most national AI strategies and rivals the combined R&D budgets of several major US technology companies. Critically, the investment is not concentrated in a single prestige project. It is spread across cloud infrastructure, model development, developer tooling, international data centers, and—pointedly—subsidized access programs for emerging-market customers.
This is the architecture of dependency, built deliberately. Offer cheap access. Embed your tools in critical workflows. Build the developer community on your frameworks. Then, when the switching costs are high enough and the alternatives have atrophied from neglect, the pricing conversation changes. It is the playbook that Amazon ran with AWS, that Google ran with Search, and that Microsoft ran with Office—now being executed at geopolitical scale by a state-aligned corporate champion with essentially unlimited political backing. Forbes (2025) characterized the investment as “less a corporate bet than a national infrastructure program wearing a corporate uniform.”
Is China Winning the AI Race?
The question is, in one sense, the wrong question. “Winning” implies a finish line, a moment when one competitor’s supremacy is declared and ratified. Technological competition does not work that way, and the AI race least of all. What China is doing is more subtle and, in the long run, potentially more consequential: it is restructuring the terms of global AI participation in ways that favor Chinese platforms, Chinese architectures, and Chinese geopolitical interests.
On pure technical capability, American frontier labs retain meaningful advantages at the absolute cutting edge. OpenAI’s reasoning models, Google’s multimodal systems, and Anthropic’s safety-focused architectures represent genuine innovations that Chinese competitors are still working to match. The New York Times (2025) noted that US models continue to lead on complex multi-step reasoning and long-context tasks by measurable margins. But capability at the frontier matters far less than capability at the median—at the price point, integration depth, and ecosystem richness that determine what the world actually uses.
China is winning that race. Not through theft or brute force, though allegations of distillation practices suggest the competitive lines are not always clean, but through a coherent, patient, and strategically sophisticated campaign to make Chinese AI the default choice for a world that cannot afford American alternatives. The risks of dependence on Chinese AI platforms—data sovereignty concerns, potential for access interruption under geopolitical pressure, embedded architectural assumptions that may encode specific values—are real and documented. They are also, increasingly, being accepted as the price of access by a world that Western AI pricing has effectively priced out.
History suggests that the technology that becomes ubiquitous becomes infrastructure, and infrastructure becomes power. China’s AI developers have understood this clearly. The rest of the world is just beginning to reckon with what it means.
Agency in the Age of AI: Why Human Initiative — Not Artificial Agents — Will Define the Next Decade
On February 15, 2026, Sam Altman posted two sentences to X that encapsulated a decade of Silicon Valley ambition in a single breath. OpenAI had acquired OpenClaw, an open-source AI agent framework that could autonomously browse, code, and execute complex multi-step tasks — and its creator, Peter Steinberger, was joining the company to “bring agents to everyone.” The deal was quiet by tech-acquisition standards. No press conference. No billion-dollar number dropped to gasps at a conference. Just a pair of tweets that, read carefully, amount to a civilizational declaration: the age of artificial agents — AI systems that act on your behalf, that do rather than merely say — has arrived.
The question no one in those tweets was asking is the one that ought to keep us up at night. Not what will AI agents do for us? But what will they do to us?
Agency in the age of AI is not, at its core, a technology question. It is a human one. And across law firms, accounting houses, actuarial desks, and the laptops of twenty-four-year-olds trying to build careers in knowledge work, the contours of that question are becoming impossible to ignore.
The Rise of Autonomous Agents — And the Hidden Cost to Human Agency
“Agentic AI” is the industry’s term of the moment, and it deserves a plain-language translation: these are AI systems that do not merely answer questions but complete tasks — booking travel, filing documents, auditing spreadsheets, drafting briefs, managing inboxes — with minimal human instruction and, in many configurations, minimal human oversight. OpenAI’s Frontier platform, launched in February 2026 and described as a home for “AI coworkers,” gives enterprises AI systems with shared context, persistent memory, and permissions to act inside live business workflows.
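Stripped of marketing, an “agent” is a control loop: the model proposes the next action, a harness executes it, and the observation feeds back in until the task is judged complete. The sketch below shows that loop in miniature; the model call is stubbed and the tool names are hypothetical, since this illustrates the general shape of agentic systems rather than the internals of OpenAI’s Frontier platform:

```python
# Minimal agentic loop: propose -> act -> observe, repeated until done.
# A real system puts an LLM behind propose_action(); here it is stubbed.

def propose_action(goal: str, history: list) -> dict:
    # Stub: a real agent asks the model to choose the next tool call,
    # conditioned on the goal and everything observed so far.
    if not history:
        return {"tool": "search_files", "arg": goal}
    return {"tool": "done", "arg": ""}

def run_tool(tool: str, arg: str) -> str:
    # Hypothetical tool registry; production harnesses wire these to
    # shells, browsers, editors, inboxes, and ticketing systems.
    tools = {"search_files": lambda a: f"3 files mention '{a}'"}
    return tools[tool](arg)

def run_agent(goal: str, max_steps: int = 8) -> list:
    history = []
    for _ in range(max_steps):
        action = propose_action(goal, history)
        if action["tool"] == "done":
            break
        observation = run_tool(action["tool"], action["arg"])
        history.append((action, observation))  # feeds the next decision
    return history

print(run_agent("Q3 audit anomalies"))
```

Note what the loop does not contain: a human. “Minimal oversight” is not a policy statement; it is the absence of a checkpoint inside the loop, which is why permissioning and audit trails loom so large in enterprise deployments.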
The promise is intoxicating. The average knowledge worker, Silicon Valley’s pitch goes, will soon command a small army of autonomous agents the way a senior partner commands junior associates. Scale your output. Compress your timelines. Democratize expertise.
What this narrative conspicuously omits is what happens to the junior associates.
The hidden cost of autonomous agents is not primarily economic, though the economic costs are real and arriving faster than most forecasts anticipated. It is something harder to quantify and easier to dismiss: the erosion of the conditions under which human agency develops, deepens, and compounds over a life. The young lawyer who never drafts her first clumsy brief. The accountant who never wrestles with his first gnarly audit. The actuary who never builds intuition through the friction of getting it wrong. Agency — the capacity to act, judge, and take meaningful initiative in the world — is not innate. It is cultivated. And the cultivation requires doing the hard, error-prone, occasionally humiliating work that AI agents are now absorbing at scale.
This is not a Luddite argument. It is a developmental one. And it is urgent.
Why Lawyers, Accountants, and Actuaries Are Questioning Their Futures
The conversation has broken into the open in the corridors of professional services with a candor that would have been unthinkable three years ago. Senior partners at major law firms will tell you, off the record, that they have paused or sharply curtailed junior associate hiring. The work that used to season young talent — contract review, discovery, due diligence — is being absorbed by AI agents with an efficiency that makes the economics of junior staffing almost impossible to justify.
The data corroborates what the corridors are whispering. Goldman Sachs Research reported in April 2026 that AI is erasing roughly 16,000 net U.S. jobs per month — approximately 25,000 displaced by AI substitution against 9,000 new positions created by AI augmentation. The occupations most exposed to substitution, Goldman’s economists found, include accountants and auditors, legal and administrative assistants, credit analysts, and telemarketers: precisely the entry-level and mid-career roles that have historically served as the scaffolding of professional development.
The generational impact is particularly sharp. Goldman Sachs found that unemployment among 20- to 30-year-olds in AI-exposed occupations has risen by nearly three percentage points since the start of 2025 — significantly higher than for older workers in the same fields. Entry-level hiring at the top fifteen technology companies fell 25 percent between 2023 and 2024, and continued declining through 2025. The AI-related share of layoffs discussed on S&P 500 earnings calls grew to just above 15 percent by late 2025, up sharply from the year prior.
The career advice for young professionals navigating the AI age in 2026 used to be: develop technical skills, stay adaptable, embrace tools. That advice, while still valid, has become insufficient. What young professionals now face is a more fundamental disruption: the removal of the proving grounds where professional judgment is forged. You cannot develop the discernment of a seasoned litigator if the briefs are always already written. You cannot build the instincts of a skilled auditor if the anomalies are always already flagged.
The global picture adds further texture. In Southeast Asia, AI agents are replacing jobs in BPO (business process outsourcing)—a sector employing millions across the Philippines, India, and Vietnam—compressing opportunities for a generation that had, through those very jobs, entered the formal economy and begun building transferable skills. In sub-Saharan Africa, where formal professional employment is expanding and could absorb more talent, the risk is that AI-agent adoption by multinationals short-circuits the very job categories through which that transition happens. The AI agents replacing lawyers, accountants, and junior professionals in New York and London do not stay politely within American and European borders.
Pew’s 2025–2026 Data: Americans Demand More Control Over AI
The public has registered its discomfort — clearly, consistently, and in terms that policymakers should find impossible to dismiss.
Pew Research Center’s June 2025 survey of 5,023 U.S. adults found that 50 percent say the increased use of AI in daily life makes them feel more concerned than excited — up from 37 percent in 2021. More than half of respondents (57 percent) rated the societal risks of AI as high, against just 25 percent who say the benefits are similarly high. Majorities reported pessimism about AI’s impact on human creativity (53 percent say it will worsen people’s ability to think creatively) and meaningful relationships (50 percent say it will worsen our capacity to form them).
These are not the views of technophobes. They are the views of citizens watching something happen to their world and struggling to articulate, against the momentum of trillion-dollar valuations and breathless press coverage, what exactly it is they are losing.
The Pew data on control is the most politically significant finding of recent years. Fifty-five percent of U.S. adults say they want more control over how AI is used in their own lives. Among AI experts themselves — people who have built careers in the field — the figure is 57 percent. The demand for human agency in the AI era is not a fringe sentiment or a technophobic reflex. It crosses partisan lines, educational levels, and even the expert-layperson divide. What is remarkable is how little the policy architecture of any major government has responded to it.
In Europe, the EU AI Act has established a framework, but its enforcement mechanisms remain nascent and its treatment of agentic systems is notably underdeveloped for a technology moving at this pace. In the United States, the legislative response has been fragmented, hampered by a political environment in which AI has become entangled with culture-war dynamics that obscure rather than illuminate the actual governance questions. In China, regulatory assertiveness on AI coexists with state-directed deployment that raises its own agency concerns — for the individual citizen, not the system.
The gap between what people want — more control, more say, more human agency in the AI era — and what institutions are delivering is widening. It is into this gap that the next generation of social innovators, philanthropists, and policymakers must step.
Philanthropy’s Critical Role in Shaping AI Guardrails and Opportunity
Here is where the story gets interesting — and where institutional funders, foundations, and philanthropic capital have a genuinely historic role to play that they have, with a handful of exceptions, yet to fully embrace.
The governance of AI — particularly of agentic AI systems acting autonomously in high-stakes domains — cannot be left to the companies building it, to legislators who struggle to define a “large language model” without staff assistance, or to the uncoordinated preferences of individual consumers. The OECD and the World Economic Forum have outlined frameworks, but frameworks without funding are architectural drawings without builders.
The intersection of philanthropy and AI governance has become one of the most consequential and underfunded in public life. The MacArthur Foundation, Ford Foundation, and a handful of tech-originated donors (Omidyar Network, Schmidt Futures) have begun investing in responsible AI research and policy. But the scale of investment remains dramatically misaligned with the scale of the disruption underway. According to the Brookings Institution, the communities most exposed to AI displacement — lower-income workers, first-generation professionals, workers in routine cognitive roles — are precisely those with the least access to reskilling resources, legal literacy about their rights, and political power to shape the governance conversation.
Philanthropic capital can address this at multiple levels. First, funding public dialogue: creating the forums, commissions, and civic processes through which communities can articulate what they want from AI and what they will not accept — the kind of deliberative democracy that corporate AI development timelines do not organically produce. Second, building ethical guardrails: supporting independent technical audits of AI agent systems, especially those deployed in high-stakes contexts like hiring, credit, legal aid, and healthcare. Third, investing aggressively in reskilling: not the corporate upskilling programs that optimize for the needs of existing employers, but the genuinely human-centered education investments that give people the capacity to navigate a changed economy on their own terms. Fourth, and most visibly, creating opportunity for young people — the generation that stands to be most directly affected by the removal of the proving grounds of professional learning.
The philanthropic AI governance opportunity is not about slowing innovation. It is about ensuring that the benefits of innovation are not captured exclusively by those who already own the infrastructure, while the costs — in disrupted careers, eroded agency, and stunted development — are borne by everyone else.
Reclaiming Agency: What Young People, Leaders, and Funders Must Do Now
The future of human agency in the AI era will not be decided in Palo Alto. It will be decided in classrooms, in courtrooms, in legislative chambers, in the boardrooms of foundations, and in the daily choices of individuals about which tasks they hand to machines and which they insist on doing themselves — not because machines cannot do them, but because the doing is the point.
For young professionals — the generation building careers in the AI age of 2026 — the imperative is not to compete with AI agents on their own terms. That is a race designed for machines. The imperative is to cultivate what agents cannot: moral judgment, relational intelligence, contextual wisdom, creative vision, the capacity to care about what you’re doing and why. These are not soft skills. They are the hardest skills. They compound over a lifetime in ways that no model weight or token count does. Protect your learning curve fiercely. Seek out the friction that develops judgment. Resist the temptation to outsource your thinking to systems that are, however impressive, fundamentally indifferent to your growth.
For leaders — in business, government, education, and civil society — the reclamation of agency requires building institutions that are honest about trade-offs. Does AI erode human agency? In its current deployment trajectory: yes, in specific and important ways. The right response is not panic, and it is not denial. It is design. Invest in human-AI collaboration frameworks that genuinely keep humans in the loop, not as a compliance formality but as a developmental reality. Design apprenticeship and mentorship structures that survive the automation of the tasks around which they were traditionally built. Insist on AI impact assessments before deploying agentic systems in professional and educational contexts. Make the question of human development central to every AI deployment decision, not an afterthought.
For funders: this is the decade. The governance architecture being built — or not built — around agentic AI will shape the relationship between human agency and technological systems for a generation. The window for influence is not permanently open. Foundations that move early, with real capital and genuine intellectual seriousness, can help write the rules. Foundations that wait will be left funding the repair.
The global dimension matters here, too. The most consequential AI governance battles of the next decade may not be fought in Washington or Brussels, but in the Global South — in countries where the intersection of demographic youth, expanding educational access, and AI-driven disruption of professional labor markets creates conditions for either extraordinary opportunity or extraordinary waste of human potential. Philanthropic AI governance that ignores Lagos, Jakarta, and São Paulo is not global governance. It is just wealthy-country governance wearing a global mask.
The story Silicon Valley is telling about the age of AI is seductive and, in many of its details, accurate. Autonomous agents will transform professional life. Productivity will rise. Some categories of work will disappear and others will emerge. The arc, the industry insists, bends toward abundance.
What the story omits is the quality of the lives lived along that arc. The lawyer who never argued. The accountant who never judged. The twenty-three-year-old who handed her first decade of professional development to a system that learned everything and taught her nothing.
Agency in the age of AI is not a footnote to the productivity story. It is the story that matters most.
Two tweets launched the age of agentic AI. What we do next — in philanthropy, in policy, in education, in the daily texture of our professional and personal choices — will determine whether this age expands or diminishes what it means to be a capable, purposeful human being.
The question is not what AI agents will do for us. The question is what kind of agents we will choose to become.
Is Anthropic Protecting the Internet — or Its Own Empire?
Anthropic Mythos, the most powerful AI model any lab has ever disclosed, arrived this week draped in the language of altruism. Project Glasswing — the initiative through which a curated circle of Silicon Valley aristocrats gains exclusive access to Mythos — is pitched as an act of civilizational defense. The framing is elegant, the mission is genuinely urgent, and at least part of it is true. But behind the Mythos AI release lies a second story that Dario Amodei’s beautifully worded blog posts conspicuously omit: Mythos is enterprise-only not merely because Anthropic fears hackers, but because releasing it to the open internet would trigger the single greatest act of industrial-scale capability theft in the history of technology. The cybersecurity rationale is real. The economic motive is realer still. Understanding both is how you understand the AI industry in 2026.
What Anthropic Mythos Actually Does — and Why It Terrified Silicon Valley
To appreciate the gatekeeping, you must first reckon with the capability. Mythos is not an incremental model. It occupies an entirely new tier in Anthropic’s architecture — internally designated Copybara — sitting above the public Haiku, Sonnet, and Opus hierarchy that most developers work with. SecurityWeek’s detailed technical breakdown describes it as a step change so pronounced that calling it an “upgrade” is like calling the internet an “improvement” on the fax machine.
The numbers are staggering. Anthropic’s own Frontier Red Team blog reports that Mythos autonomously reproduced known vulnerabilities and generated working proof-of-concept exploits on its very first attempt in 83.1% of cases. Its predecessor, Opus 4.6, managed that feat almost never — near-0% success rates on autonomous exploit development. Engineers with zero formal security training now tell colleagues of waking up to complete, working exploits they’d asked the model to develop overnight, entirely without intervention. One test revealed a 27-year-old bug lurking inside OpenBSD — an operating system historically celebrated for its security — that would allow any attacker to remotely crash any machine running it. Axios reported that Mythos found bugs in every major operating system and every major web browser, and that its Linux kernel analysis produced a chain of vulnerabilities that, strung together autonomously, would hand an attacker complete root control of any Linux system.
Compare that to Opus 4.6, which found roughly 500 zero-days in open-source software — itself a remarkable achievement. Mythos found thousands in a matter of weeks. It then attempted to exploit Firefox’s JavaScript engine and succeeded 181 times, compared to twice for Opus 4.6.
This is also, importantly, what a comparison between Claude Mythos and open-source security models looks like at full resolution: no freely available model comes remotely close, and Anthropic knows it. That gap is the entire product.
The Official Narrative: “We’re Protecting the Internet”
Anthropic’s enterprise-only release decision is framed through Project Glasswing as a coordinated defensive effort — an attempt to patch the world’s most critical software before capability equivalents proliferate to hostile actors. Anthropic’s official Glasswing page commits $100 million in usage credits and $4 million in direct donations to open-source security organizations, with founding partners that read like a geopolitical alliance: Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, and Palo Alto Networks. Roughly 40 additional organizations maintaining critical software infrastructure also gain access. The initiative’s name — Glasswing, after a butterfly whose transparency makes it nearly invisible — is a metaphor for software vulnerabilities that hide in plain sight.
The security rationale for restricting Mythos is not confected. In September 2025, a Chinese state-sponsored threat actor used earlier Claude models in what SecurityWeek documented as the first confirmed AI-orchestrated cyber espionage campaign — not merely using AI as an advisor but deploying it agentically to execute attacks against roughly 30 organizations. If that was possible with Claude’s then-current models, what becomes possible with a model that autonomously chains Linux kernel exploits at a near-perfect success rate?
Anthropic’s Logan Graham, head of the Frontier Red Team, captured the threat succinctly: imagine this level of capability in the hands of Iran in a hot war, or Russia as it attempts to degrade Ukrainian infrastructure. That is not science fiction. It is the calculus driving the controlled release. Briefings to CISA, the Commerce Department, and the Center for AI Standards and Innovation are real, however conspicuously absent the Pentagon remains from those conversations — a pointed omission given Anthropic’s ongoing legal war with the Defense Department over its blacklisting.
So yes: the security case is genuine. But it is, at most, half the story.
The Distillation Flywheel: Why Frontier Labs Are Really Gating Their Best Models
Here is the economic argument that no TechCrunch brief or Bloomberg data point has assembled cleanly: adversarial distillation of Anthropic’s models is an existential threat to the frontier-lab business model, and Mythos is as much a response to that threat as it is a cybersecurity initiative.
The mathematics of adversarial distillation are brutally asymmetric. Training a frontier model costs approximately $1 billion in compute. Successfully distilling it into a competitive student model costs an adversary somewhere between $100,000 and $200,000 — a cost advantage of at least 5,000 to one in favor of the copier. No rate-limiting policy, no terms-of-service clause, and no click-through agreement closes that gap. The only defense is controlling access to the teacher in the first place.
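The asymmetry is worth working through explicitly, using the article’s round numbers (both of which are estimates rather than audited figures):

```python
# The article's round numbers; estimates, not audited figures.
teacher_training_cost = 1_000_000_000          # ~$1B frontier training run
distill_low, distill_high = 100_000, 200_000   # cost to distill a student

print(teacher_training_cost // distill_high)   # 5000  (conservative end)
print(teacher_training_cost // distill_low)    # 10000 (aggressive end)
```

Even at the conservative end, the copier spends one dollar for every five thousand the originator spent. That ratio, more than any single lawsuit, is why access control is the only defense that scales.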
Blocking distillation is not a new concern for frontier labs, but 2026 has given it terrifying specificity. Anthropic publicly disclosed in February that three Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. MiniMax alone accounted for 13 million of those exchanges; Moonshot AI added 3.4 million; DeepSeek, notably, needed only 150,000 because it was targeting something far more specific: how Claude refuses things — alignment behavior, policy-sensitive responses, the invisible architecture of safety. A stripped copy of a frontier model without its alignment training, deployed at nation-state scale for disinformation or surveillance, is the nightmare scenario that animated Anthropic’s founding. It may now be unfolding in real time.
What does this have to do with Mythos being enterprise-only? Everything. A model that autonomously writes working exploits for every major OS would, if released via standard API access, provide Chinese distillation campaigns with not just conversational capability but offensive cyber capability — the very thing that makes Mythos commercially unique. Releasing Mythos at scale would be, simultaneously, the greatest act of market self-destruction and the greatest gift to adversarial state actors in the history of enterprise software. Enterprise-only access eliminates both risks at once: it monetizes the capability at maximum margin while denying it to the distillation ecosystem.
This is the distillation flywheel in action. Frontier labs gate the highest-capability models behind enterprise contracts; enterprises pay premium rates for exclusive capability access; the revenue funds the next generation of training runs; the new model is again too powerful to release openly. Each rotation of the wheel deepens the competitive moat, raises the enterprise price floor, and tightens the grip of the three dominant labs over the global AI stack.
Geopolitics at the Model Layer: The Three-Lab Alliance and the New AI Cold War
The Mythos security exploits announcement arrived within 24 hours of a Bloomberg-reported development that is arguably more consequential for the global technology order: OpenAI, Anthropic, and Google — three companies that have spent the better part of three years competing to annihilate each other — began sharing adversarial distillation intelligence through the Frontier Model Forum. The cooperation, modeled on how cybersecurity firms exchange threat data, represents the first substantive operational use of the Forum since its 2023 founding.
The breakdown of what each Chinese lab extracted from Claude reveals something remarkable: three entirely different product strategies, fingerprinted through their query patterns. MiniMax vacuumed broadly — generalist capability extraction at scale. Moonshot AI targeted the exact agentic reasoning and computer-use stack that its Kimi product has been marketing since late 2025. DeepSeek, with a comparatively tiny 150,000-exchange footprint, was almost exclusively interested in Claude’s alignment layer — how it handles policy-sensitive queries, how it refuses, how it behaves at the edges. Each lab was essentially reverse-engineering not just a model but a business plan.
The MIT research documented in December 2025 found that GLM-series models identify themselves as Claude approximately half the time when queried through certain paths — behavioral residue of distillation that no fine-tuning has fully scrubbed. US officials estimate the financial toll of this campaign in the billions annually. The Trump administration’s AI Action Plan has already called for a formal inter-industry sharing center, essentially institutionalizing what the labs are now doing informally.
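The “identifies itself as Claude” finding implies a simple measurement: ask the suspect model who it is, many times and in many phrasings, and count the self-identifications. Here is a toy reconstruction of such a probe; the model client is stubbed with invented 50/50 behavior, and this is an assumed sketch of the method, not the MIT team’s actual harness:

```python
import random
import re

# Stub standing in for the model under test; a real probe would call the
# deployed API. The 50/50 behavior here is invented for illustration.
def model_reply(prompt: str) -> str:
    return random.choice(
        ["I am Claude, an AI assistant made by Anthropic.",
         "I'm GLM, a language model developed by Zhipu AI."]
    )

probes = [
    "Who are you?",
    "What model am I speaking with?",
    "State your name and your developer.",
] * 100  # repeat to estimate a rate rather than a one-off answer

hits = sum(bool(re.search(r"\bclaude\b", model_reply(p), re.I)) for p in probes)
print(f"self-identified as Claude in {hits}/{len(probes)} probes "
      f"({hits / len(probes):.0%})")
```

Behavioral residue of this kind is weak evidence on its own, since models trained on public web text absorb one another’s names constantly, which is why the query-pattern fingerprints described above matter more than any single probe.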
The geopolitical stakes here extend far beyond corporate IP. When DeepSeek released its R1 model in January 2025 — a model widely believed to incorporate distilled knowledge from OpenAI’s infrastructure — it erased nearly $1 trillion from US and European tech stocks in a single trading session. Markets now understand something that policymakers are only beginning to grasp: control over frontier AI model capabilities is a form of strategic leverage, and distillation is a vector for transferring that leverage without a single line of export-controlled chip silicon crossing a border.
Enterprise Contracts and the New AI Treadmill
The economics of Anthropic’s enterprise-only strategy are becoming increasingly clear as 2026 revenue data enters the public domain.
| Metric | February 2026 | April 2026 |
|---|---|---|
| Anthropic Run-Rate Revenue | $14B | $30B+ |
| Enterprise Share of Revenue | ~80% | ~80% |
| Customers Spending $1M+ Annually | 500 | 1,000+ |
| Claude Code Run-Rate Revenue | $2.5B | Growing rapidly |
| Anthropic Valuation | $380B | ~$500B+ (IPO target) |
| OpenAI Run-Rate Revenue | ~$20B | ~$24–25B |
Sources: CNBC, Anthropic Series G announcement, Sacra
Anthropic’s annualized revenue has now surpassed $30 billion — having started 2025 at roughly $1 billion — representing one of the most dramatic B2B revenue trajectories in the history of enterprise software. Sacra estimates that 80% of that revenue flows from business clients, with enterprise API consumption and reserved-capacity contracts forming the structural backbone. Eight of the Fortune 10 are now Claude customers. Four percent of all public GitHub commits are now authored by Claude Code.
What Project Glasswing does, in this context, is elegant: it creates a new category of enterprise relationship — not API access, not subscription, but strategic partnership with a frontier safety lab deploying the world’s most capable unrestricted model. The 40 organizations in the Glasswing program are not merely beta testers. They are, from a revenue architecture standpoint, being trained — habituated to Mythos-class capability before it becomes generally available, embedded in their security workflows, their CI/CD pipelines, their vulnerability management systems. By the time Mythos-class models are released at scale with appropriate safeguards, the switching cost will be prohibitive.
This is the AI treadmill: each generation of frontier capability, released exclusively to enterprise partners first, creates a loyalty layer that commoditized open-source alternatives cannot easily displace. The $100 million in Glasswing credits is not charity. It is customer acquisition at an unprecedented model tier.
The Counter-View: Responsible Deployment Has a Principled Case
It would be intellectually dishonest to leave the distillation-flywheel critique standing without challenge. The counter-argument is real, and it deserves full articulation.
Platformer’s analysis makes the most compelling version of the responsible-rollout defense: Anthropic’s founding premise was that a safety-focused lab should be the first to encounter the most dangerous capabilities, so it could lead mitigation rather than react to catastrophe. With Mythos, that appears to be exactly what is happening. The company did not race to monetize these cybersecurity capabilities. It briefed government agencies, convened a defensive consortium, committed $4 million to open-source security projects, and staged rollout behind a coordinated patching effort. The vulnerabilities Mythos found in Firefox, Linux, and OpenBSD are being disclosed and patched before the paper trail of their discovery becomes public — precisely the protocol that responsible security research demands.
Alex Stamos, whose expertise in adversarial security spans decades, offered the optimistic framing: if Mythos represents being “one step past human capabilities,” there is a finite pool of ancient flaws that can now be systematically found and fixed, potentially producing software infrastructure more fundamentally secure than anything achievable through traditional auditing. That is not corporate spin. It is a coherent theory of defensive AI benefit.
The Mythos AI release strategy also reflects a genuinely novel regulatory challenge: the EU AI Act’s next enforcement phase takes effect August 2, 2026, introducing incident-reporting obligations and penalties of up to 3% of global revenue for high-risk AI systems. A general release of Mythos into that environment — without governance infrastructure in place — would be commercially catastrophic as well as potentially harmful. Enterprise-gated release buys time for both the regulatory and technical scaffolding to mature.
What Regulators and Open-Source Advocates Must Do Next
The policy implications of Anthropic Mythos extend far beyond one company’s release strategy. They illuminate a structural shift in how frontier AI capability is being distributed — and by whom, and to whom.
For regulators, the Glasswing model raises questions that existing frameworks cannot answer. If a private company now possesses working zero-day exploits for virtually every major software system on earth — as Kelsey Piper pointedly observed — what obligations of disclosure and oversight apply? The fact that Anthropic is briefing CISA and the Center for AI Standards and Innovation is encouraging, but voluntary briefings are not governance. The EU’s AI Act and the US AI Action Plan both need explicit provisions covering what happens when a commercially controlled lab becomes the de facto custodian of the world’s most significant vulnerability database.
For open-source advocates, the distillation dynamic poses an existential dilemma. The same economic logic that drives labs to gate Mythos also drives them to resist open-weights releases of any model that approaches frontier capability. The three-lab alliance against Chinese distillation is, viewed from a certain angle, also an alliance against open-source proliferation of frontier capability — regardless of the nationality of the developer doing the distilling. Open-source foundations, university research labs, and sovereign AI initiatives in Europe, the Middle East, and South Asia should be pressing hard for access frameworks that allow defensive cybersecurity use of frontier capability without being filtered through the commercial relationships of Silicon Valley.
For enterprise decision-makers, the message is unambiguous: the organizations that embed Mythos-class capability into their vulnerability management workflows now will hold a structural security advantage — measured in patch latency and zero-day coverage — over those that wait for open-source equivalents. But that advantage comes with dependency on a single private entity whose political entanglements, from Pentagon disputes to Chinese state-actor confrontations, introduce supply-chain risks that no CISO should ignore.
Anthropic may well be protecting the internet. It is certainly protecting its empire. In 2026, those two imperatives have become so entangled that distinguishing them may be the most important work left for anyone who cares about who controls the infrastructure of the digital world.
Anthropic Rolls Out Its Most Powerful Cyber AI Model — Days After Leaking Its Own Source Code
The launch of Claude Mythos Preview and Project Glasswing, mere days after Anthropic accidentally exposed 512,000 lines of its core product’s source code to the world, is either the most audacious act of strategic redirection in Silicon Valley history — or the most revealing window yet into the contradictions at the heart of frontier AI development.
There is a particular species of Silicon Valley irony that only manifests at the very frontier of technological ambition. On March 31st, 2026, an Anthropic employee made a mistake so elementary it would embarrass a first-year computer science undergraduate: a debug source map file was accidentally bundled into a public software release, pointing to a cloud-hosted archive of the company’s most commercially prized product — the source code of Claude Code, its flagship agentic coding assistant. Within hours, 512,000 lines of proprietary TypeScript code, across 1,906 files, were mirrored, forked, and torrent-distributed across the internet, never to be recalled. The repository on GitHub was forked more than 41,500 times before Anthropic could blink. Then, seven days later, Anthropic announced the most capable AI model it has ever built — a cybersecurity behemoth called Claude Mythos Preview — and launched Project Glasswing, a sweeping initiative to secure the world’s critical digital infrastructure. The company publicly described it as a watershed for global security. A watching world could be forgiven for raising an eyebrow.
History rarely serves up irony quite this rich. The firm that accidentally handed a blueprint of its proprietary agent harness to thousands of developers, threat actors, and competitors — the firm that inadvertently revealed the internal codename of its most powerful unreleased model buried in that same code — emerged days later as the standard-bearer for a new era of AI-powered cyber defence. It is, depending on your interpretation, either a masterclass in narrative control or a deeply unsettling indicator of the structural tensions now embedded in the development of frontier AI.
I. A Double Embarrassment: The Anatomy of the Leak
The facts of the Anthropic source code leak are simultaneously mundane and extraordinary. On the morning of March 31st, 2026, Anthropic pushed version 2.1.88 of its @anthropic-ai/claude-code package to the npm public registry. Buried inside was a 59.8-megabyte JavaScript source map file — a developer debugging tool that, when followed to its reference URL on Anthropic’s own Cloudflare R2 storage bucket, yielded a downloadable zip archive of the complete, unobfuscated TypeScript source for Claude Code.
Security researcher Chaofan Shou, an intern at Solayer Labs, spotted the exposure at 4:23 AM Eastern and posted a direct download link on X. It was, as The Register reported, “a mistake as bad as leaving a map file in a publish configuration” — a single misconfigured .npmignore field. A known bug in Bun, the JavaScript runtime Anthropic had acquired in late 2025, had been causing source maps to ship in production builds for twenty days before the incident. Nobody caught it.
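For developers, the failure mode is worth spelling out: npm publishes whatever the package’s files field and .npmignore let through, and a source map can embed or point back to the complete original sources. The sketch below is a generic pre-publish guard that would have flagged the mistake; it is ordinary npm hygiene, not Anthropic’s actual pipeline, and it assumes npm is available on PATH (npm pack --dry-run --json lists the files a publish would ship, without shipping them):

```python
# Pre-publish guard: refuse to release if a source map would ship.
# Generic npm hygiene, not Anthropic's actual build pipeline.
import json
import subprocess
import sys

out = subprocess.run(
    ["npm", "pack", "--dry-run", "--json"],  # lists files without publishing
    capture_output=True, text=True, check=True,
).stdout

files = [f["path"] for pkg in json.loads(out) for f in pkg["files"]]
leaks = [p for p in files if p.endswith(".map")]

if leaks:
    sys.exit(f"refusing to publish; source maps in package: {leaks}")
print(f"ok: {len(files)} files, no source maps")
```

Run as a CI step before npm publish, a check like this turns a silent twenty-day exposure into a failed build.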
This was, in fact, the second major accidental disclosure of the month. Days earlier, Fortune had reported on a separate leak of nearly 3,000 files from a misconfigured content management system — including a draft blog post characterizing a forthcoming model as “by far the most powerful AI model” Anthropic had ever developed. That model’s codename: Mythos — also referenced internally, apparently, as Capybara.
The March–April 2026 Anthropic Disclosure Timeline
| Date | Event |
|---|---|
| ~Late March 2026 | Fortune reports on ~3,000 leaked CMS files; first public confirmation of the Mythos model’s existence and capabilities. |
| March 31, 2026 | Claude Code v2.1.88 ships to npm with embedded source map; 512,000 lines of TypeScript exposed within hours. GitHub repository forked 41,500+ times. |
| March 31 – April 6 | Anthropic issues DMCA takedowns; threat actors seed trojanized forks with backdoors and cryptominers. A supply-chain attack on the axios npm package occurs simultaneously. |
| April 7, 2026 | Anthropic officially announces Claude Mythos Preview and Project Glasswing. Partners include Apple, Microsoft, Google, Amazon, JPMorgan Chase, and others. |
What the leaked source revealed was considerable: 44 hidden feature flags for unshipped capabilities, a sophisticated three-layer memory architecture, the internal orchestration logic for autonomous “daemon mode” background agents, and — critically — confirmation that a model called Capybara was actively being readied for launch. The VentureBeat analysis noted that Claude Code had achieved an annualised recurring revenue run rate of $2.5 billion by March 2026, making the intellectual property exposure a genuinely material event for a company preparing to go public.
II. Claude Mythos Preview and Project Glasswing: A Technical Step-Change
To understand why the timing of the Mythos announcement matters, one must first grasp the scale of what Anthropic is claiming. Claude Mythos Preview is not a marginal improvement on its predecessors. It occupies, in Anthropic’s internal taxonomy, a fourth tier entirely above the existing Haiku–Sonnet–Opus range — a tier the company internally designates “Copybara.” According to SecurityWeek, it represents “not an incremental improvement but a step change in performance.”
The headline claim is breathtaking in its scope. In the weeks prior to the public announcement, Anthropic ran Mythos against real open-source codebases and, according to its own Project Glasswing announcement, the model identified thousands of zero-day vulnerabilities — flaws previously unknown to software maintainers — across every major operating system and every major web browser. The oldest vulnerability it uncovered was a 27-year-old bug in OpenBSD, a system famous for its security record. A 16-year-old flaw in video processing software survived five million automated test attempts before Mythos found it in a matter of hours. The model autonomously chained together a series of Linux kernel vulnerabilities into a privilege escalation exploit — the kind of attack chain that would previously have required a sophisticated, nation-state-grade human research team.
A single AI agent could scan for vulnerabilities and potentially take advantage of them faster and more persistently than hundreds of human hackers — and similar capabilities will be available across the industry in as little as six months.
The Axios reporting on the rollout puts the dual-use risk with uncomfortable clarity: Mythos is “extremely autonomous” and possesses the reasoning capabilities of an advanced security researcher, capable of finding “tens of thousands of vulnerabilities” that even elite human bug hunters would miss. This is precisely why Anthropic chose not to release it publicly. Instead, Project Glasswing gives curated preview access to 40-plus organisations responsible for critical software infrastructure — including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks — backed by up to $100 million in usage credits and $4 million in direct donations to open-source security organisations including the Apache Software Foundation and OpenSSF.
The model is not cybersecurity-specific. CNBC noted that Mythos’s cyber prowess is a downstream consequence of its exceptional general-purpose coding and reasoning capabilities — a distinction with profound regulatory implications. You cannot restrict a model trained to think brilliantly about code from thinking brilliantly about vulnerabilities in that code.
III. The Deeper Meaning: Irony, Competence, and the New Security Paradigm
The central paradox demands direct engagement: Anthropic, a company whose founding proposition is responsible AI development, leaked its own product’s source code through a packaging error so elementary it required no sophistication to exploit. It then, within the same news cycle, announced an AI model so powerful its own CEO fears its public release — and positioned itself as the primary steward of global cyber defence. One is entitled to hold both thoughts simultaneously.
And yet the strategic coherence of the Mythos launch, viewed against the backdrop of the leak, is hard to dismiss entirely. Anthropic did not choose the timing. The Mythos project had been in development and partner testing for weeks before the Claude Code source code escaped its containment. But the company, having already suffered the reputational bruise of one accidental exposure too many, had an imperative to seize the narrative — to move from embarrassed leaker to principled guardian, rapidly. The result is a masterclass in what crisis communications professionals call “agenda replacement.”
The deeper issue, however, is structural and it transcends any single company. The Axios assessment is stark: Mythos is “the first AI model that officials believe is capable of bringing down a Fortune 100 company, crippling swaths of the internet or penetrating vital national defense systems.” Meanwhile, the head of Anthropic’s frontier red team, Logan Graham, told multiple outlets that comparable capabilities will be in the hands of the broader AI industry within six to eighteen months — from every nation with frontier ambitions, not just the United States. The window for getting ahead of this threat is not a decade. It is, at most, a year.
What the Mythos launch crystallises is a principle that the cybersecurity community has long understood but that corporate AI leaders and policymakers have been reluctant to internalise: the same model property that makes an AI system valuable for defence makes it catastrophically useful for offence. The technical writeup on Anthropic’s red team blog makes this explicit. Mythos can “reverse-engineer exploits on closed-source software” and turn known-but-unpatched vulnerabilities into working exploits. Gadi Evron, founder of AI security firm Knostic, told CNN that “attack capabilities are available to attackers and defenders both, and defenders must use them if they’re to keep up.” There is no asymmetry available — only the question of who moves first.
IV. The Geopolitical and Regulatory Reckoning
The implications of Anthropic Mythos extend well beyond corporate strategy. The U.S.-China AI competition has already entered the domain of active cyber operations. A Chinese state-sponsored group, as Fortune reported, used an earlier Claude model to target approximately 30 organisations in a coordinated espionage campaign before Anthropic detected and curtailed the activity. If a Claude model that predates Mythos by several capability generations was sufficient to mount a significant intelligence operation, the implications of Mythos-class capability in hostile hands are genuinely alarming.
A source briefed on Mythos told Axios: “An enemy could reach out and touch us in a way they can’t or won’t with kinetic operations. For most Americans, a conventional conflict is ‘over there.’ With a cyberattack, it’s right here.” This framing matters. The doctrine of nuclear deterrence rested partly on the difficulty of acquisition. The doctrine of cyber deterrence in the Mythos era rests on nothing — the marginal cost of deploying AI-accelerated attack capability approaches zero for any state or non-state actor with API access to a comparable model.
Anthropic’s relationship with Washington is, to put it diplomatically, complicated. The company is simultaneously briefing the Cybersecurity and Infrastructure Security Agency, the Commerce Department, and senior officials across the federal government on Mythos’s capabilities — while locked in active litigation with the Pentagon, which has labelled Anthropic a supply-chain risk following the company’s refusal to permit autonomous targeting or battlefield surveillance applications. The AI safety firm that declined to arm American drones is now, in the same breath, offering American critical infrastructure a first-mover advantage against AI-powered adversaries. The philosophical coherence of this position is defensible; its political navigation will be considerably harder.
For regulators, the Mythos announcement poses a question for which existing frameworks have no satisfying answer. The EU AI Act’s tiered risk classifications were not designed for a model that is simultaneously a breakthrough productivity tool, a national security asset, and a potential weapon of mass cyber-disruption. The Project Glasswing model — voluntary, industry-led, access-gated — is a plausible short-term mechanism. It is not a durable regulatory framework. And as Logan Graham made clear, the window before other frontier labs — and the Chinese state — reach comparable capability is measured in months, not years.
V. Verdict: A Reckoning Dressed as a Launch
Editorial Assessment
The Mythos announcement is not primarily a product launch. It is a reckoning — one that Anthropic has had the narrative dexterity to package as a strategic initiative rather than a confession. The source code leak was, at the level of operational security, an embarrassment of the first order. But it was also, unintentionally, a proof of concept for the vulnerability landscape that Mythos was built to address. Anthropic’s own systems failed a test far simpler than any that Mythos could conceivably pose to a determined adversary.
That irony is not merely cosmetic. It is instructive. No organisation — not even a frontier AI lab whose entire value proposition rests on the responsible management of powerful systems — is immune to the mundane failure modes of human error, toolchain misconfiguration, and the accumulated technical debt of moving too fast. The question is not whether Anthropic can be trusted with Mythos. The question is whether any institution, in any country, is structurally capable of managing the governance of AI capabilities that are advancing faster than the legal and regulatory architectures designed to contain them.
Dario Amodei framed the Project Glasswing rollout as an opportunity to “create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities.” This is not rhetorical excess. It is, technically, accurate: the same capability that can chain together a 27-year-old kernel vulnerability into a privilege escalation exploit can, in the hands of defenders, systematically eliminate such vulnerabilities from the world’s most important software. The question is not whether this technology is transformative. It is whether the institutional infrastructure required to ensure that transformation benefits defenders more than attackers can be assembled in the time available.
Six months. Eighteen at the outside. That is the horizon Logan Graham has placed on the proliferation of Mythos-class capabilities across the industry. The global financial cost of cybercrime already runs to an estimated $500 billion annually, a figure that was compiled before any model approached Mythos’s level of autonomous vulnerability discovery. Policymakers in Washington, Brussels, and Beijing who are not currently treating this as an emergency are, as one source briefed on Mythos told Axios with commendable directness, “not remotely ready.”
Anthropic rolled out its most powerful cyber AI model days after leaking its own source code. The irony is real. So is the threat. And so, potentially, is the opportunity — if the institutions responsible for governing it can move at the speed the technology demands, rather than the speed at which governments customarily prefer to operate. History suggests that gap will be considerable. The Mythos timeline suggests that gap may, for once, be decisive.