The Price of Algorithmic War: How AI Became the New Dynamite in the Middle East
The Iran conflict has turned frontier AI models into contested weapons of state — and the financial and human fallout is only beginning to register.
In the first eleven days of the U.S.-Israeli offensive against Iran, which began on February 28, 2026, American and Israeli forces executed roughly 5,500 strikes on Iranian targets. That is an operational tempo that would have required months in any previous conflict — made possible, in significant part, by artificial intelligence, deployed on a battlefield at this scale for the first time. The National The same week those bombs fell, a legal and commercial crisis erupted in Silicon Valley with consequences that will define the AI industry for years. Both events are part of the same story.
We are living through the moment when AI ceased being a future-war thought experiment and became an operational reality — embedded in targeting pipelines, shaping intelligence assessments, and now at the center of a constitutional showdown between a frontier AI company and the United States government. Alfred Nobel, who invented dynamite and then spent the remainder of his life in tortured ambivalence about it, would have recognized the pattern immediately.
The Kill Chain, Accelerated
The joint U.S. and Israeli offensive on Iran revealed how algorithm-based targeting and data-driven intelligence are transforming the mechanics of warfare. In the first twelve hours alone, U.S. and Israeli forces reportedly carried out nearly 900 strikes on Iranian targets — an operational tempo that would have taken days or even weeks in earlier conflicts. Interesting Engineering
At the technological center of this acceleration sits a system most Americans have never heard of: Project Maven. Anthropic’s Claude has become a crucial component of Palantir’s Maven intelligence analysis program, which was also used in the U.S. operation to capture Venezuelan President Nicolás Maduro. Claude is used to help military analysts sort through intelligence and does not directly provide targeting advice, according to a person with knowledge of Anthropic’s work with the Defense Department. NBC News This is a distinction with genuine moral weight — between decision-support and decision-making — but one that is becoming harder to sustain at the speed at which modern targeting now operates.
Critics warn that this trend could compress decision timelines to levels where human judgment is marginalized, ushering in an era of warfare conducted at what has been described as “faster than the speed of thought.” This shortening interval raises fears that human experts may end up merely approving recommendations generated by algorithms. In an environment dictated by speed and automation, the space for hesitation, dissent, or moral restraint may be shrinking just as quickly. Interesting Engineering
The U.S. military’s posture has been notably sanguine about these concerns. Admiral Brad Cooper, head of U.S. Central Command, confirmed that AI is helping soldiers process troves of data, stressing that humans make final targeting decisions — but critics note the gap between that principle and verifiable practice remains wide. Al Jazeera
The Financial Architecture of AI Warfare
The economic dimensions of this transformation are substantial and largely unreported in their full complexity. Understanding them requires holding three separate financial narratives simultaneously.
The direct contract market is the most visible layer. Over the past year, the U.S. Department of Defense signed agreements worth up to $200 million each with several major AI companies, including Anthropic, OpenAI, and Google. CNBC These are not trivial sums in isolation, but they represent the seed capital of a much larger transformation. The military AI market is projected to reach $28.67 billion by 2030, as the speed of military decision-making begins to surpass human cognitive capacity. Emirates 24|7
The collateral economic disruption is less discussed but potentially far larger. On March 1, Iranian drone strikes took out three Amazon Web Services facilities in the Middle East — two in the UAE and one in Bahrain — in what appear to be the first publicly confirmed military attacks on a hyperscale cloud provider. The strikes devastated cloud availability across the region, affecting banks, online payment platforms, and ride-hailing services, with some effects felt by AWS users worldwide. The Motley Fool The IRGC cited the data centers’ support for U.S. military and intelligence networks as justification. This represents a strategic escalation that no risk-management framework in the technology sector adequately anticipated: cloud infrastructure as a legitimate military target.
The reputational and legal costs of AI’s battlefield role may ultimately dwarf both. Anthropic’s court filings stated that the Pentagon’s supply-chain designation could cut the company’s 2026 revenue by several billion dollars and harm its reputation with enterprise clients. A single partner with a multi-million-dollar contract has already switched from Claude to a competing system, eliminating a potential revenue pipeline worth more than $100 million. Negotiations with financial institutions worth approximately $180 million combined have also been disrupted. ITP
The Anthropic-Pentagon Fracture: A Defining Test
The dispute between Anthropic and the U.S. Department of Defense is not merely a contract negotiation gone wrong. It is the first high-profile case in which a frontier AI company drew a public ethical line — and then watched the government attempt to destroy it for doing so.
The sequence of events is now well-documented. The administration’s decisions capped an acrimonious dispute over whether Anthropic could prohibit its tools from being used in mass surveillance of American citizens or to power autonomous weapon systems, as part of a military contract worth up to $200 million. Anthropic said it had tried in good faith to reach an agreement, making clear it supported all lawful uses of AI for national security aside from two narrow exceptions. NPR
When Anthropic held its position, the response was unprecedented in the annals of U.S. technology policy. Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in a statement so broad that it can only be seen as a power play aimed at destroying the company. Shortly thereafter, OpenAI announced it had reached its own deal with the Pentagon, claiming it had secured all the safety terms that Anthropic sought, plus additional guardrails. Council on Foreign Relations
In an extraordinary move, the Pentagon designated Anthropic a supply chain risk — a label historically applied only to foreign adversaries. The designation would require defense vendors and contractors to certify that they don’t use the company’s models in their work with the Pentagon. CNBC That this was applied to a U.S.-headquartered company, founded by former employees of a U.S. nonprofit, and valued at $380 billion, represents a remarkable inversion of the logic the designation was designed to serve.
Meanwhile, Washington was attacking an American frontier AI leader while Chinese labs were on a tear. In the past month alone, five major Chinese models dropped: Alibaba’s Qwen 3.5, Zhipu AI’s GLM-5, MiniMax’s M2.5, ByteDance’s Doubao 2.0, and Moonshot’s Kimi K2.5. Council on Foreign Relations The geopolitical irony is not subtle: in punishing a safety-focused American AI company, the administration may have handed Beijing its most useful competitive gift of the year.
The Human Cost: Social Ramifications No Algorithm Can Compute
Against the financial ledger, the humanitarian accounting is staggering and still incomplete.
The Iranian Red Crescent Society reported that the U.S.-Israeli bombardment campaign damaged nearly 20,000 civilian buildings and 77 healthcare facilities. Strikes also hit oil depots, several street markets, sports venues, schools, and a water desalination plant, according to Iranian officials. Al Jazeera
The case that has attracted the most scrutiny is the bombing of the Shajareh Tayyebeh elementary school in Minab, southern Iran. A strike on the school in the early hours of February 28 killed more than 170 people, most of them children. More than 120 Democratic members of Congress wrote to Defense Secretary Hegseth demanding answers, citing preliminary findings that outdated intelligence may have been to blame for selecting the target. NBC News
The potential connection to AI decision-support systems is explored with forensic precision by experts at the Bulletin of the Atomic Scientists. One analysis notes that the mistargeting could have stemmed from an AI system with access to old intelligence — satellite data that predated the conversion of an IRGC compound into an active school — and that such temporal reasoning failures are a known weakness of large language models. Even with humans nominally “in the loop,” people frequently defer to algorithmic outputs without careful independent examination. Bulletin of the Atomic Scientists
The social fallout extends well beyond individual atrocities. Israel’s Lavender AI-powered database, used to analyze surveillance data and identify potential targets in Gaza, was wrong at least 10 percent of the time, resulting in thousands of civilian casualties. A recent study found that AI models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 percent of cases. Rest of World The simulation result does not predict real-world behavior, but it reveals how strategic reasoning models can default toward extreme outcomes under pressure — a finding that ought to unsettle anyone who imagines that algorithmic warfare is inherently more precise than the human kind.
The corrosion of accountability is perhaps the most insidious long-term social effect. “There is no evidence that AI lowers civilian deaths or wrongful targeting decisions — and it may be that the opposite is true,” says Craig Jones, a political geographer at Newcastle University who researches military targeting. Nature Yet the speed and opacity of AI-assisted operations make it exponentially harder to assign responsibility when things go wrong. Algorithms do not face courts-martial.
Governance: The International Gap
Rapid technological development is outpacing slow international discussions. Academics and legal experts meeting in Geneva in March 2026 to discuss lethal autonomous weapons systems found themselves studying a technology already being used at scale in active conflicts. Nature The gap between the pace of deployment and the pace of governance has never been wider.
The Middle East and North Africa are arguably the most conflict-ridden and militarized regions in the world, with four of the eleven “extreme conflicts” identified in 2024 by the Armed Conflict Location & Event Data project (ACLED) occurring there. The region has become a testing ground for AI warfare whose lessons — and whose errors — will shape every future conflict. War on the Rocks
The legal framework governing AI in warfare remains, generously described, aspirational. The U.S. military’s stated commitment to keeping “humans in the loop” is a principle that has no internationally binding enforcement mechanism, no agreed definition of what meaningful human control actually entails, and no independent auditing process. One expert observed that the biggest danger with AI is when humans treat it as an all-purpose solution rather than something that can speed up specific processes — and that this habit of over-reliance is particularly lethal in a military context. The National
AI as the New Dynamite: Nobel’s Unresolved Legacy
When Alfred Nobel invented dynamite in 1867, he believed — genuinely — that a weapon so devastatingly efficient would make war unthinkably costly and therefore rare. He was catastrophically wrong. The Franco-Prussian War, the First World War, and the industrial-era atrocities that followed proved that more powerful weapons do not deter wars; they escalate them, and they increase civilian mortality relative to combatant casualties.
The parallel to AI is not decorative. The argument for AI in warfare — that algorithmic precision reduces collateral damage, that faster targeting shortens conflicts, that autonomous systems absorb military risk that would otherwise fall on human soldiers — is structurally identical to Nobel’s argument for dynamite. It is the rationalization of a dual-use technology by those with an interest in its proliferation.
Drone technology in the Middle East has already shifted from manual control toward full autonomy, with “kamikaze” drones utilizing computer vision to strike targets independently if communications are severed. As AI becomes more integrated into militaries, the advancements will become even more pronounced with “unpredictable, risky, and lethal consequences,” according to Steve Feldstein, a senior fellow at the Carnegie Endowment for International Peace. Rest of World
The Anthropic dispute, whatever its ultimate legal resolution, has surfaced a question that Silicon Valley has been able to defer until now: can a technology company that builds frontier AI models — systems capable of synthesizing intelligence, generating targeting assessments, and running strategic simulations — genuinely control how those systems are used once deployed by a state? As OpenAI’s own FAQ acknowledged when asked what would happen if the government violated its contract terms: “As with any contract, we could terminate it.” The entire edifice of AI safety in warfare, for now, rests on the contractual leverage of companies that have already agreed to participate. Council on Foreign Relations
Nobel at least had the decency to endow prizes. The AI industry is still working out what it owes.
Policy Recommendations
A minimally adequate governance framework for AI in warfare would need to accomplish several things. Independent verification of “human in the loop” claims — not merely their assertion — is the essential starting point. Mandatory after-action reporting on AI involvement in any strike that results in civilian casualties would create accountability where none currently exists. International agreement on a baseline error-rate threshold — above which AI targeting systems may not be used without additional human review — would translate abstract humanitarian law into operational reality.
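To make that last mechanism concrete, here is a minimal sketch of such a gate. The 1% threshold and the data shapes are illustrative assumptions; nothing below comes from treaty text or any deployed system.

```typescript
// Sketch of the proposed error-rate gate: a system may be used on
// algorithmic review alone only if its independently measured error rate
// sits below an agreed threshold; otherwise human review is mandatory.
// The threshold and data shapes are illustrative, not from any treaty text.

interface TargetingSystem {
  name: string;
  measuredErrorRate: number; // from independent, audited after-action data
}

const ERROR_RATE_THRESHOLD = 0.01; // illustrative 1%; the number treaties would have to fix

type Disposition = "eligible-for-use" | "requires-additional-human-review";

function gate(system: TargetingSystem): Disposition {
  return system.measuredErrorRate <= ERROR_RATE_THRESHOLD
    ? "eligible-for-use"
    : "requires-additional-human-review";
}

// Lavender's reported error rate of at least 10% would fail any plausible threshold:
console.log(gate({ name: "example-system", measuredErrorRate: 0.1 }));
// -> "requires-additional-human-review"
```

The conditional is trivial; the institution-building is not. Everything difficult lives in producing a measured error rate that is independently audited and that all parties trust.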
The technology companies themselves bear responsibility that no contract clause can fully discharge. Researchers from OpenAI, Google DeepMind, and other labs submitted a court filing supporting Anthropic’s position, arguing that restrictions on domestic surveillance and autonomous weapons are reasonable until stronger legal safeguards are established. ColombiaOne That the most capable AI builders in the world believe their own technology is not yet reliable enough for autonomous lethal use is information that should be at the center of every policy debate — not buried in court filings.
Is Anthropic Protecting the Internet — or Its Own Empire?
Anthropic Mythos, the most powerful AI model any lab has ever disclosed, arrived this week draped in the language of altruism. Project Glasswing — the initiative through which a curated circle of Silicon Valley aristocrats gains exclusive access to Mythos — is pitched as an act of civilizational defense. The framing is elegant, the mission is genuinely urgent, and at least part of it is true. But behind the Mythos AI release lies a second story that Dario Amodei’s beautifully worded blog posts conspicuously omit: Mythos is enterprise-only not merely because Anthropic fears hackers, but because releasing it to the open internet would trigger the single greatest act of industrial-scale capability theft in the history of technology. The cybersecurity rationale is real. The economic motive is realer still. Understanding both is how you understand the AI industry in 2026.
What Anthropic Mythos Actually Does — and Why It Terrified Silicon Valley
To appreciate the gatekeeping, you must first reckon with the capability. Mythos is not an incremental model. It occupies an entirely new tier in Anthropic’s architecture — internally designated Copybara — sitting above the public Haiku, Sonnet, and Opus hierarchy that most developers work with. SecurityWeek’s detailed technical breakdown describes it as a step change so pronounced that calling it an “upgrade” is like calling the internet an “improvement” on the fax machine.
The numbers are staggering. Anthropic’s own Frontier Red Team blog reports that Mythos autonomously reproduced known vulnerabilities and generated working proof-of-concept exploits on its very first attempt in 83.1% of cases. Its predecessor, Opus 4.6, managed that feat almost never — near-0% success rates on autonomous exploit development. Engineers with zero formal security training describe waking up to complete, working exploits the model developed overnight, entirely without intervention. One test revealed a 27-year-old bug lurking inside OpenBSD — an operating system historically celebrated for its security — that would allow any attacker to remotely crash any machine running it. Axios reported that Mythos found bugs in every major operating system and every major web browser, and that its Linux kernel analysis produced a chain of vulnerabilities that, strung together autonomously, would hand an attacker complete root control of any Linux system.
Compare that to Opus 4.6, which found roughly 500 zero-days in open-source software — itself a remarkable achievement. Mythos found thousands in a matter of weeks. It then attempted to exploit Firefox’s JavaScript engine and succeeded 181 times, compared to twice for Opus 4.6.
This, importantly, is what the comparison between Mythos and open-source cybersecurity tooling looks like at full resolution: no freely available model comes remotely close, and Anthropic knows it. That gap is the entire product.
The Official Narrative: “We’re Protecting the Internet”
Anthropic’s enterprise-only decision is framed, through Project Glasswing, as a coordinated defensive effort — an attempt to patch the world’s most critical software before capability equivalents proliferate to hostile actors. Anthropic’s official Glasswing page commits $100 million in usage credits and $4 million in direct donations to open-source security organizations, with founding partners that read like a geopolitical alliance: Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, and Palo Alto Networks. Roughly 40 additional organizations maintaining critical software infrastructure also gain access. The initiative’s name — Glasswing, after a butterfly whose transparency makes it nearly invisible — is a metaphor for software vulnerabilities that hide in plain sight.
The security rationale for limiting Mythos is not confected. In September 2025, a Chinese state-sponsored threat actor used earlier Claude models in what SecurityWeek documented as the first confirmed AI-orchestrated cyber espionage campaign — not merely using AI as an advisor but deploying it agentically to execute attacks against roughly 30 organizations. If that was possible with Claude’s then-current models, what becomes possible with a model that autonomously chains Linux kernel exploits at a near-perfect success rate?
Anthropic’s Logan Graham, head of the Frontier Red Team, captured the threat succinctly: imagine this level of capability in the hands of Iran in a hot war, or Russia as it attempts to degrade Ukrainian infrastructure. That is not science fiction. It is the calculus driving the controlled release. Briefings to CISA, the Commerce Department, and the Center for AI Standards and Innovation are real, however conspicuously absent the Pentagon remains from those conversations — a pointed omission given Anthropic’s ongoing legal war with the Defense Department over its blacklisting.
So yes: the security case is genuine. But it is, at most, half the story.
The Distillation Flywheel: Why Frontier Labs Are Really Gating Their Best Models
Here is the economic argument that no TechCrunch brief or Bloomberg data point has assembled cleanly: model distillation is an existential threat to the frontier-lab business model, and Mythos is as much a response to that threat as it is a cybersecurity initiative.
The mathematics of adversarial distillation are brutally asymmetric. Training a frontier model costs approximately $1 billion in compute. Successfully distilling it into a competitive student model costs an adversary somewhere between $100,000 and $200,000 — at least a 5,000-to-one cost advantage in favor of the copier. No rate-limiting policy, no terms-of-service clause, and no click-through agreement closes that gap. The only defense is controlling access to the teacher in the first place.
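The mechanics behind that asymmetry are mundane. A distillation pipeline is, at its core, a harvesting loop: query the teacher, record the pair, fine-tune a student on the result. The sketch below is illustrative; the endpoint, the client, and the per-exchange price are assumptions, not a reconstruction of any lab’s actual tooling.

```typescript
// Illustrative sketch of an adversarial distillation pipeline: the copier
// pays only for API exchanges, then uses them as supervised fine-tuning
// data for a student model. Endpoint and names are placeholder assumptions.

import { writeFileSync } from "node:fs";

interface Exchange {
  prompt: string;
  response: string;
}

// Placeholder teacher call; stands in for any frontier model's API.
async function queryTeacher(prompt: string): Promise<string> {
  const res = await fetch("https://api.example-frontier-lab.com/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: "Bearer <key>" },
    body: JSON.stringify({ model: "teacher-frontier", prompt }),
  });
  return ((await res.json()) as { text: string }).text;
}

async function harvest(prompts: string[]): Promise<Exchange[]> {
  const dataset: Exchange[] = [];
  for (const prompt of prompts) {
    dataset.push({ prompt, response: await queryTeacher(prompt) });
  }
  return dataset;
}

// At an assumed ~$0.01 per exchange, tens of millions of exchanges fit in a
// low six-figure budget, versus roughly $1B for the original training run.
harvest(["Explain TLS certificate pinning in two paragraphs." /* , ... */]).then((dataset) => {
  writeFileSync("student_sft.jsonl", dataset.map((d) => JSON.stringify(d)).join("\n"));
});
```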
Frontier lab distillation blocking is not a new concern, but 2026 has given it terrifying specificity. Anthropic publicly disclosed in February that three Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts. MiniMax alone accounted for 13 million of those exchanges; Moonshot AI added 3.4 million; DeepSeek, notably, needed only 150,000 because it was targeting something far more specific: how Claude refuses things — alignment behavior, policy-sensitive responses, the invisible architecture of safety. A stripped copy of a frontier model without its alignment training, deployed at nation-state scale for disinformation or surveillance, is the nightmare scenario that animated Anthropic’s founding. It may now be unfolding in real time.
What does this have to do with Mythos being enterprise-only? Everything. A model that autonomously writes working exploits for every major OS would, if released via standard API access, provide Chinese distillation campaigns with not just conversational capability but offensive cyber capability — the very thing that makes Mythos commercially unique. Releasing Mythos at scale would be, simultaneously, the greatest act of market self-destruction and the greatest gift to adversarial state actors in the history of enterprise software. Enterprise-only access eliminates both risks at once: it monetizes the capability at maximum margin while denying it to the distillation ecosystem.
This is the distillation flywheel in action. Frontier labs gate the highest-capability models behind enterprise contracts; enterprises pay premium rates for exclusive capability access; the revenue funds the next generation of training runs; the new model is again too powerful to release openly. Each rotation of the wheel deepens the competitive moat, raises the enterprise price floor, and tightens the grip of the three dominant labs over the global AI stack.
Geopolitics at the Model Layer: The Three-Lab Alliance and the New AI Cold War
The Mythos security exploits announcement arrived within 24 hours of a Bloomberg-reported development that is arguably more consequential for the global technology order: OpenAI, Anthropic, and Google — three companies that have spent the better part of three years competing to annihilate each other — began sharing adversarial distillation intelligence through the Frontier Model Forum. The cooperation, modeled on how cybersecurity firms exchange threat data, represents the first substantive operational use of the Forum since its 2023 founding.
The breakdown of what each Chinese lab extracted from Claude reveals something remarkable: three entirely different product strategies, fingerprinted through their query patterns. MiniMax vacuumed broadly — generalist capability extraction at scale. Moonshot AI targeted the exact agentic reasoning and computer-use stack that its Kimi product has been marketing since late 2025. DeepSeek, with a comparatively tiny 150,000-exchange footprint, was almost exclusively interested in Claude’s alignment layer — how it handles policy-sensitive queries, how it refuses, how it behaves at the edges. Each lab was essentially reverse-engineering not just a model but a business plan.
The MIT research documented in December 2025 found that GLM-series models identify themselves as Claude approximately half the time when queried through certain paths — behavioral residue of distillation that no fine-tuning has fully scrubbed. US officials estimate the financial toll of this campaign in the billions annually. The Trump administration’s AI Action Plan has already called for a formal inter-industry sharing center, essentially institutionalizing what the labs are now doing informally.
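The self-identification residue the MIT researchers describe can be probed with nothing more elaborate than repetition. Here is a minimal sketch, assuming a generic OpenAI-style chat endpoint for the model under test; the endpoint, model name, and keyword check are placeholders, far cruder than what a real study would require.

```typescript
// Sketch of a self-identification probe: ask a model who it is N times and
// tally the answers. Models trained heavily on another model's outputs can
// inherit its self-descriptions. Endpoint and model name are placeholders.

const ENDPOINT = "https://api.example-lab.com/v1/chat/completions";

async function ask(prompt: string): Promise<string> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: "Bearer <key>" },
    body: JSON.stringify({
      model: "model-under-test",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content;
}

async function probe(n: number): Promise<void> {
  const tally = new Map<string, number>();
  for (let i = 0; i < n; i++) {
    const answer = await ask("Which AI model are you, and who built you? One sentence.");
    // Crude keyword match; a real study would need far more careful classification.
    const label = /claude|anthropic/i.test(answer) ? "claims-Claude" : "other";
    tally.set(label, (tally.get(label) ?? 0) + 1);
  }
  console.log(Object.fromEntries(tally)); // e.g. { "claims-Claude": 52, other: 48 }
}

probe(100);
```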
The geopolitical stakes here extend far beyond corporate IP. When DeepSeek released its R1 model in January 2025 — a model widely believed to incorporate distilled knowledge from OpenAI’s infrastructure — it erased nearly $1 trillion from US and European tech stocks in a single trading session. Markets now understand something that policymakers are only beginning to grasp: control over frontier AI model capabilities is a form of strategic leverage, and distillation is a vector for transferring that leverage without a single line of export-controlled chip silicon crossing a border.
Enterprise Contracts and the New AI Treadmill
The economics of Anthropic’s enterprise-only strategy are becoming increasingly clear as 2026 revenue data enters the public domain.
| Metric | February 2026 | April 2026 |
|---|---|---|
| Anthropic Run-Rate Revenue | $14B | $30B+ |
| Enterprise Share of Revenue | ~80% | ~80% |
| Customers Spending $1M+ Annually | 500 | 1,000+ |
| Claude Code Run-Rate Revenue | $2.5B | Growing rapidly |
| Anthropic Valuation | $380B | ~$500B+ (IPO target) |
| OpenAI Run-Rate Revenue | ~$20B | ~$24-25B |
Sources: CNBC, Anthropic Series G announcement, Sacra
Anthropic’s annualized revenue has now surpassed $30 billion — having started 2025 at roughly $1 billion — representing one of the most dramatic B2B revenue trajectories in the history of enterprise software. Sacra estimates that 80% of that revenue flows from business clients, with enterprise API consumption and reserved-capacity contracts forming the structural backbone. Eight of the Fortune 10 are now Claude customers. Four percent of all public GitHub commits are now authored by Claude Code.
What Project Glasswing does, in this context, is elegant: it creates a new category of enterprise relationship — not API access, not subscription, but strategic partnership with a frontier safety lab deploying the world’s most capable unrestricted model. The 40 organizations in the Glasswing program are not merely beta testers. They are, from a revenue architecture standpoint, being trained — habituated to Mythos-class capability before it becomes generally available, embedded in their security workflows, their CI/CD pipelines, their vulnerability management systems. By the time Mythos-class models are released at scale with appropriate safeguards, the switching cost will be prohibitive.
This is the AI treadmill: each generation of frontier capability, released exclusively to enterprise partners first, creates a loyalty layer that commoditized open-source alternatives cannot easily displace. The $100 million in Glasswing credits is not charity. It is customer acquisition at an unprecedented model tier.
The Counter-View: Responsible Deployment Has a Principled Case
It would be intellectually dishonest to leave the distillation-flywheel critique standing without challenge. The counter-argument is real, and it deserves full articulation.
Platformer’s analysis makes the most compelling version of the responsible-rollout defense: Anthropic’s founding premise was that a safety-focused lab should be the first to encounter the most dangerous capabilities, so it could lead mitigation rather than react to catastrophe. With Mythos, that appears to be exactly what is happening. The company did not race to monetize these cybersecurity capabilities. It briefed government agencies, convened a defensive consortium, committed $4 million to open-source security projects, and staged rollout behind a coordinated patching effort. The vulnerabilities Mythos found in Firefox, Linux, and OpenBSD are being disclosed and patched before the paper trail of their discovery becomes public — precisely the protocol that responsible security research demands.
Alex Stamos, whose expertise in adversarial security spans decades, offered the optimistic framing: if Mythos represents being “one step past human capabilities,” there is a finite pool of ancient flaws that can now be systematically found and fixed, potentially producing software infrastructure more fundamentally secure than anything achievable through traditional auditing. That is not corporate spin. It is a coherent theory of defensive AI benefit.
The Mythos release strategy also responds to a genuinely novel regulatory pressure: the EU AI Act’s next enforcement phase takes effect August 2, 2026, introducing incident-reporting obligations and penalties of up to 3% of global revenue for high-risk AI systems. A general release of Mythos into that environment — without governance infrastructure in place — would be commercially catastrophic as well as potentially harmful. Enterprise-gated release buys time for both the regulatory and technical scaffolding to mature.
What Regulators and Open-Source Advocates Must Do Next
The policy implications of Anthropic Mythos extend far beyond one company’s release strategy. They illuminate a structural shift in how frontier AI capability is being distributed — and by whom, and to whom.
For regulators, the Glasswing model raises questions that existing frameworks cannot answer. If a private company now possesses working zero-day exploits for virtually every major software system on earth — as Kelsey Piper pointedly observed — what obligations of disclosure and oversight apply? The fact that Anthropic is briefing CISA and the Center for AI Standards and Innovation is encouraging, but voluntary briefings are not governance. The EU’s AI Act and the US AI Action Plan both need explicit provisions covering what happens when a commercially controlled lab becomes the de facto custodian of the world’s most significant vulnerability database.
For open-source advocates, the distillation dynamic poses an existential dilemma. The same economic logic that drives labs to gate Mythos also drives them to resist open-weights releases of any model that approaches frontier capability. The three-lab alliance against Chinese distillation is, viewed from a certain angle, also an alliance against open-source proliferation of frontier capability — regardless of the nationality of the developer doing the distilling. Open-source foundations, university research labs, and sovereign AI initiatives in Europe, the Middle East, and South Asia should be pressing hard for access frameworks that allow defensive cybersecurity use of frontier capability without being filtered through the commercial relationships of Silicon Valley.
For enterprise decision-makers, the message is unambiguous: the organizations that embed Mythos-class capability into their vulnerability management workflows now will hold a structural security advantage — measured in patch latency and zero-day coverage — over those that wait for open-source equivalents. But that advantage comes with dependency on a single private entity whose political entanglements, from Pentagon disputes to Chinese state-actor confrontations, introduce supply-chain risks that no CISO should ignore.
Anthropic may well be protecting the internet. It is certainly protecting its empire. In 2026, those two imperatives have become so entangled that distinguishing them may be the most important work left for anyone who cares about who controls the infrastructure of the digital world.
Anthropic Rolls Out Its Most Powerful Cyber AI Model — Days After Leaking Its Own Source Code
The launch of Claude Mythos Preview and Project Glasswing, mere days after Anthropic accidentally exposed 512,000 lines of its core product’s source code to the world, is either the most audacious act of strategic redirection in Silicon Valley history — or the most revealing window yet into the contradictions at the heart of frontier AI development.
There is a particular species of Silicon Valley irony that only manifests at the very frontier of technological ambition. On March 31st, 2026, an Anthropic employee made a mistake so elementary it would embarrass a first-year computer science undergraduate: a debug source map file was accidentally bundled into a public software release, pointing to a cloud-hosted archive of the company’s most commercially prized product — the source code of Claude Code, its flagship agentic coding assistant. Within hours, 512,000 lines of proprietary TypeScript code, across 1,906 files, were mirrored, forked, and torrent-distributed across the internet, never to be recalled. The repository on GitHub was forked more than 41,500 times before Anthropic could blink. Then, seven days later, Anthropic announced the most capable AI model it has ever built — a cybersecurity behemoth called Claude Mythos Preview — and launched Project Glasswing, a sweeping initiative to secure the world’s critical digital infrastructure. The company publicly described it as a watershed for global security. A watching world could be forgiven for raising an eyebrow.
History rarely serves up irony quite this rich. The firm that accidentally handed a blueprint of its proprietary agent harness to thousands of developers, threat actors, and competitors — the firm that inadvertently revealed the internal codename of its most powerful unreleased model buried in that same code — emerged days later as the standard-bearer for a new era of AI-powered cyber defence. It is, depending on your interpretation, either a masterclass in narrative control or a deeply unsettling indicator of the structural tensions now embedded in the development of frontier AI.
I. A Double Embarrassment: The Anatomy of the Leak
The facts of the Anthropic source code leak are simultaneously mundane and extraordinary. On the morning of March 31st, 2026, Anthropic pushed version 2.1.88 of its @anthropic-ai/claude-code package to the npm public registry. Buried inside was a 59.8-megabyte JavaScript source map file — a developer debugging tool that, when followed to its reference URL on Anthropic’s own Cloudflare R2 storage bucket, yielded a downloadable zip archive of the complete, unobfuscated TypeScript source for Claude Code.
Security researcher Chaofan Shou, an intern at Solayer Labs, spotted the exposure at 4:23 AM Eastern and posted a direct download link on X. It was, as The Register reported, “a mistake as bad as leaving a map file in a publish configuration” — a single misconfigured .npmignore entry. A known bug in Bun, the JavaScript runtime Anthropic had acquired in late 2025, had been causing source maps to ship in production builds for twenty days before the incident. Nobody caught it.
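The mechanics are worth pausing on, because they explain why one stray file is equivalent to publishing the source itself. A production bundle typically ends with a sourceMappingURL comment, and the map it points to can embed every original file verbatim in its sourcesContent field. Here is a minimal sketch of following that trail; the bundle path and URL are placeholders, not Anthropic’s actual artifacts.

```typescript
// Sketch: follow a bundle's sourceMappingURL and dump the original sources.
// Source maps carry a `sourcesContent` array that can embed every original
// file verbatim, which is why shipping one is shipping your source code.
// The bundle path below is a placeholder, not Anthropic's actual artifact.

import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname, join } from "node:path";

interface SourceMap {
  sources: string[];
  sourcesContent?: (string | null)[];
}

async function dumpSources(bundlePath: string, outDir: string): Promise<void> {
  const bundle = readFileSync(bundlePath, "utf8");
  // The last line of a built bundle typically looks like:
  //   //# sourceMappingURL=https://example-bucket.r2.dev/cli.js.map
  const match = bundle.match(/\/\/# sourceMappingURL=(\S+)\s*$/);
  if (!match) throw new Error("no source map reference found");

  const map = (await (await fetch(match[1])).json()) as SourceMap;
  map.sources.forEach((name, i) => {
    const content = map.sourcesContent?.[i];
    if (content == null) return; // a map may reference sources without embedding them
    const outPath = join(outDir, name.replace(/^(\.\.\/)+/, ""));
    mkdirSync(dirname(outPath), { recursive: true });
    writeFileSync(outPath, content);
  });
}

dumpSources("node_modules/@anthropic-ai/claude-code/cli.js", "recovered-src");
```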
This was, in fact, the second major accidental disclosure of the month. Days earlier, Fortune had reported on a separate leak of nearly 3,000 files from a misconfigured content management system — including a draft blog post describing a forthcoming model described internally as “by far the most powerful AI model” Anthropic had ever developed. That model’s codename: Mythos. Also, apparently: Copybara.
The March–April 2026 Anthropic Disclosure Timeline
| Date | Event |
|---|---|
| ~Late March 2026 | Fortune reports on ~3,000 leaked CMS files; first public confirmation of the Mythos model’s existence and capabilities. |
| March 31, 2026 | Claude Code v2.1.88 ships to npm with embedded source map; 512,000 lines of TypeScript exposed within hours. GitHub repository forked 41,500+ times. |
| March 31 – April 6 | Anthropic issues DMCA takedowns; threat actors seed trojanized forks with backdoors and cryptominers. Axios supply-chain attack occurs simultaneously. |
| April 7, 2026 | Anthropic officially announces Claude Mythos Preview and Project Glasswing. Partners include Apple, Microsoft, Google, Amazon, JPMorgan Chase, and others. |
What the leaked source revealed was considerable: 44 hidden feature flags for unshipped capabilities, a sophisticated three-layer memory architecture, the internal orchestration logic for autonomous “daemon mode” background agents, and — critically — confirmation that a model tier called Copybara was actively being readied for launch. The VentureBeat analysis noted that Claude Code had achieved an annualised recurring revenue run rate of $2.5 billion by March 2026, making the intellectual property exposure a genuinely material event for a company preparing to go public.
II. Claude Mythos Preview and Project Glasswing: A Technical Step-Change
To understand why the timing of the Mythos announcement matters, one must first grasp the scale of what Anthropic is claiming. Claude Mythos Preview is not a marginal improvement on its predecessors. It occupies, in Anthropic’s internal taxonomy, a fourth tier entirely above the existing Haiku–Sonnet–Opus range — a tier the company internally designates “Copybara.” According to SecurityWeek, it represents “not an incremental improvement but a step change in performance.”
The headline claim is breathtaking in its scope. In the weeks prior to the public announcement, Anthropic ran Mythos against real open-source codebases and, according to its own Project Glasswing announcement, the model identified thousands of zero-day vulnerabilities — flaws previously unknown to software maintainers — across every major operating system and every major web browser. The oldest vulnerability it uncovered was a 27-year-old bug in OpenBSD, a system famous for its security record. A 16-year-old flaw in video processing software survived five million automated test attempts before Mythos found it in a matter of hours. The model autonomously chained together a series of Linux kernel vulnerabilities into a privilege escalation exploit — the kind of attack chain that would previously have required a sophisticated, nation-state-grade human research team.
> A single AI agent could scan for vulnerabilities and potentially take advantage of them faster and more persistently than hundreds of human hackers — and similar capabilities will be available across the industry in as little as six months.
The Axios reporting on the rollout puts the dual-use risk with uncomfortable clarity: Mythos is “extremely autonomous” and possesses the reasoning capabilities of an advanced security researcher, capable of finding “tens of thousands of vulnerabilities” that even elite human bug hunters would miss. This is precisely why Anthropic chose not to release it publicly. Instead, Project Glasswing gives curated preview access to 40-plus organisations responsible for critical software infrastructure — including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks — backed by up to $100 million in usage credits and $4 million in direct donations to open-source security organisations including the Apache Software Foundation and OpenSSF.
The model is not cybersecurity-specific. CNBC noted that Mythos’s cyber prowess is a downstream consequence of its exceptional general-purpose coding and reasoning capabilities — a distinction with profound regulatory implications. You cannot restrict a model trained to think brilliantly about code from thinking brilliantly about vulnerabilities in that code.
III. The Deeper Meaning: Irony, Competence, and the New Security Paradigm
The central paradox demands direct engagement: Anthropic, a company whose founding proposition is responsible AI development, leaked its own product’s source code through a packaging error so elementary it required no sophistication to exploit. It then, within the same news cycle, announced an AI model so powerful its own CEO fears its public release — and positioned itself as the primary steward of global cyber defence. One is entitled to hold both thoughts simultaneously.
And yet the strategic coherence of the Mythos launch, viewed against the backdrop of the leak, is hard to dismiss entirely. Anthropic did not choose the timing. The Mythos project had been in development and partner testing for weeks before the Claude Code source code escaped its containment. But the company, having already suffered the reputational bruise of one accidental exposure too many, had an imperative to seize the narrative — to move from embarrassed leaker to principled guardian, rapidly. The result is a masterclass in what crisis communications professionals call “agenda replacement.”
The deeper issue, however, is structural and it transcends any single company. The Axios assessment is stark: Mythos is “the first AI model that officials believe is capable of bringing down a Fortune 100 company, crippling swaths of the internet or penetrating vital national defense systems.” Meanwhile, the head of Anthropic’s frontier red team, Logan Graham, told multiple outlets that comparable capabilities will be in the hands of the broader AI industry within six to eighteen months — from every nation with frontier ambitions, not just the United States. The window for getting ahead of this threat is not a decade. It is, at most, a year.
What the Mythos launch crystallises is a principle that the cybersecurity community has long understood but that corporate AI leaders and policymakers have been reluctant to internalise: the same model property that makes an AI system valuable for defence makes it catastrophically useful for offence. The technical writeup on Anthropic’s red team blog makes this explicit. Mythos can “reverse-engineer exploits on closed-source software” and turn known-but-unpatched vulnerabilities into working exploits. Gadi Evron, founder of AI security firm Knostic, told CNN that “attack capabilities are available to attackers and defenders both, and defenders must use them if they’re to keep up.” There is no asymmetry available — only the question of who moves first.
IV. The Geopolitical and Regulatory Reckoning
The implications of Anthropic Mythos extend well beyond corporate strategy. The U.S.-China AI competition has already entered the domain of active cyber operations. A Chinese state-sponsored group, as Fortune reported, used an earlier Claude model to target approximately 30 organisations in a coordinated espionage campaign before Anthropic detected and curtailed the activity. If a Claude model that predates Mythos by several capability generations was sufficient to mount a significant intelligence operation, the implications of Mythos-class capability in hostile hands are genuinely alarming.
A source briefed on Mythos told Axios: “An enemy could reach out and touch us in a way they can’t or won’t with kinetic operations. For most Americans, a conventional conflict is ‘over there.’ With a cyberattack, it’s right here.” This framing matters. The doctrine of nuclear deterrence rested partly on the difficulty of acquisition. The doctrine of cyber deterrence in the Mythos era rests on nothing — the marginal cost of deploying AI-accelerated attack capability approaches zero for any state or non-state actor with API access to a comparable model.
Anthropic’s relationship with Washington is, to put it diplomatically, complicated. The company is simultaneously briefing the Cybersecurity and Infrastructure Security Agency, the Commerce Department, and senior officials across the federal government on Mythos’s capabilities — while locked in active litigation with the Pentagon, which has labelled Anthropic a supply-chain risk following the company’s refusal to permit autonomous targeting or battlefield surveillance applications. The AI safety firm that declined to arm American drones is now, in the same breath, offering American critical infrastructure a first-mover advantage against AI-powered adversaries. The philosophical coherence of this position is defensible; its political navigation will be considerably harder.
For regulators, the Mythos announcement poses a question for which existing frameworks have no satisfying answer. The EU AI Act’s tiered risk classifications were not designed for a model that is simultaneously a breakthrough productivity tool, a national security asset, and a potential weapon of mass cyber-disruption. The Project Glasswing model — voluntary, industry-led, access-gated — is a plausible short-term mechanism. It is not a durable regulatory framework. And as Logan Graham made clear, the window before other frontier labs — and the Chinese state — reach comparable capability is measured in months, not years.
V. Verdict: A Reckoning Dressed as a Launch
Editorial Assessment
The Mythos announcement is not primarily a product launch. It is a reckoning — one that Anthropic has had the narrative dexterity to package as a strategic initiative rather than a confession. The source code leak was, at the level of operational security, an embarrassment of the first order. But it was also, unintentionally, a proof of concept for the vulnerability landscape that Mythos was built to address. Anthropic’s own systems failed a test far simpler than any that Mythos could conceivably pose to a determined adversary.
That irony is not merely cosmetic. It is instructive. No organisation — not even a frontier AI lab whose entire value proposition rests on the responsible management of powerful systems — is immune to the mundane failure modes of human error, toolchain misconfiguration, and the accumulated technical debt of moving too fast. The question is not whether Anthropic can be trusted with Mythos. The question is whether any institution, in any country, is structurally capable of managing the governance of AI capabilities that are advancing faster than the legal and regulatory architectures designed to contain them.
Dario Amodei framed the Project Glasswing rollout as an opportunity to “create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities.” This is not rhetorical excess. It is, technically, accurate: the same capability that can chain together a 27-year-old kernel vulnerability into a privilege escalation exploit can, in the hands of defenders, systematically eliminate such vulnerabilities from the world’s most important software. The question is not whether this technology is transformative. It is whether the institutional infrastructure required to ensure that transformation benefits defenders more than attackers can be assembled in the time available.
Six months. Eighteen at the outside. That is the horizon Logan Graham has placed on the proliferation of Mythos-class capabilities across the industry. The global financial cost of cybercrime already runs to an estimated $500 billion annually, a figure that was compiled before any model approached Mythos’s level of autonomous vulnerability discovery. Policymakers in Washington, Brussels, and Beijing who are not currently treating this as an emergency are, as one source briefed on Mythos told Axios with commendable directness, “not remotely ready.”
Anthropic rolled out its most powerful cyber AI model days after leaking its own source code. The irony is real. So is the threat. And so, potentially, is the opportunity — if the institutions responsible for governing it can move at the speed the technology demands, rather than the speed at which governments customarily prefer to operate. History suggests that gap will be considerable. The Mythos timeline suggests that gap may, for once, be decisive.
Perplexity’s $450M Pivot Changes Everything
Perplexity’s ARR surged past $450M in March 2026 after a 50% monthly jump, driven by its AI agent “Computer.” Here’s what this pivot means for Google, OpenAI, and the future of the internet.
How a search upstart quietly rewired the economics of AI — and why the rest of Silicon Valley should be paying very close attention
There is a phrase that haunts every incumbent technology company: silent pivot. Not the public declaration of reinvention, draped in keynote slides and press releases, but the quiet moment when a company stops doing the thing you thought it did — and starts doing the thing that will eventually eat you alive.
Perplexity AI has just executed one of those pivots. And the numbers suggest it is working with a speed that should alarm everyone from Mountain View to Redmond.
Perplexity’s estimated annual recurring revenue rose to more than $450 million in March, after the launch of a new agent tool and a shift to usage-based pricing. Investing.com That figure represents a 50% jump in a single month — a rate of acceleration that, even in an industry accustomed to hyperbolic growth curves, demands serious analytical attention. This is not a company finding its feet in a niche. This is a company stepping onto a stage it intends to own.
From Answers to Actions: What “Computer” Actually Changes
To understand why this revenue surge matters, you need to understand what Perplexity has actually built — and why it is architecturally different from everything that came before it.
On February 25, 2026, Perplexity launched “Computer,” a multi-model AI agent that coordinates 19 different AI models to complete complex, multi-step workflows entirely in the background. This is not another chat tool that produces quick answers — it is a full-blown agentic AI system, a digital worker that takes a user’s goal, breaks it into steps, spins up specialized sub-agents, and keeps running until the job is done. Build Fast with AI, Medium
The strategic architecture here is genuinely novel. Computer functions as what Perplexity describes as “a general-purpose digital worker” — a system that accepts a high-level objective, decomposes it into subtasks, and delegates those subtasks to whichever AI model is best suited for each one. VentureBeat Anthropic’s Claude Opus 4.6 serves as the core reasoning engine. Google’s Gemini handles deep research. OpenAI’s GPT-5.2 manages long-context recall. Each sub-task routes to the best available model, automatically.
This is not a feature. It is a philosophy — and the philosophy has a name: model-agnostic orchestration. Perplexity is betting that no single AI provider will dominate every cognitive capability, and that the company best positioned to win the next decade is the one that can route across all of them intelligently.
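What model-agnostic orchestration looks like in practice can be sketched in a few dozen lines. The task taxonomy, the routing table, and the provider stubs below are illustrative assumptions, not Perplexity’s implementation.

```typescript
// Sketch of model-agnostic orchestration: decompose a goal into typed
// subtasks, then route each one to whichever provider is best suited to
// that task type. Routing table and clients are illustrative placeholders.

type TaskKind = "reasoning" | "research" | "long-context" | "codegen";

interface Subtask {
  kind: TaskKind;
  instruction: string;
}

interface ModelClient {
  name: string;
  run(instruction: string): Promise<string>;
}

// One client per provider; `run` would wrap each vendor's real API.
const registry: Record<TaskKind, ModelClient> = {
  reasoning:      { name: "opus-class-model",   run: async (i) => `/* reasoning call */ ${i}` },
  research:       { name: "research-model",     run: async (i) => `/* research call */ ${i}` },
  "long-context": { name: "long-context-model", run: async (i) => `/* recall call */ ${i}` },
  codegen:        { name: "best-of-pool",       run: async (i) => `/* benchmark-routed */ ${i}` },
};

// A planner model would produce this decomposition; it is hard-coded here.
function decompose(goal: string): Subtask[] {
  return [
    { kind: "research", instruction: `Collect sources for: ${goal}` },
    { kind: "long-context", instruction: "Summarize all collected material" },
    { kind: "reasoning", instruction: "Draft the final deliverable" },
  ];
}

async function runGoal(goal: string): Promise<string[]> {
  const results: string[] = [];
  for (const task of decompose(goal)) {
    const model = registry[task.kind]; // route to the best-suited provider
    results.push(await model.run(task.instruction));
  }
  return results; // in a real agent, later steps consume earlier outputs
}

runGoal("Build a competitor pricing dashboard").then(console.log);
```

The design choice that matters is the routing table: the orchestrator owns the task taxonomy, and every model, including a competitor’s, is just an interchangeable worker behind an interface.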
The bet appears to be paying off. Perplexity’s own internal data supports this thesis: the company’s enterprise usage shifted dramatically over the past year, from 90% of queries routing to just two models in January 2025, to no single model commanding more than 25% of usage by December 2025. VentureBeat
The Pricing Revolution Hidden Inside the Revenue Story
It would be tempting to read the $450 million ARR headline as a simple user-growth story. It is not. The more consequential development is what Perplexity has done to its pricing architecture — and the implications that has for the entire AI industry’s business model.
The $200 monthly Max tier includes the Computer agent itself, 10,000 monthly credits, unlimited Pro searches, access to advanced models including GPT-5.2 and Claude Opus 4.6, Sora 2 Pro video generation, the Comet AI browser, and unlimited Labs usage. SentiSight.ai At the enterprise tier, the price rises to $325 per seat per month.
This is usage-based pricing in its most sophisticated form — not a flat subscription for access, but a credit system that scales revenue with the actual work performed. The economic logic is powerful: the more value an agent delivers, the more credits it consumes, and the more the customer pays. Revenue becomes proportional to outcomes, not to logins.
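The metering logic behind such a system is simple to sketch. Every rate below is invented for illustration, since Perplexity has not published a per-task conversion table (a gap this piece returns to below).

```typescript
// Sketch of usage-based credit metering: each agent step debits credits
// proportional to the work done, so revenue scales with tasks completed,
// not with logins. All rates here are invented for illustration.

interface Meter {
  balance: number; // e.g. 10,000 credits/month on a Max-style tier
}

// Hypothetical rate card: heavier model calls cost more credits.
const RATES = { lightQuery: 1, deepResearchStep: 25, videoGeneration: 200 } as const;

function charge(meter: Meter, step: keyof typeof RATES, units = 1): void {
  const cost = RATES[step] * units;
  if (meter.balance < cost) throw new Error("out of credits: upgrade or top up");
  meter.balance -= cost;
}

const meter: Meter = { balance: 10_000 };
charge(meter, "deepResearchStep", 12); // a multi-step research task
charge(meter, "lightQuery", 40);
console.log(meter.balance); // 9,660: consumption tracks work performed
```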
This represents a fundamental rupture with the advertising model that has funded the internet for three decades. Google monetizes attention. Perplexity is building a business that monetizes completion — the successful execution of a task. These are not subtle variants of the same model. They are philosophically opposed.
Perplexity has significantly expanded its pricing structure in 2026, with the platform now spanning five subscription tiers — Free, Pro, Max, Enterprise Pro, and Enterprise Max — alongside a developer API ecosystem that includes the Sonar API, Search API, and the newer Agentic Research API. Finout The Agentic Research API, in particular, positions Perplexity not just as a consumer product but as foundational AI infrastructure for any developer who wants to build on top of agent-grade search.
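For developers, the most established entry point is the Sonar API, which follows an OpenAI-style chat-completions shape in Perplexity’s public documentation. A minimal call is sketched below; treat the field names as assumptions if the API has since changed, and note that the newer Agentic Research API is not shown.

```typescript
// Minimal call to Perplexity's Sonar API, which is documented as an
// OpenAI-compatible chat-completions interface. Field names reflect the
// public docs at the time of writing; treat details as assumptions.

async function sonarSearch(question: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
    },
    body: JSON.stringify({
      model: "sonar",
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = (await res.json()) as {
    choices: { message: { content: string } }[];
  };
  return data.choices[0].message.content;
}

sonarSearch("What changed in Perplexity's pricing tiers this year?").then(console.log);
```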
The Google Problem, Sharpened
Search incumbency has always been more durable than technologists predicted, for a simple reason: the switching cost for a behavior performed forty times a day is enormous. Perplexity, in its original form as an “answer engine,” was trying to change a habit. Now it is trying to eliminate a category.
When a Perplexity agent builds you a Bloomberg Terminal-style financial dashboard from scratch, or automates a full content production workflow over three days without requiring a single manual search query, the question of whether it is “better than Google” becomes irrelevant. The agent is doing something Google was never designed to do. It is not competing for your search box. It is competing for your workday.
Perplexity now has more than 100 million monthly active users from its search and agent tools, including tens of thousands of enterprise clients. Investing.com That enterprise penetration is the telling number. Consumer search habits die slowly; enterprise procurement cycles move when ROI is demonstrable. The fact that enterprise customers are already embedding Perplexity’s agents into production workflows suggests the value proposition has moved well beyond novelty.
More than 100 enterprise customers contacted Perplexity over a single weekend demanding access after early demonstrations circulated on social media: users showed the agent building Bloomberg Terminal-style financial dashboards, replacing six-figure marketing tool stacks in a weekend, and automating workflows that previously required dedicated teams. VentureBeat
That is not a product demo going viral. That is product-market fit, documented in real time.
Competitive Positioning: Where Perplexity Sits in the New AI Stack
The $450 million ARR figure needs to be read against the broader competitive landscape — and here, the picture becomes more interesting, and more dangerous for Perplexity’s rivals.
OpenAI’s Operator and Anthropic’s Claude Cowork both represent agent-layer ambitions from the model providers themselves. Microsoft Copilot brings enterprise distribution at a scale Perplexity cannot match organically. Google’s own agentic ambitions are embedded across its entire product surface. Against this array of well-resourced competitors, Perplexity’s advantages are specific and worth understanding precisely.
First: model neutrality. Neither OpenAI nor Google will ever build a genuine orchestration layer that routes work to a competitor’s model. Perplexity has no such constraint. Its Computer agent already orchestrates Claude, GPT, Gemini, Grok, and others simultaneously. For enterprises that want best-of-breed reasoning rather than vendor lock-in, that neutrality is structurally valuable.
Second: search heritage. Perplexity was already serving about 30 million monthly users and processing 780 million queries as of May 2025 — with more than 20% month-over-month growth — feeding a data flywheel that sharpens search relevance and agent targeting. Sacra Every query is a training signal. An agent that understands how real professionals actually search has a compounding advantage over agents that are parachuted in from a model laboratory.
Third: distribution velocity. Sacra projected Perplexity would reach $656 million in ARR by the end of 2026 Sacra — a target that now looks not just achievable but potentially conservative, given the March surge to $450 million. The question is no longer whether Perplexity can scale. It is whether it can maintain pricing power as competitors intensify.
The Publisher Dimension: A Redistribution of Value Worth Watching
One underreported dimension of the Perplexity story is its relationship with the media and publishing ecosystem — a relationship that has been contentious, but is evolving in ways that may prove prescient.
Publishers have, with some justification, worried that AI search engines extract the value of their journalism without adequately compensating them. Perplexity has responded with a revenue-sharing program and formal content partnerships, signaling an intent to build an ecosystem rather than simply scrape one.
Perplexity announced a $42.5 million fund to share AI search revenue with publishers, reflecting an investment in ecosystem partnerships. If agentic AI becomes the dominant interface through which people consume information and execute tasks, the entity that controls the citation layer — the sourcing infrastructure of AI outputs — will hold extraordinary leverage. Perplexity is positioning itself as that steward.
This is an audacious bet. It may also be a necessary one. A sustainable AI search economy requires content creators to keep creating. A company that figures out how to share value equitably with its content suppliers will have a structural advantage over one that treats the web as a free resource.
The Risks That the Revenue Surge Cannot Hide
Intellectual honesty demands acknowledging what the $450 million figure does not tell us.
The credit-based pricing model, while economically elegant, introduces revenue variability that flat subscriptions do not. Perplexity has not published a per-task credit conversion table — there is no page that says a research task costs X credits, making budgeting difficult for heavy users. Trysliq At the enterprise level, opacity in pricing is a trust problem. CFOs who cannot model their AI spend will negotiate hard caps or find vendors who offer predictability.
There is also the trust question that underlies Perplexity’s entire enterprise push. The company is three years old and asking chief information security officers to route sensitive Snowflake data, legal contracts, and proprietary business intelligence through its platform. VentureBeat In highly regulated industries — finance, healthcare, law — that ask may be a bridge too far in 2026, regardless of the technology’s capability.
And then there is the litigation risk. Amazon filed suit against Perplexity on November 4, 2025, over the startup’s agentic shopping features in the Comet browser, arguing that automated agents must identify themselves and comply with site rules. Sacra As agents begin operating across the open web at scale, the legal frameworks governing their behaviour are still being written. The company moving fastest is also the one most exposed to adverse precedent.
The Bigger Question: Is This the Moment AI Agents Become the New Interface?
Strip away the funding rounds, the valuation multiples, and the competitive posturing, and the Perplexity story is really about a single hypothesis: that the next dominant interface for human-computer interaction will not be a search box, a browser, or a chat window. It will be a goal.
You describe an outcome. The agent handles everything else.
A February 2026 survey by CrewAI found that 100% of surveyed enterprises plan to expand their use of agentic AI this year, with 65% already using AI agents in production and organizations reporting they have automated an average of 31% of their workflows. Fortune Business Insights projects the global agentic AI market will grow from $9.14 billion in 2026 to $139 billion by 2034. VentureBeat
Those numbers should not be taken as gospel — market projection firms have a well-documented tendency to extrapolate peak enthusiasm into hockey-stick curves. But the directional signal is clear. Enterprises are not experimenting with agents. They are deploying them.
Perplexity’s 50% monthly revenue jump is, on one reading, a company hitting a product-market fit inflection point. On a larger reading, it is a leading indicator of an industry-wide shift in how organizations will structure cognitive work. When knowledge workers stop searching and start delegating, the companies that built the infrastructure for that delegation will be worth considerably more than their current valuations suggest.
A Quotable Close
The history of technology is punctuated by moments when a product category collapses into a feature — and a feature expands into a platform. The search box was a feature of the browser. The browser became a platform for the web. The web became the substrate for the cloud.
Aravind Srinivas is betting that the agent layer will perform the same architectural alchemy: absorbing search, absorbing browsers, absorbing the application stack above them, and emerging as the new interface through which people and organizations interact with information, services, and each other.
A 50% monthly revenue jump to $450 million is not proof that he is right. But it is the most compelling evidence yet that the bet is live — and that the clock, for every company that still depends on attention as its primary product, has started.
The next billion-dollar question in technology is not “who builds the best AI model?” It is “who builds the best layer between the human and all the models?” Perplexity, right now, has the most credible answer.