The Price of Algorithmic War: How AI Became the New Dynamite in the Middle East
The Iran conflict has turned frontier AI models into contested weapons of state — and the financial and human fallout is only beginning to register.
In the first eleven days of the U.S.-Israeli offensive against Iran, which began on February 28, 2026, American and Israeli forces executed roughly 5,500 strikes on Iranian targets — an operational tempo that would have required months in any previous conflict, made possible in significant part by artificial intelligence deployed on the battlefield at unprecedented scale. The National The same week those bombs fell, a legal and commercial crisis erupted in Silicon Valley with consequences that will define the AI industry for years. Both events are part of the same story.
We are living through the moment when AI ceased being a future-war thought experiment and became an operational reality — embedded in targeting pipelines, shaping intelligence assessments, and now at the center of a constitutional showdown between a frontier AI company and the United States government. Alfred Nobel, who invented dynamite and then spent the remainder of his life in tortured ambivalence about it, would have recognized the pattern immediately.
The Kill Chain, Accelerated
The joint U.S. and Israeli offensive on Iran revealed how algorithm-based targeting and data-driven intelligence are reshaping the mechanics of warfare. In the first twelve hours alone, U.S. and Israeli forces reportedly carried out nearly 900 strikes on Iranian targets — an operational tempo that would have taken days or even weeks in earlier conflicts. Interesting Engineering
At the technological center of this acceleration sits a system most Americans have never heard of: Project Maven. Anthropic’s Claude has become a crucial component of Palantir’s Maven intelligence analysis program, which was also used in the U.S. operation to capture Venezuelan President Nicolás Maduro. Claude is used to help military analysts sort through intelligence and does not directly provide targeting advice, according to a person with knowledge of Anthropic’s work with the Defense Department. NBC News This is a distinction with genuine moral weight — between decision-support and decision-making — but one that is becoming harder to sustain at the speed at which modern targeting now operates.
Critics warn that this trend could compress decision timelines to levels where human judgment is marginalized, ushering in an era of warfare conducted at what has been described as “faster than the speed of thought.” This shortening interval raises fears that human experts may end up merely approving recommendations generated by algorithms. In an environment dictated by speed and automation, the space for hesitation, dissent, or moral restraint may be shrinking just as quickly. Interesting Engineering
The U.S. military’s posture has been notably sanguine about these concerns. Admiral Brad Cooper, head of U.S. Central Command, confirmed that AI is helping soldiers process troves of data, stressing that humans make final targeting decisions — but critics note the gap between that principle and verifiable practice remains wide. Al Jazeera
The Financial Architecture of AI Warfare
The economic dimensions of this transformation are substantial and largely unreported in their full complexity. Understanding them requires holding three separate financial narratives simultaneously.
The direct contract market is the most visible layer. Over the past year, the U.S. Department of Defense signed agreements worth up to $200 million each with several major AI companies, including Anthropic, OpenAI, and Google. CNBC These are not trivial sums in isolation, but they represent the seed capital of a much larger transformation. The military AI market is projected to reach $28.67 billion by 2030, as the speed of military decision-making begins to surpass human cognitive capacity. Emirates 24|7
The collateral economic disruption is less discussed but potentially far larger. On March 1, Iranian drone strikes took out three Amazon Web Services facilities in the Middle East — two in the UAE and one in Bahrain — in what appear to be the first publicly confirmed military attacks on a hyperscale cloud provider. The strikes devastated cloud availability across the region, affecting banks, online payment platforms, and ride-hailing services, with some effects felt by AWS users worldwide. The Motley Fool The IRGC cited the data centers’ support for U.S. military and intelligence networks as justification. This represents a strategic escalation that no risk-management framework in the technology sector adequately anticipated: cloud infrastructure as a legitimate military target.
The reputational and legal costs of AI’s battlefield role may ultimately dwarf both. Anthropic’s court filings stated that the Pentagon’s supply-chain designation could cut the company’s 2026 revenue by several billion dollars and harm its reputation with enterprise clients. A single partner with a multi-million-dollar contract has already switched from Claude to a competing system, eliminating a potential revenue pipeline worth more than $100 million. Negotiations with financial institutions worth approximately $180 million combined have also been disrupted. Itp
The Anthropic-Pentagon Fracture: A Defining Test
The dispute between Anthropic and the U.S. Department of Defense is not merely a contract negotiation gone wrong. It is the first high-profile case in which a frontier AI company drew a public ethical line — and then watched the government attempt to destroy it for doing so.
The sequence of events is now well documented. The administration’s decisions capped an acrimonious dispute, within a military contract worth up to $200 million, over whether Anthropic could prohibit its tools from being used for mass surveillance of American citizens or to power autonomous weapon systems. Anthropic said it had tried in good faith to reach an agreement, making clear it supported all lawful uses of AI for national security aside from those two narrow exceptions. NPR
When Anthropic held its position, the response was unprecedented in the annals of U.S. technology policy. Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in a statement so broad that it can only be seen as a power play aimed at destroying the company. Shortly thereafter, OpenAI announced it had reached its own deal with the Pentagon, claiming it had secured all the safety terms that Anthropic sought, plus additional guardrails. Council on Foreign Relations
In an extraordinary move, the Pentagon designated Anthropic a supply chain risk — a label historically applied only to foreign adversaries. The designation would require defense vendors and contractors to certify that they don’t use the company’s models in their work with the Pentagon. CNBC That this was applied to a U.S.-headquartered company, founded by former employees of a U.S. nonprofit, and valued at $380 billion, represents a remarkable inversion of the logic the designation was designed to serve.
Meanwhile, Washington was attacking an American frontier AI leader while Chinese labs were on a tear. In the past month alone, five major Chinese models dropped: Alibaba’s Qwen 3.5, Zhipu AI’s GLM-5, MiniMax’s M2.5, ByteDance’s Doubao 2.0, and Moonshot’s Kimi K2.5. Council on Foreign Relations The geopolitical irony is not subtle: in punishing a safety-focused American AI company, the administration may have handed Beijing its most useful competitive gift of the year.
The Human Cost: Social Ramifications No Algorithm Can Compute
Against the financial ledger, the humanitarian accounting is staggering and still incomplete.
The Iranian Red Crescent Society reported that the U.S.-Israeli bombardment campaign damaged nearly 20,000 civilian buildings and 77 healthcare facilities. Strikes also hit oil depots, several street markets, sports venues, schools, and a water desalination plant, according to Iranian officials. Al Jazeera
The case that has attracted the most scrutiny is the bombing of the Shajareh Tayyebeh elementary school in Minab, southern Iran. A strike on the school in the early hours of February 28 killed more than 170 people, most of them children. More than 120 Democratic members of Congress wrote to Defense Secretary Hegseth demanding answers, citing preliminary findings that outdated intelligence may have been to blame for selecting the target. NBC News
The potential connection to AI decision-support systems is explored with forensic precision by experts at the Bulletin of the Atomic Scientists. One analysis notes that the mistargeting could have stemmed from an AI system with access to old intelligence — satellite data that predated the conversion of an IRGC compound into an active school — and that such temporal reasoning failures are a known weakness of large language models. Even with humans nominally “in the loop,” people frequently defer to algorithmic outputs without careful independent examination. Bulletin of the Atomic Scientists
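To make that failure mode concrete, consider what a basic data-freshness gate in a decision-support pipeline might look like. The sketch below is purely hypothetical — the field names, the 30-day threshold, and the sample items are illustrative assumptions, not a description of any fielded system — but it shows the kind of check that would force stale intelligence back to a human analyst rather than into an automated recommendation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical illustration only: the threshold and field names are assumptions.
MAX_INTEL_AGE = timedelta(days=30)

@dataclass
class IntelItem:
    source: str            # e.g. "satellite imagery", "open-source report"
    collected_at: datetime
    summary: str

def freshness_gate(items: list[IntelItem], now: datetime | None = None) -> dict:
    """Split intelligence into usable vs. stale; any stale input blocks the recommendation."""
    now = now or datetime.now(timezone.utc)
    fresh = [i for i in items if now - i.collected_at <= MAX_INTEL_AGE]
    stale = [i for i in items if now - i.collected_at > MAX_INTEL_AGE]
    return {
        "usable": fresh,
        "requires_human_review": stale,
        # If any input is stale, the recommendation is flagged rather than
        # silently produced from an outdated picture of the ground.
        "recommendation_blocked": len(stale) > 0,
    }

if __name__ == "__main__":
    items = [
        IntelItem("satellite imagery", datetime(2025, 6, 1, tzinfo=timezone.utc),
                  "compound observed in use by armed personnel"),
        IntelItem("open-source report", datetime(2026, 2, 20, tzinfo=timezone.utc),
                  "building now operating as an elementary school"),
    ]
    print(freshness_gate(items)["recommendation_blocked"])  # True: stale imagery forces review
```

The point of the sketch is not that such a check is hard to write — it is trivially easy — but that nothing in current law or procurement practice requires it, or requires anyone to audit whether it exists.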
The social fallout extends well beyond individual atrocities. Israel’s Lavender AI-powered database, used to analyze surveillance data and identify potential targets in Gaza, was wrong at least 10 percent of the time, resulting in thousands of civilian casualties. A recent study found that AI models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 percent of cases. Rest of World The simulation result does not predict real-world behavior, but it reveals how strategic reasoning models can default toward extreme outcomes under pressure — a finding that ought to unsettle anyone who imagines that algorithmic warfare is inherently more precise than the human kind.
The corrosion of accountability is perhaps the most insidious long-term social effect. “There is no evidence that AI lowers civilian deaths or wrongful targeting decisions — and it may be that the opposite is true,” says Craig Jones, a political geographer at Newcastle University who researches military targeting. Nature Yet the speed and opacity of AI-assisted operations makes it exponentially harder to assign responsibility when things go wrong. Algorithms do not face courts-martial.
Governance: The International Gap
Rapid technological development is outpacing slow international discussions. Academics and legal experts meeting in Geneva in March 2026 to discuss lethal autonomous weapons systems found themselves studying a technology already being used at scale in active conflicts. Nature The gap between the pace of deployment and the pace of governance has never been wider.
The Middle East and North Africa are arguably the most conflict-ridden and militarized regions in the world, with four out of eleven “extreme conflicts” identified in 2024 by the Armed Conflict Location and Event Data organization occurring there. The region has become a testing ground for AI warfare whose lessons — and whose errors — will shape every future conflict. War on the Rocks
The legal framework governing AI in warfare remains, generously described, aspirational. The U.S. military’s stated commitment to keeping “humans in the loop” is a principle that has no internationally binding enforcement mechanism, no agreed definition of what meaningful human control actually entails, and no independent auditing process. One expert observed that the biggest danger with AI is when humans treat it as an all-purpose solution rather than something that can speed up specific processes — and that this habit of over-reliance is particularly lethal in a military context. The National
AI as the New Dynamite: Nobel’s Unresolved Legacy
When Alfred Nobel invented dynamite in 1867, he believed — genuinely — that a weapon so devastatingly efficient would make war unthinkably costly and therefore rare. He was catastrophically wrong. The Franco-Prussian War, the First World War, and the industrial-scale atrocities that followed proved that more powerful weapons do not deter wars; they escalate them, and they increase civilian deaths relative to combatant casualties.
The parallel to AI is not decorative. The argument for AI in warfare — that algorithmic precision reduces collateral damage, that faster targeting shortens conflicts, that autonomous systems absorb military risk that would otherwise fall on human soldiers — is structurally identical to Nobel’s argument for dynamite. It is the rationalization of a dual-use technology by those with an interest in its proliferation.
Drone technology in the Middle East has already shifted from manual control toward full autonomy, with “kamikaze” drones utilizing computer vision to strike targets independently if communications are severed. As AI becomes more integrated into militaries, the advancements will become even more pronounced with “unpredictable, risky, and lethal consequences,” according to Steve Feldstein, a senior fellow at the Carnegie Endowment for International Peace. Rest of World
The Anthropic dispute, whatever its ultimate legal resolution, has surfaced a question that Silicon Valley has been able to defer until now: can a technology company that builds frontier AI models — systems capable of synthesizing intelligence, generating targeting assessments, and running strategic simulations — genuinely control how those systems are used once deployed by a state? As OpenAI’s own FAQ acknowledged when asked what would happen if the government violated its contract terms: “As with any contract, we could terminate it.” The entire edifice of AI safety in warfare, for now, rests on the contractual leverage of companies that have already agreed to participate. Council on Foreign Relations
Nobel at least had the decency to endow prizes. The AI industry is still working out what it owes.
Policy Recommendations
A minimally adequate governance framework for AI in warfare would need to accomplish several things. Independent verification of “human in the loop” claims — not merely the assertion of it — is the essential starting point. Mandatory after-action reporting on AI involvement in any strike that results in civilian casualties would create accountability where none currently exists. International agreement on a baseline error-rate threshold — above which AI targeting systems may not be used without additional human review — would translate abstract humanitarian law into operational reality.
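As a worked illustration of what independent verification of such a threshold could mean in practice, the sketch below — all numbers hypothetical — uses a simple exact binomial tail probability to ask whether an audited error count is statistically consistent with a declared error-rate ceiling, rather than taking a vendor's claim on trust.

```python
from math import comb

def prob_errors_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more errors
    in n audited strike recommendations if the true error rate were exactly p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical audit: 200 AI-assisted recommendations reviewed, 31 found to
# involve a targeting error; the declared error-rate ceiling is 10%.
n_audited, n_errors, declared_ceiling = 200, 31, 0.10

p_value = prob_errors_at_least(n_errors, n_audited, declared_ceiling)
print(f"P(>= {n_errors} errors | true rate {declared_ceiling:.0%}) = {p_value:.4f}")
# A small p-value means the observed errors are implausible under the claimed
# ceiling, so the system would fail the threshold and trigger additional human review.
```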
The technology companies themselves bear responsibility that no contract clause can fully discharge. Researchers from OpenAI, Google DeepMind, and other labs submitted a court filing supporting Anthropic’s position, arguing that restrictions on domestic surveillance and autonomous weapons are reasonable until stronger legal safeguards are established. ColombiaOne That the most capable AI builders in the world believe their own technology is not yet reliable enough for autonomous lethal use is information that should be at the center of every policy debate — not buried in court filings.
Global AI Regulation UN 2026: Why the World Needs an Oversight Body Now
The machines are already choosing who dies. The question is whether humanity will choose to stop them.
In the early weeks of Israel’s military campaign in Gaza, a targeting system called Lavender quietly changed the nature of modern warfare. The Israeli army marked tens of thousands of Gazans as suspects for assassination using an AI targeting system with limited human oversight and a permissive policy for civilian casualties. +972 Magazine Israeli intelligence officials acknowledged an error rate of around 10 percent — but simply priced it in, deeming 15 to 20 civilian deaths acceptable for every junior militant the algorithm identified, and over 100 for commanders. CIVICUS LENS The machine, according to one Israeli intelligence officer cited in the original +972 Magazine investigation, “did it coldly.”
This is not a hypothetical future threat. This is 2026. And this is why global AI regulation under the United Nations — a binding, enforceable, internationally backed governance platform — is no longer a matter of philosophical debate. It is the defining policy emergency of our era.
Why the Global AI Regulation UN Framework Is the Most Urgent Issue of 2026
When historians eventually write the account of humanity’s encounter with artificial intelligence, they will mark 2026 as the year the world stood at the threshold and hesitated. UN Secretary-General António Guterres affirmed in early February 2026: “AI is moving at the speed of light. No country can see the full picture alone. We need shared understandings to build effective guardrails, unlock innovation for the common good, and foster cooperation.” United Nations Foundation
That statement, measured and diplomatic in tone, barely captures the urgency on the ground. From the rubble of Gaza to the drone corridors above eastern Ukraine, algorithmic warfare has become normalized with terrifying speed. The Future of Life Institute now tracks approximately 200 autonomous weapons systems deployed across Ukraine, the Middle East, and Africa Globaleducationnews — the majority operating in legal and regulatory voids that no international treaty has yet filled.
Meanwhile, the governance architecture intended to respond to this moment remains fragile and fragmented. Just seven countries — all from the developed world — are parties to all current significant global AI governance initiatives, according to the UN. World Economic Forum A full 118 member states have no meaningful seat at the table where the rules of AI are being written. This is not merely inequitable; it is dangerous. The technologies being deployed against human populations are outrunning the institutions designed to constrain them.
The Lethal Reality: AI Warfare and Human Safety in the Middle East
The Gaza conflict has provided the world its most documented and disturbing window into what AI warfare looks like when accountability is stripped away. Israel’s AI tools include the Gospel, which automatically reviews surveillance data to recommend bombing targets, and Lavender, an AI-powered database that listed tens of thousands of Palestinian men linked by algorithm to Hamas or Palestinian Islamic Jihad. Wikipedia Critics across the spectrum of international law have argued that the use of these systems blurs accountability and results in disproportionate violence in violation of international humanitarian law.
Evidence recorded in the classified Israeli military database in May 2025 revealed that only 17% of the 53,000 Palestinians killed in Gaza were combatants — implying that 83% were civilians. Action on Armed Violence That figure, if accurate, represents one of the highest civilian death rates in modern recorded warfare, and it emerges directly from the logic of algorithmic targeting: speed over deliberation, efficiency over ethics, statistical probability over the irreducible humanity of each individual life.
Many operators trusted Lavender so much that they approved its targets without checking them SETA — a collapse of human oversight so complete that it renders the phrase “human-in-the-loop” meaningless in practice. UN Secretary-General Guterres stated that he was “deeply troubled” by reports of AI use in Gaza, warning that the practice puts civilians at risk and fundamentally blurs accountability.
This is not an isolated case study. Contemporary conflicts — in Gaza, Sudan, and Ukraine — have become “testing grounds” for the military use of new technologies. United Nations Slovenia’s President Nataša Pirc Musar, addressing the UN Security Council, put it with stark clarity: “Algorithms, armed drones and robots created by humans have no conscience. We cannot appeal to their mercy.”
The Accountability Void: Who Is Responsible When an Algorithm Kills?
The legal and moral vacuum at the center of AI warfare is not accidental — it is structural. Although autonomous weapons systems are making life-or-death decisions in conflicts without human intervention, no specific treaty regulates these new weapons. TRENDS Research & Advisory The foundational principles of international humanitarian law — distinction between combatants and civilians, proportionality, and precaution — were designed for human actors capable of judgment, hesitation, and moral reckoning. They were not designed for systems that process kill decisions in milliseconds.
Both international humanitarian law and international criminal law emphasize that serious violations must be punished to fulfil their purpose of deterrence. A “criminal responsibility gap” caused by AI would mean impunity for war crimes committed with the aid of advanced technology. Action on Armed Violence This is the nightmare scenario that legal scholars from Human Rights Watch to the International Committee of the Red Cross now warn about openly: not only that AI enables atrocities, but that it systematically destroys the chain of accountability that makes justice possible after them.
A 2019 Turkish Bayraktar drone strike in Libya created precisely this precedent: UN investigators could not determine whether the operator, manufacturer, or foreign advisors bore ultimate responsibility. TRENDS Research & Advisory That ambiguity, multiplied by the speed and scale of contemporary AI systems, represents an existential challenge to the international legal order.
The question “who is responsible when an algorithm kills?” cannot be answered under the current framework. And that is precisely why the current framework must be replaced.
The UN’s New Architecture: Promising, But Dangerously Insufficient
There are genuine signs that the international community understands what is at stake. The Global Dialogue on AI Governance will provide an inclusive platform within the United Nations for states and stakeholders to discuss the critical issues concerning AI facing humanity, with the Scientific Panel on AI serving as a bridge between cutting-edge AI research and policymaking — presenting annual reports at sessions in Geneva in July 2026 and New York in 2027. United Nations
The CCW Group of Experts’ rolling text from November 2024 outlines potential regulatory measures for lethal autonomous weapons systems, including ensuring they are predictable, reliable, and explainable; maintaining human oversight in morally significant decisions; restricting target types and operational scope; and enabling human operators to deactivate systems after activation. ASIL
Yet the gulf between these principles and enforceable reality remains vast. In November 2025, the UN General Assembly’s First Committee passed a historic resolution calling for negotiation of a legally binding LAWS agreement by 2026 — 156 nations supported it. Only five voted against, notably the United States and Russia. Usanas Foundation Their resistance sends a signal that is impossible to misread: the two largest military AI developers on earth are actively resisting the international constraints that the rest of the world is demanding.
By the end of 2026, the Global Dialogue will likely have made AI governance global in form but geopolitical in substance — a first test of whether international cooperation can meaningfully shape the future of AI or merely coexist alongside competing national strategies. Atlantic Council That assessment, from the Atlantic Council’s January 2026 analysis, should be understood as a warning, not a prediction to be accepted passively.
The Case for an IAEA-Style UN AI Governance Body
The most compelling model for meaningful global AI regulation under the UN has been circulating in serious policy circles for several years, and in February 2026 it gained its most prominent corporate advocate. At the international AI Impact Summit 2026 in New Delhi, OpenAI CEO Sam Altman called for a radical new format for global regulation of artificial intelligence — modeled after the International Atomic Energy Agency — arguing that “democratizing AI is the only fair and safe way forward, because centralizing technology in one company or country can have disastrous consequences.” Logos-pres
The IAEA analogy is instructive precisely because it addresses the core failure of current approaches: the absence of verification, inspection, and enforcement. An IAEA-like agency for AI could develop industry-wide safety standards and monitor stakeholders to assess whether those standards are being met — similar to how the IAEA monitors the distribution and use of uranium, conducting inspections to help ensure that non-nuclear weapon states don’t develop nuclear weapons. Lawfare
This proposal has been echoed and refined by researchers published in Nature, who draw a direct parallel: the IAEA’s standardized safety standards-setting approach and emergency response system offer valuable lessons for establishing AI safety regulations, with standardized safety standards providing a fundamental framework to ensure the stability and transparency of AI systems. Nature
Skeptics argue, with some justification, that achieving this level of cooperation in the current geopolitical climate is extraordinarily difficult. But consider the alternative. The 2026 deadline is increasingly seen as the “finish line” for global diplomacy; if a treaty is not reached, the speed of innovation in military AI driven by the very powers currently blocking the UN’s progress will likely make any future regulation obsolete before the ink is even dry. Usanas Foundation We are, in the language of arms control analysts, in the “pre-proliferation window” — the last viable moment before these systems become as ubiquitous and ungovernable as small arms.
EU AI Act Enforcement and the Patchwork Problem
The European Union has moved further than any other jurisdiction toward binding regulation. By 2026, the EU AI Act is partially in force, with obligations for general-purpose AI and prohibited AI practices already applying, and high-risk AI systems facing requirements for pre-deployment assessments, extensive documentation, post-market monitoring, and incident reporting. OneTrust This is meaningful progress. It is also deeply insufficient as a global solution.
According to Gartner, by 2030, fragmented AI regulation will quadruple and extend to 75% of the world’s economies — but organizations that have deployed AI governance platforms are currently 3.4 times more likely to achieve high effectiveness in AI governance than those that do not. Gartner That statistic reveals both the potential of structured governance and the cost of its absence.
The EU’s rules, however rigorous, apply within EU member states and to companies seeking EU market access. They do not reach the drone manufacturers of Turkey, the autonomous targeting systems of Israel, the Replicator program of the United States Pentagon, or the algorithmic weapons being developed at pace in Beijing. The International AI Safety Report 2026 notes that reliable pre-deployment safety testing has become harder to conduct, and it has become more common for models to distinguish between test settings and real-world deployment — meaning dangerous capabilities could go undetected before deployment. Internationalaisafetyreport In a military context, undetected dangerous capabilities do not result in regulatory fines. They result in mass civilian casualties.
Comprehensive global AI regulation under the United Nations must transcend this patchwork. The model cannot be voluntary principles and national strategies stitched together by hope. It must be treaty-based, inspection-backed, and enforceable — with particular urgency around military applications.
The Policy Architecture the World Needs
The outline of what a viable global AI regulation UN platform would require is not, in fact, mysterious. The intellectual groundwork has been laid. What is missing is political will, specifically from the three states — the United States, Russia, and China — whose cooperation is structurally indispensable.
A credible architecture would include, at minimum:
- A binding treaty on lethal autonomous weapons systems, prohibiting systems that cannot be used in compliance with international humanitarian law and mandating meaningful human oversight for all others. The UN Secretary-General has maintained since 2018 that lethal autonomous weapons systems are politically unacceptable and morally repugnant, reiterating in his New Agenda for Peace the call to conclude a legally binding instrument by 2026. UNODA
- An Independent International AI Agency modeled on the IAEA, with authority to develop safety standards, conduct inspections of frontier AI systems, and verify compliance — particularly for dual-use applications with military potential.
- Universal inclusion of the Global South, whose populations bear a disproportionate share of the consequences of algorithmic warfare and AI-enabled surveillance, yet remain largely absent from the forums where the rules are being written. Many countries of the Global South are notably absent from the UN’s experts group on autonomous weapons, despite the inevitable future global impact of these systems once they become cheap and accessible. Arms Control Association
- A standing accountability mechanism for AI-related violations of international humanitarian law, closing the “responsibility gap” that currently allows commanders to deflect culpability onto algorithms.
- Real-time AI risk monitoring and reporting, with annual assessments presented to the UN General Assembly — building on the model of the Independent International Scientific Panel on AI already authorized for its first report in Geneva in July 2026.
None of this is technically impossible. The scientific consensus exists. The legal frameworks are available. The moral case is overwhelming.
Conclusion: Global AI Regulation UN 2026 — The Last Clear Moment
The Greek Prime Minister, speaking at the UN Security Council’s open debate on AI, made a comparison that deserves to reverberate through every foreign ministry and defense establishment on earth: the world must rise to govern AI “as it once did for nuclear weapons and peacekeeping.” He warned that “malign actors are racing ahead in developing military AI capabilities” and urged the Council to rise to the occasion. United Nations
Humanity’s fate, as the UN Secretary-General has said plainly, cannot be left to an algorithm. But neither can it be left to voluntary declarations, aspirational principles, and annual dialogues that produce no binding obligation. The deadly deployment of AI in active conflicts has already raised existential concerns for human safety that cannot be wished away by appeals to innovation or national security prerogative.
The architecture for a genuine global AI regulation UN platform exists in skeletal form. The Geneva Dialogue, the Scientific Panel, the LAWS treaty negotiations — these are the bones of something that could actually work. What they require now is not more deliberation. They require the political courage of the world’s most powerful states to subordinate short-term strategic advantage to the longer-term survival of the rules-based international order — and, more fundamentally, to the survival of human dignity in the age of the algorithm.
The pre-proliferation window is closing. 2026 is not a deadline to be managed. It is a moral threshold to be met.
OpenAI Robotics Chief Caitlin Kalinowski Quits Over Pentagon Deal: A Matter of Principle
On the morning of Saturday, March 8, 2026, Caitlin Kalinowski — one of the most accomplished hardware engineers in Silicon Valley and, until that day, OpenAI’s head of robotics — posted a resignation letter that read less like a grievance and more like a brief filed before history. “This wasn’t an easy call,” she wrote on X and LinkedIn. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” A second post was more surgical: “My issue is that the announcement was rushed without the guardrails defined. It’s a governance concern first and foremost.” A third, perhaps aimed at those who suspected personal animosity toward colleagues or leadership, offered a quiet clarification: “This was about principle, not people.”
In the compressed, often performative world of tech resignations, these three statements were remarkable for what they were not: they were not vague, not self-promotional, and not hedged. The OpenAI Pentagon deal — announced roughly a week earlier amid the wreckage of Anthropic’s collapse from government favor — had acquired its most credible internal critic. The question, for investors, policymakers, and the millions who have handed their most intimate intellectual tasks to ChatGPT, is what happens next.
The Backdrop: Why Anthropic Said No and OpenAI Said Yes
To understand why Caitlin Kalinowski quit, you first need to understand why Anthropic effectively lost its seat at the table.
In late February 2026, the Trump administration moved to designate Anthropic as a “supply-chain risk” after the company refused to remove safety constraints from AI systems being evaluated for Pentagon deployment. The designation — extraordinary in its scope — effectively barred Anthropic from key federal procurement channels and sent a chill through the broader AI safety community. The Economist reported that Anthropic’s chief executive had offered a public apology for language critical of the Pentagon’s approach, while simultaneously filing suit to contest the supply-chain designation — a posture that satisfied no one cleanly but illustrated the profound bind facing any AI company that takes its own safety commitments seriously in a Washington now hungry for deployable capability.
OpenAI moved with speed. Within days of the Anthropic fallout becoming public, the company announced an agreement to deploy AI systems — including models built on the GPT-4 architecture — on classified Department of Defense networks. The deal, as presented, included a set of claimed “red lines”: no use for domestic surveillance of American citizens without judicial oversight, and no deployment in autonomous lethal decision-making without explicit human authorization. These commitments were described as contractually enforceable and backed by technical safeguards. Reuters confirmed the structure of the agreement on March 7, noting that OpenAI had made internal commitments about the scope of permitted use cases.
The problem, as Kalinowski’s exit would make clear, was not the destination — it was the journey, and whether sufficient architecture had been built along the way.
Kalinowski’s Stand: From Meta AR to OpenAI Robotics — A Line in the Sand
Caitlin Kalinowski was not a peripheral figure at OpenAI. She had been recruited in November 2024 from Meta, where she had served as the lead hardware engineer for Project Orion — Meta’s most ambitious augmented reality effort and, by most technical assessments, the most sophisticated AR device yet produced by a major tech company. Her hiring was seen as a signal that OpenAI was serious about the physical layer of AI: robots, sensors, embodied intelligence, hardware that could operate in the real world rather than the controlled environment of a data center.
For someone in that role, the Pentagon partnership was not abstract. Robotics and hardware sit precisely at the intersection where AI meets the physical domain — which is to say, precisely where the most consequential questions about lethal autonomy and surveillance hardware arise. Unlike a software engineer working on a language model far removed from physical deployment, Kalinowski’s domain was the place where the rubber, quite literally, meets the road.
TechCrunch’s detailed reconstruction of events suggests that internal deliberations about the Pentagon deal’s scope were truncated — that the timeline was driven by the political opportunity created by Anthropic’s exclusion rather than by a mature internal governance process. Whether that account is entirely accurate is difficult to verify from the outside. What is verifiable is that Sam Altman himself subsequently acknowledged the rollout had been “opportunistic and sloppy,” and that the company moved to amend its terms following the announcement — a remarkable concession that validated, at minimum, the procedural objection at the heart of Kalinowski’s departure.
That amended framework, as the Financial Times reported, attempted to more precisely delineate the scope of permissible military use and to establish clearer governance mechanisms. Critics — including some who did not share Kalinowski’s decision to resign — noted that the amendments came after, not before, the public announcement: a sequencing that undermined the credibility of the original process.
The Economic and Geopolitical Stakes
The Sam Altman Pentagon deal controversy arrives at a moment of extraordinary financial and strategic sensitivity for OpenAI. The company’s most recent private valuation exceeded $150 billion, a figure premised not simply on its current revenue but on a projected future in which OpenAI becomes foundational infrastructure for both the private economy and, increasingly, the national security apparatus. Defense-tech investment in the US has surged since 2022; the convergence of frontier AI capability with DoD contracting is now a central axis of Silicon Valley’s growth narrative.
The economics of the Pentagon deal, properly understood, are attractive. Government contracts offer revenue stability that consumer subscriptions do not; classified deployments command premium pricing; and a sustained DoD relationship confers a strategic moat against competitors — including international ones — that money alone cannot buy. Seen through that lens, the decision to pursue the partnership is commercially rational.
But the consumer dimension is where the math becomes more complicated. Fortune’s analysis noted that ChatGPT uninstalls in the US surged by 295% in the week following the Pentagon announcement — a figure that, if sustained even partially, represents a meaningful threat to the subscription revenue base that currently underpins OpenAI’s operating economics. Simultaneously, Claude — Anthropic’s flagship product — rose to the top two positions in the US App Store, a direct beneficiary of the perception, however imperfectly calibrated, that it represents a more principled alternative.
This dynamic illuminates a tension that will define AI’s next chapter: the revenue logic of government partnerships and the trust logic of consumer adoption do not always point in the same direction. OpenAI is now navigating both simultaneously, with the credibility cost of the governance misstep weighing on both.
Geopolitically, the stakes extend well beyond OpenAI’s balance sheet. The United States’ ability to project technological leadership — and to persuade democratic allies that American AI is the right foundation for their own defense and economic infrastructure — depends in part on the perception that US AI development operates within a comprehensible, principled framework. A high-profile resignation by a senior AI executive citing surveillance and lethal autonomy concerns is precisely the kind of signal that adversaries amplify and allies register with discomfort. Beijing’s AI governance narrative — that American AI is militarized, ungoverned, and therefore unsafe for partner nations — receives unintended reinforcement when the governance critiques come from inside the house.
The implications for the US-China AI competition are layered. China’s state-aligned AI development model faces its own credibility constraints with potential partners in the Global South and among non-aligned democracies. But every governance stumble on the American side narrows the differentiation. The OpenAI military AI deal ethics debate is, in this sense, not merely a domestic regulatory question — it is a soft-power variable in a competition that will run for decades.
The Governance Failure at the Center of It All
It is worth being precise about what Kalinowski did and did not say. She did not argue that AI has no role in national security — she said explicitly the opposite. She did not claim that the deal’s stated red lines were illegitimate. What she argued, with notable precision, was that the process was broken: that the guardrails had not been defined before the announcement was made, and that deliberation had been sacrificed to speed.
This is a governance critique, not an ideological one — and it is, arguably, the harder critique to dismiss. An ideological objection to military AI can be engaged with on policy grounds. A process objection, particularly when corroborated by the CEO’s own admission that the rollout was “sloppy,” points to institutional dysfunction of a different and more consequential kind.
The question it raises is structural: does OpenAI — or any frontier AI company operating at this scale and velocity — have governance mechanisms capable of handling the decisions now being placed before it? The company’s board was restructured in late 2023 following the brief and chaotic dismissal of Sam Altman; it has since been reconstituted with a stronger commercial orientation and reduced representation of the safety-first voices that originally dominated it. Whether that reconstituted board is equipped to deliberate with appropriate rigor on questions of surveillance, lethal autonomy, and classified military deployment is a question that regulators in Brussels, London, and Washington are now, quietly, asking.
The European Union’s AI Act, which entered its enforcement phase in 2025, contains explicit provisions on high-risk AI uses — provisions that may bear on the contractual structures OpenAI is now building with the DoD. UK regulators, operating under a principles-based framework rather than the EU’s rules-based approach, have been watching the American developments with a mixture of concern and, one suspects, a measure of competitive calculation. If US AI governance appears compromised, the argument for European regulatory leadership becomes stronger — and European AI champions benefit accordingly.
What Happens Next
Several trajectories are now in play simultaneously, and the interactions between them will shape not just OpenAI’s future but the broader architecture of AI governance.
Inside OpenAI, the Kalinowski resignation will accelerate an internal reckoning that was already underway. The company will face pressure — from remaining senior technical staff, from its investors, and from the amended Pentagon framework itself — to build genuine governance infrastructure rather than contractual scaffolding. Whether that means reinstating a more powerful safety function, establishing an independent oversight board with real authority over defense-related deployments, or something more novel remains to be seen. What is clear is that the talent-retention argument for getting this right is now materially stronger: engineers of Kalinowski’s caliber do not leave quietly, and her departure will be a reference point in every recruiting conversation the company has with senior hardware and robotics talent for the foreseeable future.
For the Pentagon, the episode underscores that procurement speed and governance adequacy are not the same thing. The DoD has a long and often uncomfortable history of deploying technologies — from predictive policing algorithms to drone targeting systems — before the ethical and legal frameworks have caught up. The amended OpenAI agreement represents an opportunity to establish a more rigorous template, but only if its terms carry genuine enforcement teeth rather than serving as public relations scaffolding.
For Anthropic, the short-term consumer gains are real but precarious. Rising to the top of the App Store on the strength of a competitor’s stumble is a brittle form of growth; sustaining that position will require Anthropic to demonstrate not just principled postures but capable products. The supply-chain risk designation also remains unresolved: the company’s legal challenge to its federal designation is pending, and its outcome will determine whether Anthropic can eventually re-enter the defense market on its own terms — or whether it becomes, by exclusion if not by choice, the AI company that the US government declined to include.
For global AI regulation, the episode has provided a concrete and high-profile case study that will inform legislative debates from Brussels to Tokyo. The argument that voluntary self-governance by frontier AI companies is adequate has been meaningfully weakened — not by an external critic but by the resignation of one of those companies’ own senior executives, citing the inadequacy of internal deliberation.
Caitlin Kalinowski’s three posts on the morning of March 8 were short. Their implications are not. In resigning over what she called a governance concern rather than a personal grievance, she has done something that critics and regulators have struggled to do from the outside: she has placed the question of how these decisions get made — not merely what decisions get made — at the center of the debate. In an industry where process is usually treated as a means to an end, that reframing may prove to be the most consequential thing she has done at OpenAI, and she did it on her way out the door.
Samsung’s AI Deals Target Apple’s Smartphone Lead
On a Tuesday evening in late February, a short post on Perplexity AI’s official changelog quietly announced the end of one era and the opening of another. The entry read: “Samsung’s Galaxy S26 is the first smartphone to integrate Perplexity’s APIs at the platform level. Bixby now uses Perplexity for real-time web search and advanced reasoning.” It ran to five bullet points. It was, by the understated conventions of developer documentation, one of the more consequential product announcements of 2026.
That integration — combined with the continued deep presence of Google Gemini across the Galaxy ecosystem and Samsung’s stated ambition to embed Galaxy AI into 800 million devices by December — crystallizes the strategic logic now driving the world’s largest smartphone maker. Samsung’s pursuit of Samsung AI deals is not a marketing exercise. It is a wholesale architectural bet: that the smartphone of the mid-2020s should function less like a single-vendor appliance and more like a fluid, open intelligence platform. The company that once trailed Apple on software coherence is now daring to redefine what smartphone software means.
“With 800 million Galaxy AI devices in its sights, a freshly inked partnership with Perplexity, and a multi-agent Galaxy S26 that hosts three AI engines simultaneously, Samsung is waging the most structurally ambitious challenge to Apple’s premium smartphone dominance in a decade — and betting that plurality, not purity, wins the intelligence era.”
The Scale Play: 800 Million and the Democratisation of AI
In January, Samsung’s new co-CEO T.M. Roh — who assumed the role in November 2025 — gave his first major press interview to Reuters, and he did not reach for nuance. “We will apply AI to all products, all functions, and all services as quickly as possible,” he said. The company had shipped Galaxy AI features to approximately 400 million mobile devices in 2025. The 2026 target is exactly double: 800 million smartphones, tablets, wearables, televisions and home appliances — a footprint that would, at a stroke, make Samsung the single largest distribution channel for consumer-facing generative AI anywhere on earth.
The internal evidence for this ambition is striking. Samsung’s own research shows that Galaxy AI brand awareness among its user base jumped from 30% to 80% in a single year — a pace of consumer adoption that, under normal conditions, takes half a decade. Among the features driving that recognition: real-time translation, generative image editing, voice transcription, and an overhauled search layer that surfaces results without requiring the user to open a browser. The raw numbers carry weight, but the direction matters more. AI is no longer a premium add-on on Samsung devices. It is being embedded as a default environmental layer, present in the background of everyday interactions whether the user invokes it explicitly or not.
Smartphone Market Snapshot — Q4 2025 / 2026 Forecast
| Metric | Figure | Source |
|---|---|---|
| Apple global market share, 2025 | 20% — #1 worldwide | Counterpoint Research |
| Apple iPhone units shipped, full-year 2025 | 247 million — a record | IDC |
| Expected global smartphone shipment change, 2026 | –12.9% | IDC, March 2026 revision |
| Projected 2026 smartphone market value | $579 billion — a record high | IDC |
| Samsung share of foldable market, Q3 2025 | ~66% | Counterpoint Research |
| Forecast average smartphone selling price, 2026 | $465 — up sharply on memory costs | IDC |
That context matters because 2026 is not a comfortable year in which to execute a volume ambition. IDC’s March 2026 market intelligence update revised the global shipment forecast to a decline of nearly 13% year-on-year — the steepest contraction in more than a decade, driven by what the firm’s vice president Francisco Jeronimo called “a tsunami-like shock originating in the memory supply chain.” The irony is acute: the same AI infrastructure buildout that Samsung is riding as a strategic tailwind is simultaneously squeezing memory supply, driving up component costs, and threatening to price mid-range Android devices out of reach for consumers in precisely the emerging markets where Samsung’s volume base is concentrated.
T.M. Roh acknowledged as much, telling Reuters that price increases were “inevitable” from the memory squeeze. Yet the long-term logic of the 800 million target may survive the short-term margin pain. Counterpoint Research’s Tarun Pathak noted that while the supply crunch would weigh on shipments, “Apple and Samsung are likely to remain resilient” given their supply-chain scale and premium-market exposure. In a contracting market, the strongest brands capture share. Samsung is making sure its brand is now, explicitly, an AI brand.
The Multi-Model Wager: Gemini, Perplexity, and the Open Ecosystem
The strategic heart of Samsung’s 2026 proposition arrived with the Galaxy S26, unveiled at Galaxy Unpacked on February 25. The device is the world’s first to run three independent, system-level AI agents simultaneously: Google Gemini, Samsung’s revamped Bixby, and now, via a partnership formally announced on February 21, Perplexity — accessible through the wake phrase “Hey Plex” or a long-press of the side button. Each agent has direct, OS-level permissions to interact with native Samsung applications including Notes, Calendar, Gallery, Clock and Reminders.
“Galaxy AI acts as an orchestrator, bringing together different forms of AI into a single, natural, cohesive experience.”
— Won-Joon Choi, President and COO, Samsung Mobile eXperience Business (Samsung Newsroom, February 2026)
The Perplexity integration is qualitatively different from a typical app pre-installation. As Dmitry Shevelenko, Perplexity’s Chief Business Officer, explained to Android Headlines, the Galaxy S26 marks the first time a non-Google entity has received OS-level access on a Samsung device — a structural concession Samsung would not have considered three years ago. Perplexity’s Sonar API now powers Bixby’s search backend; even users who never consciously interact with Perplexity are, in a sense, using it every time they ask Bixby a factual question that requires real-time web reasoning. Perplexity’s own changelog confirmed the integration shipped on February 27.
The philosophical departure from Silicon Valley orthodoxy is deliberate. Where Apple and Google construct closed, vertically integrated intelligence stacks — one vendor, one model, tightly controlled — Samsung is building what its COO describes as an “open and inclusive integrated AI ecosystem.” Its own internal research, cited at the Unpacked event, found that nearly eight in ten Galaxy users now rely on more than two types of AI agents. The multi-model strategy is, in this light, a direct reflection of observable consumer behaviour, not merely a technology preference. Whether it coheres as a seamless experience in practice remains the central execution question of 2026.
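Samsung has not published how its orchestration layer is actually implemented, so the following is purely an illustrative sketch of the general pattern the announcement describes: a thin router that maps a wake phrase (or an explicit user choice) to one of several independently provided agents, each holding only the OS permissions it was granted. The class names — BixbyAgent, GeminiAgent, PlexAgent — are invented for the example.

```python
from typing import Protocol

class Agent(Protocol):
    """Minimal interface each system-level agent exposes to the orchestrator."""
    def handle(self, query: str) -> str: ...

# Invented placeholder agents; the real integrations are proprietary.
class BixbyAgent:
    def handle(self, query: str) -> str:
        return f"[Bixby] device action for: {query}"

class GeminiAgent:
    def handle(self, query: str) -> str:
        return f"[Gemini] generative answer for: {query}"

class PlexAgent:
    def handle(self, query: str) -> str:
        return f"[Perplexity] web-grounded answer for: {query}"

class Orchestrator:
    """Routes a wake phrase to the agent registered for it; agents never see
    each other's traffic, mirroring a per-agent permission model."""
    def __init__(self) -> None:
        self._routes: dict[str, Agent] = {}

    def register(self, wake_phrase: str, agent: Agent) -> None:
        self._routes[wake_phrase.lower()] = agent

    def dispatch(self, wake_phrase: str, query: str) -> str:
        agent = self._routes.get(wake_phrase.lower())
        if agent is None:
            raise KeyError(f"No agent registered for '{wake_phrase}'")
        return agent.handle(query)

if __name__ == "__main__":
    galaxy_ai = Orchestrator()
    galaxy_ai.register("hey bixby", BixbyAgent())
    galaxy_ai.register("hey google", GeminiAgent())
    galaxy_ai.register("hey plex", PlexAgent())
    print(galaxy_ai.dispatch("hey plex", "latest HBM4 supply forecasts"))
```

Whether Samsung's real orchestrator behaves anything like this is unknown; the sketch only shows why adding a third agent is architecturally cheap once the routing layer exists — which is precisely what makes the "open ecosystem" pitch credible as engineering, not just marketing.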
The technical foundation underpinning these ambitions is the Exynos 2600, built on Samsung’s 2nm gate-all-around process. Its neural processing unit reportedly runs on-device AI tasks more than twice as fast as its predecessor, enabling the “mixture of experts” model architecture that allows computationally heavy reasoning tasks to run locally without cloud latency. This matters for a specific class of user — in enterprise environments, in regions with unreliable connectivity, in cases where privacy-conscious consumers want their data to remain on-device. Samsung’s framing of its “Personal Data Engine” as a local, privacy-preserving learning layer is a direct response to Apple’s long-standing advantage on privacy messaging.
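The phrase “mixture of experts” is doing real work in that claim: instead of running one monolithic network for every query, a small gating function activates only a few specialist sub-networks, so the arithmetic per token stays within what a phone-class NPU can handle. Below is a minimal, framework-free sketch of the gating idea — toy dimensions, random weights, and no relation to Samsung’s actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: 8 small "expert" weight matrices, of which
# only the top-2 (chosen by a gate) run for any given input.
n_experts, d_in, d_out, top_k = 8, 16, 16, 2
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_in, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ gate_w                    # one gate score per expert
    top = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only k of the n_experts do any work, cutting compute per query roughly by
    # a factor of n_experts / top_k while keeping total model capacity large.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d_in)
print(moe_forward(x).shape)  # (16,)
```

The efficiency argument, not the architecture itself, is the Samsung-specific claim: sparse activation is what lets a 2nm NPU run “computationally heavy reasoning” locally instead of deferring to the cloud.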
Apple’s Position: Market Leader, but AI Plays Catch-Up
Apple enters 2026 from a position of considerable market strength and uncomfortable strategic awkwardness. Counterpoint Research’s full-year 2025 data placed Apple as the world’s number-one smartphone vendor, with a 20% global share and the highest growth rate among the top five brands at 10% year-on-year. IDC similarly flagged a record 247 million units shipped, with Apple’s premium positioning insulating it from the mid-range pressures hammering Chinese Android manufacturers.
But in AI, the company that built its reputation on seamlessly integrated software finds itself, for the first time in a decade, in the awkward position of acknowledging that a partner can build better models than it can. On January 12, Apple and Google jointly announced a multi-year agreement worth a reported $1 billion annually, under which Google’s Gemini models and cloud infrastructure will power the next generation of Apple Foundation Models — the engine behind a long-delayed Siri overhaul. Apple had originally promised the revamped Siri for autumn 2024. Then spring 2025. Then late 2025. The partnership represents a candid, if corporate, admission that the internal timeline was broken.
As of early March, reports from Bloomberg’s Mark Gurman suggest the Gemini-powered Siri features face further internal delays, with the most capable upgrade now expected in iOS 27 — potentially September 2026 at the earliest. Apple has told press the rollout remains on schedule for 2026, but the picture remains, as T3 described it, “slightly confusing.” In the meantime, Samsung has shipped three active AI agents on a flagship device and is expanding the feature set to older Galaxy models through software updates. The gap between Samsung’s deployed capabilities and Apple’s promised ones is, at this moment, measurable in months at minimum.
There is also a notable structural paradox here. Samsung is both Apple’s fiercest smartphone competitor and, through its semiconductor division, one of Apple’s most critical supply-chain dependencies. Apple sources memory components — DRAM and NAND — from Samsung Semiconductor. The same global HBM shortage that is pressuring Samsung’s smartphone margins is simultaneously complicating Apple’s own component costs and forcing the company to delay the base iPhone model to early 2027, a scheduling shift IDC expects to pull iOS shipments down 4.2% next year. Both companies are, in this sense, victims of the same AI infrastructure gold rush — the insatiable demand for high-bandwidth memory from data centres crowding out the supply available for consumer devices.
The Korean Industrial Dimension
Analysts who track Samsung through a purely product-market lens often underestimate the degree to which its AI strategy is also a Korean industrial policy story. The shift toward on-device AI inference workloads — running models locally rather than routing queries to cloud servers — creates a “virtuous hardware loop,” as Samsung’s own briefing materials describe it: more on-device AI demands faster NPUs, which demands better memory, which directly benefits Samsung Semiconductor’s HBM4 ramp.
Samsung’s record profits of KRW 20.1 trillion (approximately $15 billion) in 2025 were powered as much by the chip division as by mobile, and the strategic logic connecting the two divisions is tightening. When Samsung ships an AI-intensive Galaxy S26 with Perplexity, Gemini and a local inference engine, it is simultaneously creating demand for the very memory products its semiconductor division makes. This vertical integration, rarely visible to the average consumer, is one of the more durable competitive advantages the company holds over Apple, which does not make its own memory, and over pure-play software companies entering the agentic AI era without a hardware base.
The Foldable Frontier and Wearables
Samsung’s AI ambitions extend beyond slab-form smartphones. The company controls roughly two-thirds of the global foldable market as of Q3 2025 and has three new foldable devices (the Galaxy Z Fold 8, the Galaxy Z Flip 8, and a reported third form factor) in carrier testing for a probable July or August 2026 launch. T.M. Roh told Reuters that while foldables have grown more slowly than anticipated, a "very high" repurchase rate within the category suggests deep user loyalty. He expects the segment to go mainstream within two to three years.
The integration of multi-agent Galaxy AI into foldables and wearables is where the platform logic becomes most compelling. A Galaxy Ring or Galaxy Watch user who already trusts Bixby for device control and Perplexity for research is a far stickier ecosystem participant than a consumer who merely uses a single AI feature on a flagship phone. IDC forecasts foldable market growth of 11% in 2027 even as the overall market contracts — the category’s resilience driven by exactly the AI-enhanced productivity use cases Samsung is now building.
Three Scenarios for the Smartphone AI Race
1. Samsung wins the volume war; Apple retains the value war
The most probable near-term outcome. Samsung’s 800 million AI device footprint makes it the dominant consumer AI distribution channel globally, while Apple’s delayed but eventually polished Gemini-Siri experience consolidates its premium lead. The smartphone market bifurcates into a Samsung-led mass-market AI layer and a smaller, higher-margin Apple intelligence tier.
2. The multi-model bet backfires
If the three-agent Galaxy S26 experience fails to cohere — if users find routing between “Hey Bixby,” “Hey Google,” and “Hey Plex” confusing rather than liberating — Samsung’s open-ecosystem pitch collapses into a cautionary tale about complexity. Apple’s eventual single, well-integrated Gemini-Siri upgrade becomes the benchmark against which Samsung’s plurality looks cluttered. A minimal sketch of what such routing involves in practice follows this list.
3. The memory crisis reshapes the competitive order
If the HBM shortage persists deep into 2027, smartphone ASPs rise sharply across the board. Chinese OEMs suffer most severely at the low end, Samsung loses volume in emerging markets, and Apple’s premium positioning and supply-chain relationships insulate it from the worst. The AI race becomes secondary to a supply-chain survival story.
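For readers wondering what "routing between agents" actually amounts to, here is a deliberately simple, purely illustrative sketch: a recognised wake phrase picks the handler, and anything unrecognised falls back to a default. The wake phrases are the ones named in scenario 2 above; the handler functions and the fallback behaviour are hypothetical stand-ins, not any vendor's real API.

```python
# Toy wake-phrase dispatcher across three assistants. Handlers are
# hypothetical placeholders; only the wake phrases come from the article.
from typing import Callable


def bixby_handler(utterance: str) -> str:
    return f"[Bixby] device action: {utterance}"


def gemini_handler(utterance: str) -> str:
    return f"[Gemini] general query: {utterance}"


def perplexity_handler(utterance: str) -> str:
    return f"[Perplexity] research query: {utterance}"


# Map each wake phrase to its agent; the first matching phrase (in
# insertion order) wins.
AGENTS: dict[str, Callable[[str], str]] = {
    "hey bixby": bixby_handler,
    "hey google": gemini_handler,
    "hey plex": perplexity_handler,
}


def route(raw: str, default: Callable[[str], str] = gemini_handler) -> str:
    """Strip a recognised wake phrase and dispatch to the matching agent."""
    text = raw.strip().lower()
    for phrase, handler in AGENTS.items():
        if text.startswith(phrase):
            return handler(raw.strip()[len(phrase):].lstrip(" ,"))
    return default(raw.strip())   # no wake phrase: fall back to one default agent


if __name__ == "__main__":
    print(route("Hey Bixby, dim the bedroom lights"))
    print(route("Hey Plex, summarise today's chip-sector news"))
    print(route("What's the weather tomorrow?"))
```

Scenario 2 is, in effect, a bet on whether everyday users tolerate this dispatch step being visible to them at all, or whether they simply want one name to say.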
The Deeper Competitive Question
There is a version of this story in which Samsung’s pursuit of AI partnerships is framed as a structural weakness — an acknowledgement that the company cannot build frontier models as effectively as Google, OpenAI or Anthropic, and must therefore license them. That framing misses the point. In the intelligence era, the scarcest resource is not the model — it is the hardware in hundreds of millions of consumers’ hands, the default integration that determines which AI a person uses without having to think about it.
Samsung has that hardware. What it has done in 2026, through the Gemini deepening, the Perplexity deal, and the Galaxy S26’s open multi-agent architecture, is monetise that hardware position by becoming indispensable to the AI companies that need consumer distribution. Perplexity, which launched only in 2022, has achieved through a single Samsung pre-install deal what would have required years of organic app-store growth. Google has secured default AI presence on Android devices at a scale that embarrasses any alternative model provider. Both companies are paying Samsung — in capability, in visibility, in strategic value — for access to the audience it has already built.
Apple, by contrast, is now in an unusual position: paying Google approximately $1 billion a year for AI capability, even as Google pays Apple billions a year for default search placement on the iPhone, all while Apple’s own intelligence features run behind the delivery schedule its marketing department promised. The irony is not lost on analysts: the company most associated with vertical integration is now the one most exposed to a partner’s model development roadmap.
What the Samsung AI deals ultimately represent is a hypothesis about how the intelligence era will be won. Not through model supremacy alone, but through ecosystem breadth, hardware scale, and the willingness to let the best model for the moment — whatever it is, wherever it comes from — serve the user. Whether consumers validate that hypothesis, or whether they ultimately prefer the coherent simplicity of a single, trusted AI source, will determine the shape of the smartphone market for the remainder of this decade.
For now, Samsung has moved first, moved boldly, and moved at scale. The rest of the industry is watching the Galaxy S26 — three AIs, one device, an open ecosystem — to see if the future it promises is one consumers actually want.
Sources & References
- Reuters — “Samsung to Double AI Mobile Devices to 800 Million Units,” Jan. 5, 2026
- Samsung Newsroom — “Galaxy AI Expands Multi-Agent Ecosystem,” Feb. 20, 2026
- Perplexity AI Changelog — Galaxy S26 Integration, Feb. 27, 2026
- CNBC — “Apple Picks Google’s Gemini to Power AI-Powered Siri,” Jan. 12, 2026
- Google/Apple Joint Statement, Jan. 12, 2026
- IDC Worldwide Quarterly Mobile Phone Tracker — March 2026 Revision
- Counterpoint Research — Global Smartphone Market Share, Full-Year 2025
- Android Headlines — “Galaxy S26’s Perplexity AI Integration is Deeper Than You Think,” Feb. 2026
- TechCrunch — “Google’s Gemini to Power Apple’s AI Features Like Siri,” Jan. 12, 2026
- T3 — “Gemini-Powered Siri Still on Track for 2026,” Feb./Mar. 2026