
The Price of Algorithmic War: How AI Became the New Dynamite in the Middle East


The Iran conflict has turned frontier AI models into contested weapons of state — and the financial and human fallout is only beginning to register.

In the first eleven days of the U.S.-Israeli offensive against Iran, which began on February 28, 2026, American and Israeli forces executed roughly 5,500 strikes on Iranian targets — an operational tempo that would have required months in any previous conflict, made possible in significant part by artificial intelligence, deployed on a battlefield for the first time at this scale. The National The same week those bombs fell, a legal and commercial crisis erupted in Silicon Valley with consequences that will define the AI industry for years. Both events are part of the same story.

We are living through the moment when AI ceased being a future-war thought experiment and became an operational reality — embedded in targeting pipelines, shaping intelligence assessments, and now at the center of a constitutional showdown between a frontier AI company and the United States government. Alfred Nobel, who invented dynamite and then spent the remainder of his life in tortured ambivalence about it, would have recognized the pattern immediately.

The Kill Chain, Accelerated

The joint U.S. and Israeli offensive on Iran revealed how algorithm-based targeting and data-driven intelligence are reshaping the mechanics of warfare. In the first twelve hours alone, U.S. and Israeli forces reportedly carried out nearly 900 strikes on Iranian targets — an operational tempo that would have taken days or even weeks in earlier conflicts. Interesting Engineering

At the technological center of this acceleration sits a system most Americans have never heard of: Project Maven. Anthropic’s Claude has become a crucial component of Palantir’s Maven intelligence analysis program, which was also used in the U.S. operation to capture Venezuelan President Nicolás Maduro. Claude is used to help military analysts sort through intelligence and does not directly provide targeting advice, according to a person with knowledge of Anthropic’s work with the Defense Department. NBC News This is a distinction with genuine moral weight — between decision-support and decision-making — but one that is becoming harder to sustain at the speed at which modern targeting now operates.

Critics warn that this trend could compress decision timelines to levels where human judgment is marginalized, ushering in an era of warfare conducted at what has been described as “faster than the speed of thought.” This shortening interval raises fears that human experts may end up merely approving recommendations generated by algorithms. In an environment dictated by speed and automation, the space for hesitation, dissent, or moral restraint may be shrinking just as quickly. Interesting Engineering

The U.S. military’s posture has been notably sanguine about these concerns. Admiral Brad Cooper, head of U.S. Central Command, confirmed that AI is helping soldiers process troves of data, stressing that humans make final targeting decisions — but critics note the gap between that principle and verifiable practice remains wide. Al Jazeera

The Financial Architecture of AI Warfare

The economic dimensions of this transformation are substantial and largely unreported in their full complexity. Understanding them requires holding three separate financial narratives simultaneously.

The direct contract market is the most visible layer. Over the past year, the U.S. Department of Defense signed agreements worth up to $200 million each with several major AI companies, including Anthropic, OpenAI, and Google. CNBC These are not trivial sums in isolation, but they represent the seed capital of a much larger transformation. The military AI market is projected to reach $28.67 billion by 2030, as the speed of military decision-making begins to surpass human cognitive capacity. Emirates 24|7

The collateral economic disruption is less discussed but potentially far larger. On March 1, Iranian drone strikes took out three Amazon Web Services facilities in the Middle East — two in the UAE and one in Bahrain — in what appear to be the first publicly confirmed military attacks on a hyperscale cloud provider. The strikes devastated cloud availability across the region, affecting banks, online payment platforms, and ride-hailing services, with some effects felt by AWS users worldwide. The Motley Fool The IRGC cited the data centers’ support for U.S. military and intelligence networks as justification. This represents a strategic escalation that no risk-management framework in the technology sector adequately anticipated: cloud infrastructure as a legitimate military target.

The reputational and legal costs of AI’s battlefield role may ultimately dwarf both. Anthropic’s court filings stated that the Pentagon’s supply-chain designation could cut the company’s 2026 revenue by several billion dollars and harm its reputation with enterprise clients. A single partner with a multi-million-dollar contract has already switched from Claude to a competing system, eliminating a potential revenue pipeline worth more than $100 million. Negotiations with financial institutions worth approximately $180 million combined have also been disrupted. ITP

The Anthropic-Pentagon Fracture: A Defining Test

The dispute between Anthropic and the U.S. Department of Defense is not merely a contract negotiation gone wrong. It is the first high-profile case in which a frontier AI company drew a public ethical line — and then watched the government attempt to destroy it for doing so.

The sequence of events is now well-documented. The administration’s decisions capped an acrimonious dispute over whether, as part of a military contract worth up to $200 million, Anthropic could prohibit its tools from being used for mass surveillance of American citizens or to power autonomous weapon systems. Anthropic said it had tried in good faith to reach an agreement, making clear it supported all lawful uses of AI for national security aside from those two narrow exceptions. NPR

When Anthropic held its position, the response was unprecedented in the annals of U.S. technology policy. Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in a statement so broad that it can only be seen as a power play aimed at destroying the company. Shortly thereafter, OpenAI announced it had reached its own deal with the Pentagon, claiming it had secured all the safety terms that Anthropic sought, plus additional guardrails. Council on Foreign Relations

The supply chain risk designation — a label historically reserved for foreign adversaries — would require defense vendors and contractors to certify that they don’t use the company’s models in their work with the Pentagon. CNBC That it was applied to a U.S.-headquartered company, founded by former OpenAI employees and valued at $380 billion, represents a remarkable inversion of the logic the designation was designed to serve.

Meanwhile, Washington was attacking an American frontier AI leader while Chinese labs were on a tear. In the past month alone, five major Chinese models dropped: Alibaba’s Qwen 3.5, Zhipu AI’s GLM-5, MiniMax’s M2.5, ByteDance’s Doubao 2.0, and Moonshot’s Kimi K2.5. Council on Foreign Relations The geopolitical irony is not subtle: in punishing a safety-focused American AI company, the administration may have handed Beijing its most useful competitive gift of the year.

The Human Cost: Social Ramifications No Algorithm Can Compute

Against the financial ledger, the humanitarian accounting is staggering and still incomplete.

The Iranian Red Crescent Society reported that the U.S.-Israeli bombardment campaign damaged nearly 20,000 civilian buildings and 77 healthcare facilities. Strikes also hit oil depots, several street markets, sports venues, schools, and a water desalination plant, according to Iranian officials. Al Jazeera

The case that has attracted the most scrutiny is the bombing of the Shajareh Tayyebeh elementary school in Minab, southern Iran. A strike on the school in the early hours of February 28 killed more than 170 people, most of them children. More than 120 Democratic members of Congress wrote to Defense Secretary Hegseth demanding answers, citing preliminary findings that outdated intelligence may have been to blame for selecting the target. NBC News

The potential connection to AI decision-support systems is explored with forensic precision by experts at the Bulletin of the Atomic Scientists. One analysis notes that the mistargeting could have stemmed from an AI system with access to old intelligence — satellite data that predated the conversion of an IRGC compound into an active school — and that such temporal reasoning failures are a known weakness of large language models. Even with humans nominally “in the loop,” people frequently defer to algorithmic outputs without careful independent examination. Bulletin of the Atomic Scientists
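One mitigation implied by that analysis is a staleness gate in the decision-support pipeline: any recommendation resting on intelligence older than a fixed window gets flagged for mandatory human re-verification. The sketch below is purely illustrative — the function names and the 30-day window are assumptions, not drawn from any real targeting system:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical staleness gate: names and the 30-day window are illustrative.
MAX_INTEL_AGE = timedelta(days=30)

def requires_reverification(source_timestamps, now):
    """Flag a recommendation if any supporting source exceeds the age window."""
    return any(now - ts > MAX_INTEL_AGE for ts in source_timestamps)

now = datetime(2026, 2, 28, tzinfo=timezone.utc)
sources = [
    datetime(2025, 6, 1, tzinfo=timezone.utc),   # old satellite pass
    datetime(2026, 2, 20, tzinfo=timezone.utc),  # recent intercept
]
print(requires_reverification(sources, now))  # → True: the old pass trips the gate
```

A rule this simple would not fix temporal reasoning inside the model itself, but it would force the human in the loop to confront exactly the kind of outdated satellite data the Bulletin analysis identifies.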

The social fallout extends well beyond individual atrocities. Israel’s Lavender AI-powered database, used to analyze surveillance data and identify potential targets in Gaza, was wrong at least 10 percent of the time, resulting in thousands of civilian casualties. A recent study found that AI models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 percent of cases. Rest of World The simulation result does not predict real-world behavior, but it reveals how strategic reasoning models can default toward extreme outcomes under pressure — a finding that ought to unsettle anyone who imagines that algorithmic warfare is inherently more precise than the human kind.

The corrosion of accountability is perhaps the most insidious long-term social effect. “There is no evidence that AI lowers civilian deaths or wrongful targeting decisions — and it may be that the opposite is true,” says Craig Jones, a political geographer at Newcastle University who researches military targeting. Nature Yet the speed and opacity of AI-assisted operations make it far harder to assign responsibility when things go wrong. Algorithms do not face courts-martial.

Governance: The International Gap

Rapid technological development is outpacing slow international discussions. Academics and legal experts meeting in Geneva in March 2026 to discuss lethal autonomous weapons systems found themselves studying a technology already being used at scale in active conflicts. Nature The gap between the pace of deployment and the pace of governance has never been wider.

The Middle East and North Africa are arguably the most conflict-ridden and militarized regions in the world, with four out of eleven “extreme conflicts” identified in 2024 by the Armed Conflict Location and Event Data organization occurring there. The region has become a testing ground for AI warfare whose lessons — and whose errors — will shape every future conflict. War on the Rocks

The legal framework governing AI in warfare remains, generously described, aspirational. The U.S. military’s stated commitment to keeping “humans in the loop” is a principle that has no internationally binding enforcement mechanism, no agreed definition of what meaningful human control actually entails, and no independent auditing process. One expert observed that the biggest danger with AI is when humans treat it as an all-purpose solution rather than something that can speed up specific processes — and that this habit of over-reliance is particularly lethal in a military context. The National

AI as the New Dynamite: Nobel’s Unresolved Legacy

When Alfred Nobel invented dynamite in 1867, he believed — genuinely — that a weapon so devastatingly efficient would make war unthinkably costly and therefore rare. He was catastrophically wrong. The Franco-Prussian War, the First World War, and the industrialized slaughter that followed proved that more powerful weapons do not deter wars; they escalate them, and they increase civilian mortality relative to combatant casualties.

The parallel to AI is not decorative. The argument for AI in warfare — that algorithmic precision reduces collateral damage, that faster targeting shortens conflicts, that autonomous systems absorb military risk that would otherwise fall on human soldiers — is structurally identical to Nobel’s argument for dynamite. It is the rationalization of a dual-use technology by those with an interest in its proliferation.

Drone technology in the Middle East has already shifted from manual control toward full autonomy, with “kamikaze” drones utilizing computer vision to strike targets independently if communications are severed. As AI becomes more integrated into militaries, the advancements will become even more pronounced with “unpredictable, risky, and lethal consequences,” according to Steve Feldstein, a senior fellow at the Carnegie Endowment for International Peace. Rest of World

The Anthropic dispute, whatever its ultimate legal resolution, has surfaced a question that Silicon Valley has been able to defer until now: can a technology company that builds frontier AI models — systems capable of synthesizing intelligence, generating targeting assessments, and running strategic simulations — genuinely control how those systems are used once deployed by a state? As OpenAI’s own FAQ acknowledged when asked what would happen if the government violated its contract terms: “As with any contract, we could terminate it.” The entire edifice of AI safety in warfare, for now, rests on the contractual leverage of companies that have already agreed to participate. Council on Foreign Relations

Nobel at least had the decency to endow prizes. The AI industry is still working out what it owes.

Policy Recommendations

A minimally adequate governance framework for AI in warfare would need to accomplish several things. Independent verification of “human in the loop” claims — not merely the assertion of it — is the essential starting point. Mandatory after-action reporting on AI involvement in any strike that results in civilian casualties would create accountability where none currently exists. International agreement on a baseline error-rate threshold — above which AI targeting systems may not be used without additional human review — would translate abstract humanitarian law into operational reality.
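The error-rate threshold proposed above can be stated as a simple gate. The ceiling value in this sketch is an assumption for illustration — no treaty or standard defines such a figure — and the names are hypothetical:

```python
# Hypothetical policy gate; the 5% ceiling is an assumed placeholder,
# since no internationally agreed baseline error-rate threshold exists.
ERROR_RATE_CEILING = 0.05

def review_requirement(audited_error_rate):
    """Map an independently audited error rate to a review obligation."""
    if audited_error_rate > ERROR_RATE_CEILING:
        return "additional human review required"
    return "standard human-in-the-loop procedure"

print(review_requirement(0.10))  # e.g. an error rate of at least 10%
```

The point is not the code but the precondition it encodes: the rule is only enforceable if the error rate is independently audited in the first place, which is exactly the verification mechanism the current framework lacks.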

The technology companies themselves bear responsibility that no contract clause can fully discharge. Researchers from OpenAI, Google DeepMind, and other labs submitted a court filing supporting Anthropic’s position, arguing that restrictions on domestic surveillance and autonomous weapons are reasonable until stronger legal safeguards are established. ColombiaOne That the most capable AI builders in the world believe their own technology is not yet reliable enough for autonomous lethal use is information that should be at the center of every policy debate — not buried in court filings.



Google’s AI Supremacy Bet: Outpacing Rivals Amid Big Tech’s $725 Billion Spending Surge and the Pentagon Contract Backlash


The search giant is pulling ahead in the hyperscaler arms race—but at what cost to its soul, its workforce, and its original promise?

There is a scene playing out across Silicon Valley that would have seemed like science fiction a decade ago: the world’s most profitable technology companies are engaged in a collective capital expenditure supercycle of almost incomprehensible scale, committing a combined sum approaching $725 billion to AI infrastructure in 2026 alone. Data centers are rising from deserts. Undersea cables are being rerouted. Nuclear reactors are being negotiated. And at the center of this frenzy—not just participating, but quietly pulling ahead—is Google.

Alphabet’s recent quarterly results told a story that Wall Street had not quite expected with such clarity. Google Cloud grew 63% year-on-year to reach $20 billion in a single quarter, with its backlog expanding at a pace that suggests enterprise AI monetization is no longer a projection slide—it is a revenue line. Against a backdrop in which Meta’s stock briefly wobbled on disclosure of accelerated capex plans, and Microsoft faced pointed questions about the pace of Azure AI conversion, Google emerged as the rare hyperscaler that investors seemed to trust with its own checkbook. That is a meaningful distinction in a market increasingly skeptical of AI’s near-term return on investment.

Yet the Google story in 2026 is not merely a financial one. It is, simultaneously, an ethical drama, a geopolitical chess move, and a management test of the highest order. The company’s decision to extend its Gemini AI models to Pentagon classified workloads—permitting their use for “any lawful government purpose”—has triggered the kind of internal revolt that Sundar Pichai has navigated before, but perhaps never quite like this. More than 600 employees signed an open letter to the CEO expressing what they described as shame, ethical alarm, and deep concern over the potential for their work to be directed toward surveillance systems, autonomous weapons targeting, or other military applications they never signed up to build.

Welcome to Google in the age of AI supremacy.

The $725 Billion Capex Supercycle: What the Numbers Actually Mean

To understand Google’s position, one must first absorb the full weight of what the hyperscaler investment surge represents. The aggregate capital expenditure guidance across Alphabet, Meta, Amazon Web Services, and Microsoft for 2026 now approaches—and by some analyst compilations, exceeds—$725 billion. Alphabet alone has guided toward $180–190 billion in infrastructure investment for the year. Amazon has signaled approximately $200 billion. Meta, despite the investor nervousness its updated capex guidance provoked, is tracking toward $125–145 billion. Microsoft, which has somewhat pulled back from the most aggressive single-year targets of prior guidance cycles, remains elevated by any historical standard.
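A back-of-envelope sum of the guidance ranges quoted above shows how the ~$725 billion aggregate is assembled. Microsoft's range below is an assumed placeholder, since the article gives no specific figure for it, only "elevated by any historical standard":

```python
# Sum of the 2026 capex guidance ranges quoted above, in $B.
capex_guidance = {
    "Alphabet":  (180, 190),
    "Amazon":    (200, 200),
    "Meta":      (125, 145),
    "Microsoft": (150, 200),  # assumption: no specific figure is given
}
low = sum(lo for lo, _ in capex_guidance.values())
high = sum(hi for _, hi in capex_guidance.values())
print(f"${low}B to ${high}B")  # → $655B to $735B
```

The ~$725 billion figure sits near the top of that band, which is consistent with the observation that some analyst compilations place the total even higher.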

These are not numbers that fit comfortably inside traditional return-on-investment frameworks. To put them in perspective: the combined GDP of Pakistan, Egypt, and Chile is roughly equivalent to what the four largest American technology companies plan to spend building AI infrastructure in a single calendar year. The International Monetary Fund would classify this as a capital formation event of macroeconomic consequence—not a corporate earnings footnote.

The money is flowing into several interconnected categories: GPU procurement (Nvidia’s order books are reportedly filled years into the future), data center construction across North America, Europe, and Southeast Asia, power infrastructure and grid connections, and increasingly, investments in alternative energy sources. Google itself has signed agreements with nuclear energy developers to power data centers with small modular reactors—a technology that, three years ago, would have been considered speculative engineering rather than near-term procurement strategy.

What distinguishes Google’s investment posture from its peers is not simply the quantum of spending, but the evidence that it is beginning to pay off in observable, auditable revenue. The 63% year-on-year growth in Google Cloud—achieved not in a base period of suppressed demand but against already elevated post-pandemic comparisons—suggests that enterprise customers are not merely piloting Gemini-powered tools. They are deploying them at scale and paying for the privilege. The expanding backlog is perhaps the more significant metric: it implies committed future revenue, reducing the speculative character of Alphabet’s infrastructure build and lending credibility to the argument that the company has struck a monetization rhythm its rivals have not yet matched.

Google Cloud vs. the Field: Where the AI Revenue Race Stands

Cloud Growth Rates Tell a Revealing Story

For investors parsing the competitive landscape of AI infrastructure monetization, the cloud revenue trajectories are the most consequential data series to watch. Google Cloud’s 63% YoY growth comfortably outpaces the growth rates posted by Azure and AWS in the same period, though it is worth noting that Google Cloud is working from a smaller absolute base — a base effect that tends to inflate percentage growth in ways that can flatter the comparison.
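The base effect is easy to make concrete with a toy computation. The figures below are invented round numbers, not actual Alphabet or rival disclosures:

```python
# Invented figures ($B) showing how the same absolute dollar gain reads
# very differently in percentage terms depending on the size of the base.
def yoy_growth_pct(prev_quarter, curr_quarter):
    return (curr_quarter - prev_quarter) / prev_quarter * 100

smaller_base = yoy_growth_pct(12.3, 20.0)  # +$7.7B on a $12.3B base
larger_base  = yoy_growth_pct(60.0, 67.7)  # the same +$7.7B on a $60B base
print(f"{smaller_base:.0f}% vs {larger_base:.0f}%")  # → 63% vs 13%
```

Identical absolute growth, a fivefold difference in the headline rate — which is why the qualitative character of the growth, discussed next, matters more than the percentage itself.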

What is harder to dismiss is the qualitative character of that growth. Alphabet’s management has been unusually specific about the sources of Cloud acceleration: AI-native workloads, Gemini API consumption, and—critically—enterprise deals that bundle infrastructure with model access and deployment support. This is not commodity cloud compute growing on price. It is differentiated AI services growing on capability, which carries both higher margins and more durable competitive moats.

Meta’s situation offers an instructive contrast. When CFO Susan Li disclosed the upward revision in Meta’s capex guidance earlier this year, the market’s reaction was immediate and sharp: shares fell several percent intraday on concerns that the spending was outpacing visible monetization pathways. The investor community’s message was clear—AI infrastructure investment is not inherently valued; AI infrastructure investment with a credible revenue story is. Google, for now, has that story. Meta is still largely telling one.

Microsoft presents a more nuanced picture. The Azure AI growth story remains compelling on its own terms, powered by the OpenAI partnership and a deeply embedded enterprise customer base that is actively integrating Copilot across productivity software. But Microsoft has also faced questions about whether its OpenAI exposure—an investment structure that comes with revenue-sharing obligations and significant compute cost transfers—creates a ceiling on margin expansion that purely proprietary model developers like Google do not face. The answer is not yet definitive, but it is a structural question that Alphabet’s architecture avoids.

The Pentagon Deal: Strategic Maturity or Moral Compromise?

Google’s Gemini and the New Defense-AI Nexus

The decision to authorize Gemini models for Pentagon classified workloads did not emerge in a vacuum. It followed a pattern now visible across the industry: OpenAI secured its own classified government contracts; Elon Musk’s xAI has been in conversations with U.S. defense and intelligence agencies; and even Anthropic—often positioned as the safety-first alternative in the AI landscape—has navigated the tension between its constitutional AI principles and government partnership demands with less public grace than its branding might suggest.

For Google, the context is particularly charged. The company famously did not renew its Project Maven contract with the Pentagon in 2018 after employee protests forced a retreat that became a case study in how internal dissent could redirect corporate strategy. That withdrawal was framed at the time as a principled stand. Eight years later, the company has effectively reversed course—not in secret, but through a contract clause that explicitly permits Gemini’s use for “any lawful government purpose,” a formulation broad enough to encompass intelligence analysis, targeting support systems, and surveillance infrastructure.

The 600-plus employees who signed the open letter to Pichai were not naive. They understood, as Google’s leadership understands, that “lawful” is a word that carries different weights in peacetime and in active conflict. Their letter expressed shame—a particularly pointed word, implying that the company’s actions reflect on those who build its products in ways they did not consent to. They raised specific concerns about autonomous weapons systems, the potential for AI-assisted targeting to remove human judgment from lethal decisions, and the use of surveillance tools against civilian populations.

These are not hypothetical concerns. The use of AI systems in conflict zones—from drone targeting assistance to signals intelligence processing—is already a documented reality across several active theaters. The employees signing that letter had read the same reports as everyone else.

The Geopolitical Imperative Google Cannot Ignore

And yet. The case for Google’s decision, when made honestly and without sanitizing language, is both harder and more important to engage with than its critics typically allow.

The United States is engaged in a technological competition with China that has no clean civilian-military boundary. The People’s Liberation Army and China’s leading AI laboratories—many of which receive state funding and operate under laws requiring cooperation with national intelligence agencies—are not separating their research programs into “acceptable” and “unacceptable” domains. Huawei, Baidu, Alibaba, and a constellation of less visible firms are building AI capabilities that will be available to Chinese defense planners whether American technology companies participate in U.S. defense programs or not.

The choice, in other words, is not between a world where AI is and is not integrated into military systems. It is a choice about which country’s AI systems—and which country’s values, however imperfectly encoded—predominate in those applications. That is a different argument, and one that many of Google’s protesting employees would engage with more seriously than the binary “we should not do this” framing that open letters tend to collapse into.

Sundar Pichai has been careful not to make this argument too loudly, because doing so would effectively confirm every worst-case interpretation of what the Pentagon contract enables. But it is the unstated logic beneath the decision, and it tracks with a broader shift in how Silicon Valley’s leadership class has recalibrated its relationship with Washington under the pressure of geopolitical competition.

The “Don’t Be Evil” Reckoning: Silicon Valley’s Original Sin Returns

Talent, Culture, and the Ethics of Scale

Google’s internal ethics have always been a managed tension rather than a resolved principle. The “don’t be evil” motto—quietly retired from the corporate code of conduct years ago—was always more aspiration than constraint. The company that refused Pentagon contracts in 2018 was also the company whose advertising systems created surveillance capitalism as a viable business model. The company whose employees are now expressing shame over military AI is also the company that built tools used for targeted political advertising, data brokerage ecosystems, and content moderation systems whose biases remain poorly understood.

This is not to dismiss the sincerity of the protesting employees—many of whom are taking genuine professional risk by signing public letters critical of their employer. It is to suggest that the ethical terrain of building AI at Google’s scale has never been clean, and that the Pentagon contract represents a threshold crossing that is visible and legible in ways that other ethically complex decisions are not.

The talent implications are real and should not be underestimated. Google competes for a narrow pool of exceptional AI researchers and engineers who have, in many cases, genuine ideological commitments about how their work should be used. If the company’s defense posture drives significant attrition among its most senior technical staff—particularly those in safety, alignment, and model evaluation roles—the reputational and capability costs could compound in ways that quarterly cloud revenue figures would not immediately reveal.

There is also a recruitment dimension. The most coveted AI talent at the PhD and postdoctoral level increasingly includes researchers with explicit views about AI safety and dual-use concerns. Several leading AI safety researchers have, over the past two years, declined offers from companies they perceived as insufficiently rigorous about military and surveillance applications. Whether Google’s defense pivot costs it meaningful talent acquisition capability is a question that will only be legible in retrospect—but it is not a trivial one.


The Macroeconomics of the AI Infrastructure Boom: ROI, Risk, and Reckoning

Is This a Supercycle or a Superbubble?

The $725 billion capex figure demands an honest engagement with the question that haunts every capital investment supercycle: what is the realistic return, and over what timeline?

The optimistic case—articulated by Alphabet’s management, embraced by a significant portion of the investment community, and supported by Google Cloud’s current trajectory—holds that AI is a foundational infrastructure shift comparable to the build-out of the internet itself. On this view, the companies that secure early dominance in AI compute, model capability, and enterprise deployment will enjoy compounding advantages that justify present investment at almost any near-term cost.

The skeptical case notes that the internet build-out of the late 1990s also featured extraordinary capital commitment, confident narratives about foundational transformation, and a subsequent reckoning that erased trillions in market value before the genuinely transformative value was realized. The parallel is not exact—there is considerably more real revenue being generated by AI services today than existed in the dot-com era—but it is not comforting.

The energy demand implications of this infrastructure build are particularly worth lingering on. AI data centers are extraordinarily power-intensive. The aggregate electricity demand implied by the planned hyperscaler build-out in 2026 is estimated to rival the annual electricity consumption of several medium-sized European countries. This is creating bottlenecks that cannot be resolved through procurement alone: grid infrastructure investment, permitting timelines, and the physics of power generation impose hard constraints that no amount of capital can immediately overcome. Google’s nuclear energy agreements are partly a reflection of this reality—the company is trying to secure power supply years ahead of need because the alternative is having stranded compute assets.

The data center construction boom is also reshaping regional economies in ways that create both opportunity and friction. Communities in Virginia, Texas, Iowa, and increasingly in European jurisdictions are navigating the dual reality of significant tax base expansion and serious pressure on water resources, local grid stability, and community infrastructure from facilities that employ relatively few people per square foot of construction.

Google’s Structural Advantages: Why It May Be the Best-Positioned Hyperscaler

Proprietary Models, Vertical Integration, and the Search Moat

Of the four major hyperscalers competing in the AI infrastructure race, Google enters 2026 with a structural profile that is, on balance, the most defensible. This is not a conclusion that was obvious two years ago, when the GPT-4 moment appeared to catch Google flat-footed and when early Bard launches drew unfavorable comparisons that damaged the company’s AI credibility.

The situation has materially changed. Gemini 2.0 and its successors represent genuinely competitive frontier models. Google’s TPU infrastructure—custom silicon designed specifically for AI workload optimization—provides a cost-efficiency advantage at scale that Nvidia-dependent rivals cannot easily replicate. The integration of Gemini across Google’s existing product surface area (Search, Workspace, YouTube, Android) provides a distribution moat for AI capabilities that no other company can match in sheer reach.

The Search integration is particularly underappreciated. Google processes more than 8.5 billion queries per day. The ability to deploy AI-enhanced search responses, AI-assisted advertising targeting, and AI-powered content generation tools across that volume at near-zero marginal cost—because the infrastructure is already built and amortized—creates an economic leverage point that pure-play cloud competitors cannot access.

Microsoft’s Copilot integration into Office is the closest analog, but Microsoft’s enterprise installed base, while large, is not consumer-scale in the same way. The potential for Google to monetize AI capabilities across its consumer surface while simultaneously building cloud enterprise revenue creates a dual-engine revenue structure that is uniquely robust.

Looking Forward: The Questions That Will Define the Next Decade

The Google of 2026 is a company that has made its bets and is beginning to collect on some of them. The cloud revenue trajectory, the model capability improvements, the defense sector expansion, and the infrastructure investment all reflect a leadership team that has absorbed the lessons of the post-ChatGPT moment and responded with strategic discipline rather than reactive flailing.

But the questions that will define whether Google’s AI supremacy is durable or temporary are not primarily technical. They are political, ethical, and economic.

Can Google retain the talent it needs? The employee letter is a warning signal, not merely a PR nuisance. If the company’s defense pivot accelerates a drift of safety-conscious AI researchers toward academic institutions, non-profits, or rival companies with different postures, the long-term model quality implications are non-trivial.

Will AI capex ROI materialize at the pace implied by current valuations? The Google Cloud growth story is real, but the multiple at which Alphabet trades assumes that the current growth rate is sustainable and that AI spending will convert into margin expansion rather than permanent cost elevation. That is a forecast, not a fact.

How will the geopolitical landscape shape the competitive environment? If U.S.-China technology decoupling accelerates, Google’s exclusion from the Chinese market—already a reality—limits its addressable market in ways that Chinese AI companies, operating in a protected domestic environment, do not face in reverse. The Pentagon partnership may open U.S. government revenue doors, but it also accelerates the fragmentation of the global technology landscape in ways that could, over time, constrain Google’s international growth.

What is the social contract for AI infrastructure? The energy, water, and land demands of the AI infrastructure build are becoming subjects of serious regulatory and community scrutiny. The companies that navigate those relationships with genuine stakeholder engagement will build social licenses that prove valuable; those that treat them as obstacles to be managed will accumulate political liabilities that eventually impose costs.

Google’s AI supremacy bet is, ultimately, a wager on the company’s capacity to be simultaneously the most capable, the most commercially successful, the most trusted, and the most strategically sophisticated actor in a field that is reshaping every dimension of economic and political life. That is an ambitious combination. The cloud revenue numbers suggest it is not an impossible one.

Whether the employees signing protest letters, the communities negotiating data center impacts, and the governments writing AI governance frameworks will allow Google the space to prove it—that is the open question that no earnings transcript can answer.


Discover more from The Economy

Subscribe to get the latest posts sent to your email.


Analysis

Emerging Market Stocks Hit Record High as Asian Chipmakers Surge: The AI-Driven Reordering of Global Capital


There is a number that has quietly upended a decade of received wisdom about where global capital belongs. On April 28, 2026, South Korea’s combined equity market capitalization reached roughly $3.99 trillion, surpassing the United Kingdom to rank eighth in the world, behind the US, China, Japan, Hong Kong, India, Canada, and Taiwan. Taiwan had beaten Korea to the milestone: the total market value of Taiwan-listed stocks had already reached $4.14 trillion, edging past the UK’s $4.09 trillion. Two Asian chip-powered economies, once casually bracketed under the patronizing rubric of “emerging,” now dwarf France, Germany, and the financial colossus of the City of London by equity market size. The Korea Herald, Taiwan News

This is not an anecdote. It is an epoch.

The surge in emerging market stocks to fresh record highs in 2026 is being powered, in ways that most Western investors have been agonizingly slow to appreciate, by a fundamental structural shift: the semiconductor supply chain — the physical backbone of the artificial intelligence revolution — is concentrated overwhelmingly in East Asia. TSMC, Samsung Electronics, and SK Hynix are not beneficiaries of a cyclical trade; they are the indispensable infrastructure of the twenty-first-century economy. The MSCI Emerging Markets Index hitting record highs this year is not a fluke. It is the market’s belated acknowledgment of a reality that analysts in Seoul and Taipei have understood for years.


The Numbers Behind the Surge

The MSCI Emerging Markets Index has surged 16% since the beginning of 2026, outpacing the S&P 500, which has climbed only about 5% over the same period. The index’s robust performance has been consistent for five consecutive quarters, and analysts have revised profit forecasts for emerging market companies upward by approximately 30% this year — contrasting sharply with the S&P 500, where earnings have been adjusted upward by only around 10%. GuruFocus

The engine of that outperformance is not hard to locate. South Korea’s iShares MSCI South Korea ETF has risen 43.28% year-to-date, following a 96% surge in 2025. Over the past two months, the broader MSCI Emerging Markets ETF has staged its strongest relative surge against the S&P 500 since 2008. Euronews

The TSMC earnings report of April 16 crystallized what was already legible in the data. TSMC posted a 58% profit jump, its fourth consecutive quarter of record earnings, driven by strong AI chip demand, with net income of NT$572.48 billion. First-quarter revenue increased 35.1% year-over-year, while gross margin expanded to 66.2% and net profit margin reached a remarkable 50.5%. These are not the numbers of a company riding a hype cycle. They are the metrics of a structurally dominant monopolist at the apex of its pricing power — a position TSMC has earned through two decades of relentless capital discipline and engineering excellence. CNBC, TSMC

Meanwhile, in the memory markets that underpin AI training and inference workloads, memory prices surged in 2025 and are expected to rise a further 40% through the second quarter of 2026, as demand shows no sign of abating. High-bandwidth memory — essential for training and running large AI models — faces particularly constrained supply, with SK Hynix and Samsung in the strongest position to benefit. CNBC


Why Asian Chipmakers Are the New Vanguard

Ask any hyperscaler where they source the silicon that makes their AI ambitions possible, and the answer invariably routes through Taiwan’s Hsinchu Science Park or South Korea’s Icheon. TSMC holds roughly 70% of the global foundry market and an even higher share of the most advanced nodes essential for Nvidia GPUs and custom AI chips from Google, Microsoft, and Amazon. In memory, SK Hynix leads with an estimated 50–62% share of the HBM market, thanks to early qualification wins with Nvidia and strong technical execution. International Business Times

This is not supplier dependency in the conventional sense. It is strategic chokepoint control. The AI boom — from hyperscaler data centers to edge inference in smartphones and automobiles — requires two ingredients above all others: leading-edge logic and high-bandwidth memory. Both are controlled by a handful of Asian firms with technological leads measured not in months but in years.

Asia’s top chipmakers plan to invest over $136 billion in capital expenditure in 2026, a 25% increase from 2025. TSMC alone plans a record $52–56 billion capex this year, a 27–37% increase, with 70–80% focused on advanced processes and advanced packaging. This level of investment, sustained across multiple players simultaneously, speaks to something more durable than a demand spike — it reflects the industry’s collective conviction that the AI infrastructure build-out has years, not quarters, left to run. DATAQUEST

The EM tech sector now accounts for 29% of the MSCI EM Index, with Asia home to globally competitive leaders across the AI value chain: foundry through TSMC, memory through SK Hynix and Samsung Electronics, IC design through MediaTek, and the broader hardware ecosystem including packaging, testing, and ODM. This is a complete industrial ecosystem, not a single-point dependency — a distinction that matters enormously when thinking about the durability of the current rally. GAM


From “Emerging” to “Essential”: The Re-Rating of EM Risk

The label “emerging markets” carries ideological baggage. It conjures images of currency crises, governance deficits, thin liquidity, and political instability — markets where a Yale endowment might allocate 5% of its portfolio for optionality and diversification, not conviction. That mental model, always an oversimplification, is now actively misleading.

Taiwan and South Korea have shot past Germany and France in equity market capitalization over the past seven months. As Fidelity International portfolio manager Ian Samson has noted, the rapid rise of Korea and Taiwan reflects the long-term megatrend of semiconductors as “the new oil” — the key input to economic activity — combined with the latest price-insensitive boom in AI investment. Taipei Times

What makes this re-rating structurally significant — rather than a repeat of the commodity supercycle mirages of the 2000s — is the nature of the earnings driving it. These are not resource rents dependent on Chinese construction demand or the whims of OPEC. They are technology rents derived from proprietary process nodes, decades of accumulated engineering capital, and customer relationships so embedded that switching costs are measured in years of qualification cycles. In Taiwan, technology-related goods now account for roughly 80% of exports, with revenue at TSMC continuing to track the island’s export momentum. Euronews

Capital markets are adjusting accordingly. The iShares MSCI Emerging Markets ETF attracted more than $4 billion in January 2026, its strongest month for inflows since 2015, with South Korea alone drawing $1.6 billion in January and over $1 billion in February. Institutional investors are not merely chasing momentum. They are correcting a structural underweight that persisted through years of “U.S. exceptionalism” narrative — a narrative that, with the S&P 500 trailing EM by more than 10 percentage points in 2026, looks increasingly threadbare. Euronews

There is a harder point to make here, and it deserves plain statement: the concentration of the world’s most critical semiconductor manufacturing outside the political borders of the United States — and outside the reach of U.S. export controls — represents not a vulnerability for investors, but an opportunity. Capital that was over-concentrated in a small cohort of American mega-cap technology names has begun the long process of diversification. The Magnificent Seven era of returns-without-risk was always a mirage. The current rebalancing toward Asian chipmakers is its corrective.


Why This Rally Matters for Global Investors

Emerging market stocks are hitting record highs in 2026 primarily because TSMC, Samsung Electronics, and SK Hynix — which dominate the global AI semiconductor supply chain — are generating exceptional earnings growth. South Korea’s market is up over 43% year-to-date and has surpassed the UK in total market cap. Taiwan’s TAIEX has set consecutive record highs. The MSCI EM Index has outperformed the S&P 500 by more than 10 percentage points. Analysts have raised EM earnings forecasts by approximately 30% versus roughly 10% for U.S. equities. This is a structural, not cyclical, shift driven by irreplaceable AI hardware infrastructure concentrated in East Asia.


Risks and Realities: Geopolitics, Concentration, and the Dollar

Any honest account of this rally must grapple with its vulnerabilities, and they are real.

The most acute is geopolitical. Taiwan sits in one of the world’s most tensely contested straits, and the island’s equity market now trades at prices that embed optimistic assumptions about the continued stability of cross-strait relations. A serious escalation — even a rhetorical one — would reverberate instantly through global semiconductor supply chains and asset prices. There is no hedge that fully neutralizes this tail risk, and investors who pretend otherwise are engaged in motivated reasoning.

South Korea carries its own geopolitical freight, with a northern border that requires no elaboration. The KOSPI’s 44% year-to-date gain reflects immense confidence in structural AI demand — but that confidence coexists with security risks that Western pension fund trustees may be quietly re-examining.

Some investors have sounded a note of caution about the outsized influence of tech stocks within local indexes: Samsung and SK Hynix account for a combined 42% of South Korea’s KOSPI, while TSMC makes up a similar proportion of Taiwan’s TAIEX. Index-level concentration of this magnitude creates the conditions for spectacular reversals. A single earnings miss, a customer dispute, or a technology stumble at any of these three companies would be amplified dramatically through passive index exposure. Taipei Times
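
The arithmetic of that amplification is worth making explicit. A minimal sketch, assuming the article’s approximate weights; the 15% drawdown is purely hypothetical, not a forecast:

```python
def index_impact(weight: float, stock_return: float) -> float:
    """First-order contribution of one holding's move to the index
    return: weight times return, ignoring second-order flow effects."""
    return weight * stock_return

# Samsung + SK Hynix at a combined ~42% of the KOSPI: a hypothetical
# 15% drawdown in that block alone drags the index down about 6.3%,
# before any forced selling through passive index funds kicks in.
drag = index_impact(0.42, -0.15)
print(f"index-level drag: {drag:.1%}")  # -6.3%
```

The point of the sketch is that concentration converts idiosyncratic risk into index risk: the larger the combined weight, the less "diversified" the passive exposure actually is.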

The U.S. dollar dynamic cuts both ways. Dollar weakness in 2025–2026 has been a significant tailwind for EM assets — a weaker dollar makes emerging market assets cheaper for foreign buyers, directly boosting inflows and supporting local currency valuations, while simultaneously boosting dollar-denominated earnings for Korean and Taiwanese exporters. Should the Federal Reserve pivot more hawkishly than markets currently anticipate — or should the dollar stage a recovery driven by safe-haven demand amid global uncertainty — this tailwind could become a headwind with little warning. Ainvest
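
The two-sided currency effect can be made concrete with the standard return decomposition. A sketch with illustrative numbers, not actual 2025–2026 figures:

```python
def usd_return(local_return: float, fx_return: float) -> float:
    """Return to a USD-based investor: the local-currency market return
    compounded with the currency move (fx_return > 0 means the local
    currency appreciated against the dollar)."""
    return (1 + local_return) * (1 + fx_return) - 1

# The same 20% local-market gain nets very different dollar outcomes:
tailwind = usd_return(0.20, 0.05)   # won appreciates 5%  -> ~26% in USD
headwind = usd_return(0.20, -0.05)  # won depreciates 5%  -> ~14% in USD
print(f"{tailwind:.1%} vs {headwind:.1%}")
```

A 10-percentage-point swing in the currency thus translates into roughly a 12-point swing in dollar returns on the same underlying equity performance, which is why a dollar reversal can turn a tailwind into a headwind "with little warning."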

U.S. semiconductor export controls remain a persistent wildcard. Washington’s attempts to limit China’s access to advanced chips have, paradoxically, thus far accelerated rather than impeded the earnings growth of TSMC and SK Hynix, as Chinese demand redirects toward compliant suppliers and as the U.S. market for advanced AI accelerators balloons. But the next round of controls — targeting HBM specifically, or tightening restrictions on packaging services — could disrupt supply chain economics in unpredictable ways.

Finally, there is the broadening question. Early-2026 performance suggests that AI investment momentum is moving further down the technology stack, toward software-driven application AI and the rapidly emerging domain of physical-world AI. As AI applications broaden beyond the hyperscaler buildout phase into consumer and industrial deployment, the composition of winners will evolve. Foundry and memory players will remain essential, but their relative dominance within the AI value chain may moderate as software and application layers capture a growing share of the economic pie. GAM


Investment Implications for Global Portfolios

For sophisticated investors, several conclusions follow from this structural analysis.

The diversification case for EM tech is no longer theoretical. A portfolio overweight in the Magnificent Seven — Nvidia, Microsoft, Apple, Alphabet, Amazon, Meta, Tesla — carries an implicit bet on continued U.S. tech dominance at valuations that leave little margin for error. If investors shifted just 5% of U.S. allocations to emerging markets, the resulting capital could disproportionately re-rate smaller, less liquid markets and accelerate the entire trend. Many institutional investors are already making precisely this calculation. Ainvest

The selective approach matters. Within the broad EM tech complex, the risk-reward is not uniform. Leading-edge players — TSMC, SK Hynix, MediaTek — have durable competitive moats, demonstrated pricing power, and earnings trajectories anchored in multi-year hyperscaler capex commitments. Second-tier memory names, by contrast, have seen valuation multiples expand well beyond what earnings fundamentals justify, driven by retail trading momentum that historically precedes painful reversals.

Currency-hedged exposure deserves careful consideration. For investors in USD-denominated portfolios, the current dollar weakness is accretive to EM returns but introduces the symmetrical risk of reversal. Sophisticated allocators may wish to consider partial hedging strategies — though the cost of hedging Korean won or New Taiwan Dollar exposures has risen alongside the rally itself.

Finally, the geopolitical dimension argues for diversification within Asian EM tech itself, rather than concentrated bets on a single geography. Japan’s semiconductor equipment makers, India’s growing chip design ecosystem, and ASEAN-based assembly and test operations all offer exposure to the AI hardware buildout with differentiated risk profiles.


A New Chapter in Global Capital Flows

History rarely announces its turning points in advance. The decline of British industrial hegemony was not proclaimed in a single moment — it accumulated across decades of relative productivity decline, visible only in the rearview mirror of economic history. The rise of American technological supremacy similarly played out across generations, culminating in the equity market exuberance that made Silicon Valley synonymous with the future itself.

What is happening in Seoul and Taipei today has the texture of another such transition. As recently as the end of 2024, the UK market was roughly twice the size of Korea’s. Today, Korea has overtaken it. South Korea’s KOSPI is up 44% in 2026, having already passed both Germany and France this year. Taiwan’s TAIEX has set consecutive all-time highs. TSMC’s Q1 2026 performance represents its eighth consecutive quarter of double-digit profit growth, driven by surging global demand for advanced AI processors and high-performance computing chips. Seoul Economic Daily

The investors who are already repositioning understand something that the Wall Street consensus has been painfully slow to internalize: the AI revolution is not primarily a software story. It is a hardware story — a story about atoms as much as algorithms, about wafer fabs and memory stacks and advanced packaging as much as transformer architectures and foundation models. And that hardware story, at its productive core, is an Asian story.

The structural reordering of global capital is underway. It may be interrupted by geopolitical shocks, policy miscalculations, or the inevitable compression of valuations that follows any period of extraordinary outperformance. But the underlying shift — semiconductors as the essential infrastructure of the twenty-first-century economy, concentrated in East Asian firms with irreplaceable technological leads — is not reversible on any investment horizon that serious allocators should be contemplating.

The emerging markets that matter most are no longer emerging. They are, in the most literal sense, essential. The markets are finally beginning to price that reality accordingly.




Analysis

San Francisco, AI Capital of the World, Is an Economic Laggard


Artificial intelligence is creating unprecedented wealth at unprecedented speed. Its heartland is not.

On a drizzly Tuesday morning in the Mission District, a billboard advertising a generative AI platform — “Think Faster. Build Smarter. Scale Infinitely.” — towers over a sidewalk encampment where a dozen tents have been a fixture since 2022. Two blocks south, a gleaming co-working space charges $900 a month for a hot desk. Two blocks north, the food bank queue stretches past a mural of César Chávez. This is San Francisco in the age of artificial intelligence: a city simultaneously at the vanguard of history and strangely marooned by it.

The numbers are, by any reckoning, staggering. OpenAI is now valued at $300 billion, a figure that exceeds the GDP of most sovereign nations. Anthropic, its chief rival and fellow San Francisco resident, has attracted a cumulative $12 billion-plus in investment from Amazon and Google alone. Together with Databricks, Scale AI, and more than 90 other Bay Area AI unicorns — firms valued privately at over $1 billion — the region now hosts what economists at the Federal Reserve Bank of San Francisco have described as the most concentrated accumulation of venture-backed artificial intelligence capital in modern economic history. The Bay Area accounts for well over 60 percent of all U.S. AI venture investment, a ratio that has tightened rather than loosened as the boom has matured.

And yet San Francisco, the city itself, is struggling. Not in the polite way that prosperous cities occasionally describe mild slowdowns, but in measurable, sometimes painful ways that resist easy dismissal. Its office vacancy rate has hovered near 35 percent — the highest of any major American city — even as AI firms sign glossy leases in South of Market. The San Francisco Controller’s Office has reported persistent year-over-year declines in sales tax revenues from commercial corridors including the Tenderloin, Civic Center, and parts of SoMa. Overall city payroll employment remains below its 2019 peak. The city’s unemployment rate, which reached 6.1 percent in early 2024, has since eased but remains structurally elevated by the standards of the surrounding Bay Area. A Bureau of Labor Statistics analysis of metropolitan employment trends shows San Francisco County adding technology jobs at a rate significantly slower than Austin, Seattle, and even smaller metros like Raleigh-Durham — cities that lack anything approaching San Francisco’s density of AI valuation.

The paradox is not a curiosity. It is, I would argue, one of the defining economic puzzles of our era, and its resolution has profound consequences for how policymakers, urban planners, and civic leaders worldwide think about the geography of innovation.

The Boom That Doesn’t Boom

To understand why the AI wealth explosion has not translated into broad San Francisco prosperity, it helps to contrast the current moment with earlier technology cycles. The dot-com era of the late 1990s was, economically speaking, a mess — but it was a democratically distributed mess. Web startups hired copywriters, office managers, receptionists, catering staff, and building contractors in droves. The city’s employment base swelled. Restaurants in SoMa ran three seatings on weeknights. The construction crane became the defining civic symbol. When the crash came in 2001, it wiped out paper fortunes but had generated real intermediate employment across a wide swath of the local economy.

The social media boom of the 2010s was more capital-efficient, but its infrastructure still required armies of content moderators, trust and safety reviewers, logistics workers, and a sprawling class of middle-income tech employees — product managers, UX researchers, data analysts — who bought homes in Bernal Heights and spent meaningfully in neighborhood economies. As FRBSF economists noted at the time, each technology job in the Bay Area generated approximately five additional local jobs through multiplier effects: the phenomenon economists call the “local multiplier.”
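
The difference between the eras can be expressed in the multiplier arithmetic itself. A toy comparison: the 5x figure is the FRBSF-cited multiplier from earlier cycles; the thinner alternative is an assumption chosen purely for illustration, not a measured AI-era figure:

```python
def total_local_jobs(direct_jobs: int, multiplier: float) -> float:
    """Direct tech jobs plus the additional local jobs they support
    via the 'local multiplier' effect."""
    return direct_jobs * (1 + multiplier)

# 10,000 direct tech jobs under the ~5x multiplier of earlier cycles:
prior_era = total_local_jobs(10_000, 5.0)   # 60,000 total local jobs
# The same headcount under a hypothetical thin multiplier of 1.5:
thin_era = total_local_jobs(10_000, 1.5)    # 25,000 total local jobs
print(f"{prior_era:,.0f} vs {thin_era:,.0f}")
```

Because frontier labs also employ far fewer direct workers per dollar of valuation, the two effects compound: fewer direct jobs, each supporting fewer indirect ones.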

The AI boom is structurally different, and that difference is not accidental. Frontier AI development is, by design, extraordinarily capital-intensive and astonishingly labor-light relative to the valuations involved. OpenAI employs roughly 3,500 people globally — a workforce smaller than many mid-tier law firms — while commanding a valuation that exceeds ExxonMobil’s market capitalization. Anthropic employs fewer than 1,000. The economics are not those of the dot-com era, with its profligate hiring; they are closer to those of the oil industry, where massive capital pools concentrate wealth among small technical elites and equity holders while the multiplier effects to broader communities remain stubbornly thin. “These are platform technologies, not employment technologies,” as one prominent Bay Area economist, who requested not to be named due to relationships with venture-backed firms, put it to me. “The value accrues to the equity table. The city’s tax base doesn’t feel it the same way.”

The K-Shaped City

The bifurcation this creates has given rise to what urban economists increasingly call the “K-shaped” San Francisco — a local variant of the macroeconomic phenomenon that gained currency during the pandemic’s uneven recovery. At the top of the K, AI founders, early employees with equity, and venture capitalists are accumulating wealth at rates with few peacetime precedents. Median home prices in Pacific Heights and Noe Valley have crossed $2.2 million, sustained not by broad middle-class demand but by a thin layer of extraordinary earners bidding aggressively against one another for a constrained housing stock. A three-bedroom in the Inner Sunset now draws multiple offers above $1.8 million, primarily from engineers with restricted stock units in companies most Americans have never heard of.

At the bottom of the K, conditions are considerably bleaker. San Francisco’s homeless population — estimated by the 2024 Point-in-Time Count at over 7,000 individuals unsheltered on any given night — has not declined meaningfully despite years of city expenditure exceeding $700 million annually on homelessness programs. The San Francisco Unified School District is cutting programs amid declining enrollment, as middle-class families — the teachers, nurses, civil servants, and small business owners who once comprised the city’s civic backbone — are displaced to Contra Costa County, Sacramento, or out of the state entirely. The Mission District, historically the city’s Latino working-class heart, has seen commercial vacancy rates rise and longtime restaurants shutter, replaced by AI-adjacent amenity businesses — cold-brew concept cafés, biohacking studios, prompt-engineering bootcamps — that cater to a narrow professional stratum.

This is not merely a humanitarian concern. It is an economic one. Cities function as ecosystems, and the systematic displacement of intermediate-income households corrodes civic infrastructure in ways that eventually undermine even the elite economy those cities house. When a Financial Times analysis of U.S. innovation hubs found that cities with the highest income inequality consistently show lower rates of long-run per capita GDP growth, San Francisco’s trajectory begins to look less like a triumph of creative destruction and more like a case study in what economists call “extractive urbanism.”

The Geography of the New Boom

There is a further wrinkle that standard economic analysis tends to understate: the AI boom is not happening in San Francisco in the way that previous cycles were. It is happening near San Francisco, in ways that direct economic activity away from the city proper.

OpenAI’s headquarters are in the Mission District, yes — but its massive new data center investments are in Texas and Iowa, where land is cheap and power is abundant. Anthropic’s principal offices are in San Francisco, but its computational infrastructure runs on AWS servers in Northern Virginia. The physical apparatus of AI — the chips, the cooling systems, the high-voltage power grids — is deployed wherever real estate and regulatory conditions are most favorable, which is almost never an expensive American coastal city. NVIDIA, the company that has perhaps done more than any other to make the AI boom possible, is headquartered in Santa Clara. Its revenue — now exceeding $130 billion annually — flows to shareholders and employees distributed globally, with a relatively modest footprint in San Francisco’s commercial property or retail tax base.

Meanwhile, within the Bay Area itself, the center of gravity of AI office activity has shifted from the downtown Financial District — where vacancy remains cavernous — toward specific corridors in SoMa, Mission Bay, and increasingly to the Peninsula cities of Palo Alto and Menlo Park. This is consequential because San Francisco’s tax structure is highly sensitive to downtown commercial activity. The city’s gross receipts and payroll taxes, which generate a substantial portion of the general fund, correlate strongly with downtown office utilization. A CBRE market report from early 2026 found that while AI firms account for the majority of new San Francisco office leases by square footage, average lease sizes are modest — reflecting smaller headcount per dollar of valuation than any previous technology cycle — and many are structured as flexible or short-term arrangements that generate lower assessed values.

The Talent Paradox

The AI boom has also introduced a talent paradox that complicates simplistic narratives about technology creating broadly-shared prosperity. AI frontier labs do not hire broadly — they hire extraordinarily selectively. The competition for PhD-level machine learning researchers has driven starting compensation packages — salary, signing bonus, and equity — to levels that can exceed $1 million annually at OpenAI and Anthropic. These are not the figures of a democratized labor market. They represent the concentration of enormous economic rents into an extremely small professional cohort, most of whom were educated at a handful of elite universities and many of whom are not originally from San Francisco or even the United States.

For local workers without specialized AI credentials, the labor market effects are mixed at best and negative at worst. Research from the Brookings Institution suggests that AI automation is already displacing routine cognitive tasks in the Bay Area — in law, in finance, in customer service — faster than new AI-specific employment is being created for non-specialist workers. A legal secretary in a San Francisco firm, a junior financial analyst at a wealth management boutique, a graphic designer at a marketing agency: these roles are being restructured or eliminated at a pace that the AI boom’s most enthusiastic advocates rarely acknowledge. The net employment effect locally may be, for now, close to zero for workers without advanced technical qualifications — and negative in some sectors.

Policy Implications and the Risk of Imitation

San Francisco’s predicament carries urgent implications for the dozens of cities and regional governments worldwide that are racing to position themselves as “AI hubs” — from London’s Silicon Roundabout to Seoul’s Digital Innovation District, from Dubai’s AI Quarter to Paris’s Station F. The implicit logic of these initiatives is that concentrating AI capital and talent generates broad local prosperity. San Francisco’s experience suggests the causality is considerably weaker than assumed.

What might more inclusive AI urbanism look like? Several interventions merit serious consideration. First, taxation structures designed for an earlier technology era may be poorly calibrated for AI economics. A gross receipts tax that applies equally to a labor-intensive restaurant and a capital-intensive AI lab captures very different slices of economic activity. Policymakers in San Francisco — and elsewhere — should explore mechanisms that capture a larger share of the capital gains and equity appreciation generated by AI firms, rather than relying primarily on payroll and commercial activity taxes that AI firms generate only modestly.

Second, housing supply is not a peripheral concern. The bifurcated real estate market that AI wealth is intensifying is squeezing out the intermediate-income households whose presence makes a city function. Serious upzoning — not the incrementalist versions that California has periodically attempted — combined with mandatory inclusionary requirements calibrated to actual construction costs, is an economic necessity, not merely a social preference.

Third, there is a role for proactive investment in AI-adjacent skills among existing residents. The notion that AI’s benefits will trickle down automatically is not supported by San Francisco’s data. Active reskilling programs, community college partnerships with AI firms, and apprenticeship models — of the kind that Germany’s Fraunhofer Institutes have pioneered for industrial technology — represent a more deliberate approach to inclusive AI growth.

The Longer View

It would be premature to conclude that San Francisco’s current economic weakness is permanent. Technology cycles are long, and second-order effects take time to materialize. The dot-com crash of 2001 looked, in the moment, like an economic catastrophe from which the city might never recover. A decade later, the mobile and social media boom had transformed San Francisco into one of the most dynamic urban economies in the world.

It is possible — perhaps even probable — that AI will eventually generate broader employment effects as the technology matures, as AI-native businesses proliferate beyond the frontier labs, and as demand for AI-enabled products and services creates new categories of work that are difficult to foresee today. Scholars of technological change, from the economic historian Joel Mokyr to the labor economist David Autor, have consistently found that transformative technologies ultimately create more employment than they destroy, even if the transition imposes severe distributional costs.

But the transition is the point. San Francisco is living through the transition right now, and its current management of that transition — the housing dysfunction, the displacement of intermediate-income households, the failure of AI wealth to flow through the city’s fiscal architecture — will determine whether the city emerges from this moment as a model or a cautionary tale.

The AI billboard in the Mission District promises to think faster, build smarter, scale infinitely. Below it, a man in a faded blue sleeping bag stirs as the morning fog burns off the Bay. San Francisco has always been a city of extraordinary distances between aspiration and reality. The AI boom has simply made those distances more visible, and the urgency of closing them more acute.

The world is watching. San Francisco, for its own sake and for the sake of every city that hopes to follow its model, would do well to notice.



Copyright © 2025 The Economy, Inc. All rights reserved.
