Analysis
How Beijing’s Block of Meta’s Manus Deal Is Redrawing the Map of Global AI
China’s rejection of Meta’s $2 billion acquisition of agentic AI startup Manus is not merely a regulatory footnote. It is a strategic declaration—one that signals AI has crossed a threshold from commercial asset to national sovereign resource, and that the old playbook for internationalizing Chinese technology is finished.
By the middle of 2025, a new term had entered the quiet lexicon of venture capitalists navigating the US-China technology rift: “Singapore-washing.” The concept was simple enough. A promising Chinese AI startup, wary of rising geopolitical risk, would shift its legal registration to a neutral Southeast Asian jurisdiction, relocate a portion of its team, raise Western capital, and present itself to global markets as a de facto global firm—one that happened to be founded by Chinese engineers but was now safely incorporated outside the dragon’s reach.
Manus looked, for a time, like the paradigmatic success story of this model. Founded as the Monica.im project by Beijing Butterfly Effect Technology, it released its first general-purpose AI agent in March 2025 to breathless comparisons with DeepSeek. The startup had passed $100 million in annual recurring revenue by December 2025—eight months after launching a product—claiming at the time to be the fastest company in the world to reach that milestone from zero, CNBC reported. It had moved staff to Singapore mid-year, raised $75 million in a round led by Silicon Valley’s Benchmark, and attracted the attention of Meta Platforms, which announced a $2 billion acquisition in December 2025.
Then, on April 27, 2026, Beijing slammed the door.
China’s National Development and Reform Commission ordered the deal’s cancellation in a brief statement, citing unspecified laws and regulations, Bloomberg reported. The message, however, was anything but brief in its implications.
The Anatomy of a Blocked Deal
To understand why Beijing intervened, one must look past the Singapore address on Manus’s incorporation documents. According to Chinese regulatory findings reported by TechNode, Manus’s core technologies were developed in China and involve processing massive amounts of user data. Its China-based affiliated entities—Beijing Red Butterfly Technology and Beijing Butterfly Effect Technology—remained active, and the technical origin and domestic entities had not been legally separated.
In Beijing’s legal reading, this was not a Singaporean company selling itself to an American buyer. This was Chinese intellectual property—developed on Chinese soil, using Chinese data infrastructure—attempting to depart through a side door without paying the sovereign toll.
Under China’s Measures for the Security Review of Foreign Investment, the Catalogue of Technologies Prohibited and Restricted from Export, and the Measures for Security Assessment of Data Export, core AI algorithms fall under restricted export technologies, requiring compliance with technology export licensing procedures and data security assessment requirements, TechNode noted. The parties, regulators alleged, had circumvented these procedures entirely.
The human consequences were stark. The Financial Times reported in March that Manus co-founders Xiao Hong and Ji Yichao had been subjected to exit bans—barred from leaving China as the investigation deepened. Meanwhile, Manus employees had already moved into Meta offices in Singapore, capital had been transferred, and exiting investors including Tencent, ZhenFund, and Hongshan had received their proceeds, Business Standard reported. Unwinding the deal, as the NDRC now demands, will be legally and logistically byzantine.
Singapore-Washing: A Model Under Siege
The Manus affair has exposed what was always the fundamental vulnerability in the Singapore-washing strategy: geography is not sovereignty. A legal address in one of the world’s most business-friendly jurisdictions cannot override the reality of where technology was conceived, where engineers were trained, and where the underlying data originates.
Beijing is sending a message, and it reads clearly: residency in Singapore will no longer insulate Chinese-founded companies from regulatory scrutiny, forcing founders to choose between Western capital and Chinese ties, as Yahoo Finance observed.
This represents a seismic shift in the operating assumptions of Chinese entrepreneurship over the past decade. The variable interest entity (VIE) structure, red-chip listings, offshore incorporation, talent relocation—these were all instruments of a system that allowed Chinese innovation to access global capital while remaining, in practice, deeply embedded in China’s domestic ecosystem. Beijing tolerated this ambiguity when the technology in question was consumer internet. It is no longer prepared to do so when the asset in question is frontier artificial intelligence.
The reasoning, while heavy-handed in execution, is not without internal logic. According to Stanford University’s 2026 AI Index report, the long-standing performance gap between US and Chinese AI systems has effectively disappeared, with models from both countries now competing at comparable levels—and Chinese systems like DeepSeek-R1 at times matching leading US models, International Business Times noted. When you are operating in a near-peer technological competition with the world’s largest economy, allowing your most valuable intellectual assets to be acquired by a direct rival is not merely bad commercial strategy. From Beijing’s perspective, it is a national security failure.
The Broader Crackdown: Capital Controls Enter the AI Age
The Manus block did not arrive in isolation. It is the visible tip of a rapidly submerging policy architecture.
Chinese agencies including the NDRC have told several private firms in recent weeks that they should reject capital of US origin in funding rounds unless explicitly approved, Yahoo Finance reported. The scope of these instructions is remarkable. Moonshot AI, which is considering an initial public offering, was among those that received guidance from the state planner. Fellow startup StepFun received similar instructions. Regulators have also decided on similar restrictions for ByteDance, the owner of TikTok, blocking secondary share sales to US investors without government approval.
Consider the scale of disruption this implies. Moonshot AI is seeking to raise as much as $1 billion in a funding round that would value the startup at approximately $18 billion. StepFun, which is considering a $500 million float in Hong Kong, is in the process of unwinding its overseas entities and onshoring capital to meet regulatory requirements.
The new restrictions risk further isolating China’s recovering tech sector from the venture backing that has underpinned it for two decades, much of which was sourced from American pensions and endowments, Yahoo Finance noted. Together with the parallel move to restrict red-chip firms from seeking Hong Kong IPOs, this closes off two of the primary channels through which Chinese companies have historically accessed Western capital markets. The architecture of financial globalization that powered China’s technology rise is being deliberately disassembled, brick by brick.
A Mirror Image: The Decoupling Is Now Bilateral
To appreciate the full symmetry of what is happening, one must look westward as well. Washington did not wait for Beijing to move first. The US Treasury Department finalized rules in 2025 restricting American outbound investment into Chinese companies operating in AI, semiconductors, and quantum computing. Congressional proposals including the COINS Act have sought to formalize and expand these restrictions. The Committee on Foreign Investment in the United States (CFIUS) has increasingly scrutinized technology transactions involving Chinese counterparties, regardless of where the formal acquirer is registered.
In other words, Beijing’s move is not an unprovoked act of protectionism. It is, in considerable measure, a mirror response to the regime Washington has been constructing for years, an escalation that, as FX Leaders observed, echoes the outbound-investment restrictions the US imposed on Chinese AI, semiconductor, and quantum computing companies months earlier on national security grounds. Both governments now operate from the same strategic premise: advanced artificial intelligence is not a commercially neutral technology. It is a lever of national power, and allowing adversaries to access it—through acquisition, investment, or talent migration—is a strategic error.
What has changed, dramatically and perhaps irreversibly, is the speed at which this mutual calculus is hardening. The Manus affair compressed years of latent tension into months of regulatory escalation. What once required a geopolitical incident to trigger can now be set off by a single startup’s term sheet.
The Chilling Effects on Innovation and Entrepreneurship
None of this, however, occurs without cost—and Beijing’s calculus, however strategically coherent, carries real dangers for the ecosystem it claims to protect.
The founders of Manus did not set out to betray China. They built a remarkable technology, attracted global capital, and attempted to navigate a world in which their homeland’s commercial and regulatory environment had become increasingly inhospitable to internationally ambitious startups. The exit bans imposed on Xiao Hong and Ji Yichao—whatever the legal justification—send a chilling signal to every talented Chinese engineer who might contemplate building for a global market.
Chinese government scrutiny of AI companies could impact other Chinese startups’ strategies for expansion and funding in the United States, The Washington Post noted. The subtler damage is harder to measure but likely deeper: the erosion of the entrepreneurial confidence that produced companies like Manus in the first place. If the reward for building a breakout AI startup in China is an exit ban and a forced transaction reversal, the rational response for ambitious engineers is either to stay entirely within the domestic system—or to leave China earlier and more completely than Manus ever did.
Neither outcome serves Beijing’s long-term interests. The first risks insulating Chinese AI development from the competitive pressure that drives frontier innovation. The second accelerates precisely the kind of talent drain the crackdown is ostensibly designed to prevent.
Implications for Global Investors and the Future of Agentic AI
For the global venture capital community, the Manus block is a hard lesson in jurisdictional risk. The Singapore-registration playbook worked, until it didn’t. Benchmark, which led Manus’s Series A at a $500 million valuation, now finds itself holding an asset caught in one of the most complex geopolitical unwindings in recent startup history. Future investments in companies with significant Chinese technical heritage will require a level of regulatory due diligence that venture firms have historically neither staffed nor priced.
The specific domain of agentic AI—autonomous systems capable of executing complex multi-step tasks across diverse environments—makes this regulatory conflict particularly consequential. Agentic AI is widely viewed as the next major frontier of commercial AI deployment, with applications spanning enterprise automation, scientific research, and consumer productivity. When Meta announced the Manus deal, it said it would look to accelerate AI innovation for businesses and integrate advanced automation into its consumer and enterprise products, including its Meta AI assistant, CNBC reported. The block does not merely deprive Meta of a team and a technology. It delays and fragments the development of a genuinely transformative technology category at the precise moment when the competitive race is most intense.
Recommendations: Navigating the New Landscape
The structural forces driving this bifurcation are not going away. But they need not produce an outcome that is worse for everyone. Several principles should guide policymakers, investors, and technologists in the period ahead.
For governments: Both Washington and Beijing should establish clearer, more predictable frameworks for cross-border technology investment reviews. The opacity of China’s current approach—a one-line NDRC statement, exit bans without charges, informal guidance to portfolio companies—creates uncertainty that harms legitimate commercial activity far beyond the specific deals under scrutiny. Rules that are knowable in advance are less disruptive than rules that arrive by surprise.
For investors: Structural due diligence must now include technology provenance analysis—understanding not just where a company is registered, but where its core intellectual property was developed, where its data originates, and whether its founding team could face legal constraints in either jurisdiction. Geography of incorporation is no longer a sufficient proxy for legal exposure.
For technology companies: The era of the “borderless startup” in AI is functionally over. Companies with genuine global ambitions must make earlier, cleaner decisions about their primary regulatory home. Ambiguity that was once commercially convenient has become a liability that can be weaponized by regulators on either side of the Pacific.
The Longer View
History will likely record the Meta-Manus episode as one of those moments when the underlying logic of an era became suddenly, viscerally legible. For the better part of two decades, the world operated on the assumption that technology and capital were, at their core, cosmopolitan forces—that they would flow toward talent and opportunity regardless of national boundaries, and that this flow was ultimately good for everyone.
That assumption is not dead. But it is seriously wounded. The digital arteries connecting the US and Chinese tech sectors, as TechStory put it, are being severed one funding round at a time as the NDRC tightens its grip.
The Manus block is not the end of Chinese AI innovation—China’s engineers are too numerous, too talented, and too well-supported by state capital for that. Nor is it the end of Meta’s ambitions in agentic AI. But it is the end of the comfortable fiction that the US-China technology competition could be navigated by clever corporate structuring. The battle lines are now drawn at the level of technology itself—who built it, where, with whose data, and for whose benefit.
In that contest, there are no neutral flags of convenience, and Singapore is no longer far enough away.
Discover more from The Economy
Subscribe to get the latest posts sent to your email.
Walmart’s New Streaming Stick Is the Quiet Disruption Big Tech Didn’t See Coming
The Onn 4K Streaming Stick doesn’t arrive with fanfare. It doesn’t need it.
There were no press invites. No breathless product launches livestreamed to a million viewers. No carefully rehearsed executives in black turtlenecks. Sometime in early April 2026, a Reddit user in Texas walked into their local Walmart, spotted a compact HDMI dongle on the shelf — the Onn 4K Streaming Device — bought it for roughly $30, and posted about the find. Within days, the post had gone viral in streaming enthusiast circles. By week two, benchmark sites had torn it apart. By week three, analysts were quietly asking a question that felt almost impertinent: Has Walmart just upended the streaming hardware market without saying a single word about it?
The answer, this columnist argues, is essentially yes — and the implications run deeper than silicon and software.
Walmart’s new streaming stick is not a toy. It is not a charity product or a loss leader dressed in plastic. It is, beneath its understated exterior, a pointed statement about who owns the future of home entertainment, how accessible that future should be, and whether Silicon Valley’s approach to streaming hardware — iterative, incremental, and increasingly expensive — is starting to run out of road.
The Spec Sheet That Should Make Roku Nervous
Let’s begin with the basics, because the basics are where this story gets interesting.
The Onn 4K Streaming Device (2026) — Walmart’s first-ever 4K streaming stick, as opposed to its existing set-top boxes — runs Google TV, supports 4K Ultra HD resolution, decodes AV1, delivers Dolby Atmos audio, and ships with a voice remote that puts Google’s Gemini assistant at the tip of your tongue. Under the hood, it is powered by a Realtek RTD1325 processor with a quad-core 1.7 GHz ARM Cortex-A55 CPU and an ARM Mali-G57 GPU, paired with 2GB of RAM and 8GB of storage. Connectivity is handled via dual-band Wi-Fi 5 and Bluetooth 5.2. Power and accessories run through a single USB-C port — a welcome upgrade from the Micro-USB common on budget devices of a generation ago.
The price? Between $19.88 and roughly $30, depending on store location and timing.
Compare that to its nearest competitors. The Amazon Fire TV Stick 4K Plus retails at roughly $50 and, in benchmark testing conducted by AFTVNews, outperforms the Onn 4K Stick by approximately 15 percent in raw processing power. The Roku Streaming Stick 4K sits at a similar price tier. And Google’s own Chromecast successor, the Google TV Streamer, costs $79.99 — a device that the newer, pricier Onn 4K Pro (2026) reportedly bests in benchmark performance at roughly three-quarters the price.
The Onn 4K Stick, to be clear, is not the fastest device on the market. It trades raw horsepower for something arguably more valuable in 2026: radical affordability at 4K capability. For the tens of millions of households that want to upgrade an aging 4K television without committing to a $50–$80 streaming device, this stick represents a genuinely new entry point.
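A rough price-per-performance sketch makes the trade-off concrete. The performance figures below are the relative benchmark deltas cited in this piece, indexed to the Onn stick; the Xiaomi price is an assumption for illustration, not a quoted retail figure:

```python
# Rough price-per-performance sketch using the relative benchmark figures
# cited in this article. Performance is indexed to the Onn 4K Stick = 100;
# all prices are approximate, and the Xiaomi price is an assumption.
devices = {
    "Onn 4K Streaming Stick": (30, 100),
    "Fire TV Stick 4K Plus":  (50, 115),  # ~15% faster per AFTVNews
    "Xiaomi TV Stick 4K":     (50, 127),  # ~27% faster per AFTVNews
}

for name, (price, perf) in devices.items():
    print(f"{name:24s} ${price:>3}  perf index {perf:>3}  perf per dollar {perf / price:.2f}")
```

Even granting the faster silicon in the pricier sticks, the Onn device comes out ahead on performance delivered per dollar, which is exactly the metric its target buyer cares about.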
The Unremarkable Launch That Says Everything
The way Walmart launched — or rather, didn’t launch — the Onn 4K Streaming Stick is itself a lesson in retail philosophy.
There was no announcement. No coordinated press push. Units simply appeared in select stores, were purchased by curious early adopters, photographed, shared on Reddit and YouTube, stress-tested by enthusiast communities, and covered by tech outlets weeks before Walmart acknowledged the product’s existence online. As of late April 2026, the company’s website listings for the device have only recently gone live for most users, and a formal launch is still pending in many markets.
This is not an accident. Walmart has a documented pattern of soft-launching Onn devices — the 4K Plus, the previous 4K Pro — in exactly this manner. But the effect goes beyond mere supply chain staggering. What Walmart achieves through this approach is something more valuable in the attention economy: organic credibility. When a product is found rather than marketed to you, when enthusiasts dissect it of their own volition, when the first reviews come from real buyers rather than brand ambassadors, the resulting coverage is qualitatively different. It reads as discovery. It feels like truth.
For a company that has struggled — as all major retailers have — to position itself as a technology innovator rather than a discount warehouse, that credibility matters enormously.
The Real Competition: Not Amazon or Roku, But the Cost of Streaming Itself
Here is the context that most reviews of the Onn 4K Stick have missed, buried as they are in chipset comparisons and frame-rate analyses.
The average American household now pays more than $100 per month in combined streaming subscriptions. Between Netflix, Disney+, Max, Peacock, Paramount+, Apple TV+, and the array of sports streaming services that have migrated from traditional cable — the economics of cord-cutting no longer deliver the savings they once promised. The great unbundling of cable television, celebrated as a consumer liberation a decade ago, has quietly re-bundled itself at roughly the same price, minus the sports and local news that many viewers actually want.
In this context, hardware costs matter more than they used to. When you are already paying $120 a month in subscriptions, the difference between a $30 streaming stick and an $80 one isn’t trivial. It’s roughly three months of a typical single subscription. It’s a family dinner. It’s the kind of money that is genuinely meaningful to the median American household — whose real income has grown modestly while its entertainment bill has expanded considerably.
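The arithmetic is easy to check. A minimal sketch, using the figures above plus an illustrative single-service price (the $16.99 figure is an assumption, not a quoted rate):

```python
# Back-of-the-envelope check of the subscription-vs-hardware arithmetic above.
ONN_STICK = 30.00          # approximate Onn 4K Stick price
PREMIUM_STREAMER = 80.00   # approximate price of a premium streamer
MONTHLY_BUNDLE = 120.00    # combined monthly subscription spend cited above
SINGLE_SERVICE = 16.99     # assumed price of one ad-free streaming service

saving = PREMIUM_STREAMER - ONN_STICK
print(f"Hardware saving: ${saving:.2f}")                           # $50.00
print(f"Share of one month's bundle: {saving / MONTHLY_BUNDLE:.0%}")
print(f"Months of a single service: {saving / SINGLE_SERVICE:.1f}")
```

Fifty dollars is a meaningful fraction of a single month’s streaming bill, and close to a quarter’s worth of one subscription — money a value-conscious household notices.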
Walmart understands this arithmetic better than almost any other technology distributor on earth. Its core customer — middle-income, value-conscious, deeply embedded in the retailer’s ecosystem through Walmart+ — is precisely the person for whom a $30 4K streaming stick isn’t a compromise. It’s the right choice.
This is why the Onn 4K Streaming Device should not be read as a product primarily competing with the Fire TV Stick or Roku. It is, at a deeper level, competing with the psychological friction of streaming itself — the sense that premium home entertainment requires ongoing premium investment. It argues, in silicon and software, that it doesn’t.
Google TV’s Unlikely Beneficiary
There is a secondary story here, equally significant, about the fate of Google TV as a platform.
Google’s own streaming hardware ambitions have had a complicated decade. The original Chromecast redefined how people thought about wireless media casting. The Chromecast with Google TV 4K, launched in 2020, was a genuine breakthrough. But subsequent iterations have been incremental, overpriced relative to their performance, and undermined by the quiet sidelining of the Chromecast brand itself — which Google has, for all practical purposes, discontinued as a named product line.
Into this vacuum have stepped third-party manufacturers running Google TV. And of those manufacturers, Walmart’s Onn brand has become, arguably, the most consequential champion of the platform in the United States. The new Onn 4K Stick ships with Gemini pre-installed as the default AI assistant — positioning Google’s latest AI offering not on a Google-branded device, but on a $30 Walmart dongle. The irony is sharp, and entirely deliberate on Google’s part: Google needs distribution, and Walmart provides it at a scale no tech company can match organically.
Google TV now reaches more homes through Onn than through its own hardware. That is a remarkable state of affairs, and it speaks to the fundamental restructuring of the streaming platform wars — where the battle is no longer primarily about hardware design but about operating system reach and data access.
For Google, every Onn device activated is a Google account signed in, a voice search conducted, a YouTube Premium promotion delivered, a Google Play purchase made. The economics of platform distribution have never been clearer: it is better to be the operating system on a $30 device in 50 million homes than the premium hardware in 5 million living rooms.
What the Onn 4K Stick Does Well — and Where It Falls Short
Balanced analysis demands honesty. The Onn 4K Streaming Device has real strengths, but also real limitations worth examining carefully before purchase.
Strengths:
- Price-to-feature ratio: At $30, the combination of 4K output, Dolby Atmos, AV1 decoding, Google TV, and Gemini assistant is genuinely difficult to match in the market.
- Google TV ecosystem: Access to the Google Play Store, 700,000+ movies and shows, 10,000+ apps, and 1,700+ free live TV channels — all unified under Google TV’s content-aggregation interface — represents a vast and well-maintained ecosystem.
- USB-C power: The upgrade from Micro-USB is functionally significant; USB-C is universal, durable, and future-proof at this price point.
- Gemini integration: AI-powered search and discovery on a budget device is a meaningful differentiator as voice control becomes increasingly central to how viewers navigate fragmented content libraries.
- AV1 decoding: Support for this next-generation codec, used by YouTube, Netflix, and others for superior compression efficiency, suggests the device is built with at least some longevity in mind.
Weaknesses and Caveats:
- Benchmark performance gap: As AFTVNews benchmarking confirms, the Onn 4K Stick trails the Fire TV Stick 4K Plus by approximately 15 percent in raw processing power, and the Xiaomi TV Stick 4K by around 27 percent. For casual viewers, this gap will be invisible. For those who run multiple apps simultaneously or demand instantaneous UI response, it may be perceptible.
- No Dolby Vision: Unlike the Onn 4K Pro, the stick variant does not appear to support Dolby Vision HDR — a meaningful omission for viewers with Dolby Vision-capable televisions who wish to see colour at its most accurate.
- Limited storage: 8GB is functional but not generous. Aggressive app installers will feel the constraint.
- Build quality unknowns: Walmart has not publicized third-party quality certification data, and early user reports — while generally positive — come from a limited sample. Long-term durability remains an open question.
- Software update longevity: This is, for this analyst, the most significant unknown. Budget devices from retail brands have a mixed history of OS support. Whether Walmart commits to multi-year Android security patches and Google TV updates for the Onn 4K Stick will go a long way toward determining its long-term value proposition.
A Comparison Worth Making
| Device | Price (approx.) | Resolution | Dolby Vision | Dolby Atmos | RAM | Storage | Platform |
|---|---|---|---|---|---|---|---|
| Onn 4K Streaming Stick (2026) | ~$30 | 4K UHD | ❌ | ✅ | 2GB | 8GB | Google TV |
| Amazon Fire TV Stick 4K Plus | ~$50 | 4K UHD | ✅ | ✅ | 2GB | 8GB | Fire OS |
| Roku Streaming Stick 4K | ~$50 | 4K UHD | ✅ | ✅ | — | — | Roku OS |
| Google TV Streamer | ~$80 | 4K UHD | ✅ | ✅ | 4GB | 32GB | Google TV |
| Onn 4K Pro (2026) | ~$60 | 4K UHD | ✅ | ✅ | 3GB | 32GB | Google TV |
The table is instructive. At $30, the Onn 4K Stick competes meaningfully — even if not identically — with devices costing significantly more. For first-time 4K upgraders, secondary television rooms, student apartments, or households prioritizing subscription costs over hardware investment, the calculus tilts clearly in Onn’s favour.
The Walmart Advantage: Distribution as Strategy
There is a dimension to this story that is almost never discussed in gadget-focused coverage: the strategic significance of Walmart’s physical retail footprint.
Walmart operates approximately 4,600 stores in the United States. It reaches more American communities — including rural towns where broadband infrastructure and consumer electronics options are limited — than any other retailer on earth. When Walmart puts the Onn 4K Stick on its shelves, it doesn’t just sell a product. It introduces the possibility of 4K streaming to communities that may have no Best Buy, no Target with a substantial electronics section, and whose residents may not routinely shop technology on Amazon.
This is the dimension that gives Walmart’s new streaming stick genuine cultural significance. In an era when the digital divide — between households with rich, full-spectrum media access and those without — remains a live and serious challenge, a $30 4K streaming device distributed through 4,600 stores is not merely a consumer product. It is infrastructure, of a kind. Not perfect infrastructure, not a complete solution to the access problem, but a meaningful step in the direction of equalization.
Entertainment, particularly in times of economic stress, functions as more than leisure. It is social cohesion. It is cultural participation. It is, in households with children, an educational resource. The democratization of access to it — even imperfectly, even with caveats — matters in ways that benchmark scores cannot quantify.
The Broader Reckoning for Streaming Hardware
The Onn 4K Stick’s emergence coincides with what appears to be a genuine inflection point in the streaming hardware market.
Amazon’s Fire TV has drifted steadily away from its Android roots in favour of an increasingly proprietary Fire OS — a decision that has constrained sideloading capabilities and made the platform more walled than it was in its earlier, more open years. Roku, for all its interface elegance, operates a closed ecosystem with limited customization. Google’s own hardware ambitions, as noted, have stalled. Apple TV 4K remains premium, powerful, and priced accordingly for a market segment that is not expanding.
Into this landscape comes an open, Google TV-powered device, sold through the world’s largest retailer, at a price point that functionally removes cost as a barrier to 4K streaming adoption. That is a meaningful competitive event — not merely a product launch.
The incumbents are not blind to this. Amazon’s Fire TV team will have seen the benchmark numbers. Roku’s strategists will have noted the price. But the structural advantage Walmart possesses — its supply chain, its store network, its customer relationships, and its willingness to use hardware as a tool of ecosystem building rather than a profit centre in itself — is not easily replicated by companies whose hardware divisions are expected to be standalone businesses.
The Question No One Is Asking Yet
As this columnist writes, the Onn 4K Streaming Stick is still making its way to store shelves nationwide, its official launch yet to be formally announced. In a few weeks, it will be reviewed comprehensively, benchmarked exhaustively, and discussed at length on every major technology platform.
Most of that coverage will focus on the right questions: Is the picture quality good? Does the remote feel cheap? Will it handle Netflix 4K without buffering?
But the question worth sitting with — the one that this particular product, at this particular moment, forces into view — is a different one entirely.
What does it mean when the most consequential advancement in the democratization of premium streaming comes not from a Silicon Valley lab or a Big Tech product event, but from the electronics shelf of a big-box retailer, launched without a press release, discovered by a Reddit user in Texas?
It means, perhaps, that the future of accessible technology has always been less about innovation and more about distribution. Less about the bleeding edge and more about the trailing hundreds of millions. Less about who can make the most sophisticated device and more about who can make a good-enough device available to everyone, everywhere, at a price that asks nothing of them beyond showing up.
Walmart has been doing that for seventy years. The Onn 4K Streaming Stick is simply the latest, most quietly radical expression of it.
The streaming wars, it turns out, may not be won by the company with the best algorithm or the most exclusive content. They may be won by the company with the most parking spaces.
The Giant Stirs Again: How Falcon Heavy’s Return and the ViaSat-3 Constellation Signal a New Chapter in the Satellite Broadband Wars
SpaceX’s Falcon Heavy returns to flight on April 27, 2026, launching the ViaSat-3 F3 Asia-Pacific satellite from LC-39A. Only its 12th mission in history, this rare flight completes Viasat’s global broadband constellation and reshapes the GEO vs. LEO satellite broadband competition. Here’s what it means for the new space economy.
At 10:21 a.m. Eastern Time on Monday, April 27, 2026, the most powerful operational commercial rocket on Earth — and one of its rarest fliers — ignites its twenty-seven Merlin engines simultaneously at Kennedy Space Center’s storied Launch Complex 39A. The ground shakes the way the ground is supposed to shake near a rocket: not from a single source, but from a column of fire wide enough to seem geological. Falcon Heavy’s triple-core frame, generating more than 5.1 million pounds of thrust, clears the tower in a wall of sound. Then, minutes later, comes the signature spectacle — two side boosters separating and wheeling back toward Cape Canaveral in precise, mirror-image arcs, touching down on Landing Zones 1 and 2 with the kind of choreography that still, somehow, feels impossible. The central core flies on, burns everything it has left, and falls into the Atlantic. Its sacrifice is the price of delivering a six-and-a-half-metric-ton satellite to geostationary transfer orbit.
This is Falcon Heavy’s twelfth flight in its eight-year operational life. Twelve. The number is almost deliberately understated for a vehicle of this capability. And that rarity — the extended eighteen-month hiatus since its previous mission, NASA’s Europa Clipper in October 2024 — is itself a story worth telling, because it reveals as much about where the commercial space economy is heading as the launch it frames.
A Rocket Reserved for Giants
Understanding why Falcon Heavy flies so seldom requires understanding what it is and what it isn’t. Falcon Heavy is not SpaceX’s everyday workhorse; that role belongs to Falcon 9, which has become perhaps the most routinely astonishing piece of engineering in contemporary aerospace history, completing an extraordinary 165 launches in 2025 alone. Falcon Heavy is something else: a vehicle summoned for missions too massive, too energetic, or too classified for a standard Falcon 9 to handle. It is the draft horse you bring out when the load demands it and put back in the barn when ordinary work resumes.
At a listed price of approximately $97 million per launch in its reusable configuration — and roughly $150 million in fully expendable form — Falcon Heavy is already a relative bargain compared to the now-retired Delta IV Heavy, which cost ULA customers between $350 and $400 million per flight. But the market for truly heavy payloads simply isn’t large enough to sustain monthly cadence, and SpaceX has never pretended otherwise. The vehicle was designed for a specific tier of mission: very large commercial communications satellites, deep-space science flagships too heavy for a single Falcon 9, and high-orbit national security payloads demanding maximum throw weight. When those missions come, Falcon Heavy flies. When they don’t, it waits.
What brings it back today is the final satellite of Viasat’s ambitious ViaSat-3 program: the ViaSat-3 F3 spacecraft, destined for the Asia-Pacific region, built by Boeing, and configured with a Ka-band payload designed to add more than one terabit per second of broadband capacity to Viasat’s global network. At approximately 6.6 metric tons, ViaSat-3 F3 is too heavy for a Falcon 9 to lift to the transfer orbit Viasat needs — particularly one favorable enough for the satellite’s electric propulsion to complete the journey to geostationary orbit on a reasonable timeline. As confirmed by Viasat’s own leadership, Falcon Heavy’s superior performance means the spacecraft can be delivered to an orbit just below geostationary apogee with only about three degrees of inclination — cutting weeks off the months-long electric orbit-raising process compared to what an Atlas V delivery required for ViaSat-3 F2.
The Mission in Detail: Engineering a Global Network
The technical architecture of this mission rewards attention, because it illustrates exactly why some satellite programs still require the big rocket rather than the commercially expedient one.
ViaSat-3 F3 will be deployed to geosynchronous transfer orbit — an elliptical orbit with a perigee in the low tens of thousands of kilometers and an apogee near geostationary altitude — approximately five hours after liftoff from LC-39A. From there, the spacecraft’s all-electric propulsion system takes over, gradually raising and circularizing the orbit over the course of roughly two months until ViaSat-3 F3 arrives at its reserved slot at 158.55 degrees East longitude, directly above the Pacific Ocean at the geostationary altitude of 35,786 kilometers. Once in position, the spacecraft will undergo rigorous bus and payload testing ahead of a commercial service entry that Viasat expects by late summer 2026.
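The 35,786-kilometer figure quoted throughout this piece is not arbitrary: it falls directly out of Kepler’s third law for a circular orbit whose period matches one sidereal day. A minimal sketch — the gravitational parameter, Earth radius, and sidereal-day constants below are standard published values, not figures from Viasat:

```python
import math

MU_EARTH = 398600.4418    # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6378.137        # Earth's equatorial radius, km
T_SIDEREAL = 86164.0905   # one sidereal day, s

# Kepler's third law: a circular orbit with a period equal to Earth's
# rotation period sits at the geostationary radius.
a_geo = (MU_EARTH * (T_SIDEREAL / (2 * math.pi)) ** 2) ** (1 / 3)
altitude = a_geo - R_EARTH

print(f"GEO altitude ≈ {altitude:,.0f} km")  # ≈ 35,786 km
```

Any satellite parked lower orbits faster than the Earth turns, and any satellite parked higher orbits slower; only at this one altitude does a spacecraft hang motionless over a single longitude such as 158.55 degrees East.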
The satellite itself is a remarkable piece of engineering: a fully flexible Ka-band broadband spacecraft designed to direct its capacity dynamically, rather than assigning fixed amounts of spectrum and power to fixed geographic beams as earlier generations of GEO satellites did. In the words of Viasat’s vice president of space systems, Dave Abrahamian, the constellation’s hallmarks are “a huge amount of absolute capacity, but also the flexibility to put it wherever you need it, whenever you need it.” Traditional satellites — including Viasat’s own earlier generations — operate more like fixed highway lanes: once built, the bandwidth goes where the beams point, regardless of where demand actually flows on any given day. ViaSat-3 F3 is architected to be more like a managed network, allocating spectrum and power dynamically in response to real-time demand.
This flexibility matters enormously for the commercial aviation market, which constitutes one of Viasat’s primary revenue streams. Airline routes shift seasonally and commercially. Demand spikes during peak travel periods and across high-traffic corridors. A satellite that can concentrate capacity over the North Pacific during the morning push and redistribute it over Southeast Asian leisure routes in the afternoon represents a fundamentally different commercial proposition than one locked into static beam patterns.
For the booster side of the mission, SpaceX will fly side boosters B1072 and B1075 back to Cape Canaveral Space Force Station, landing at LZ-2 and the recently commissioned LZ-40 respectively. B1075 carries a flight heritage that includes SDA orbital transport missions, multiple Starlink deployments, and an international synthetic aperture radar spacecraft. Their recovery is not merely theater — it is the economic logic underlying SpaceX’s cost model, allowing the amortized cost of booster manufacturing to be spread across multiple flights. The central core, carrying nothing but a nearly empty propellant load by the time it has done its work, will be expended — a trade-off SpaceX has consistently made on GTO missions demanding maximum performance from the vehicle’s core stage.
Completing the Constellation: What ViaSat-3 F3 Means for Viasat
The ViaSat-3 program has not had an easy journey. When ViaSat-3 F1 arrived in orbit in May 2023, engineers discovered an antenna deployment anomaly that severely constrained the satellite’s throughput — reducing it to an estimated 5 to 10 percent of its intended capacity. For a company that had bet heavily on this generation of satellites to compete against the rising LEO constellations, the setback was consequential. Customers noticed. Starlink, with its terrestrially-derived latency characteristics and rapidly growing coverage, captured aviation connectivity contracts that Viasat had hoped to retain.
The setback also complicated Viasat’s financial position at a moment when the company was simultaneously integrating its transformative 2023 acquisition of Inmarsat — a deal that expanded the company’s maritime and government connectivity business dramatically but also loaded the balance sheet. ViaSat-3 F2, the second spacecraft in the constellation targeting the Americas and EMEA regions, flew on a ULA Atlas V and has been progressing through in-orbit testing, with its reflector deployment now completing after challenges posed by the spring eclipse season. As Viasat’s latest confirmation notes, F2’s final deployments are expected to complete over the coming weeks — meaning the company is, finally, beginning to see its multi-year, multi-billion-dollar satellite program deliver on its intended architecture.
ViaSat-3 F3 completing the constellation closes a strategic gap that has left Viasat without full global high-throughput coverage since the program began. The Asia-Pacific region — home to some of the world’s busiest aviation corridors, fastest-growing maritime trade routes, and largest underserved broadband markets — has been waiting for this capacity. As Abrahamian told Spaceflight Now, “We have a number of airline customers in the APAC region that are really anxious to get this capacity online so they can start serving their customers better.” When F3 enters service, the ViaSat-3 constellation will represent a genuinely global, high-capacity, dynamically flexible broadband network — something no single competitor can claim across every orbit regime.
The Broadband Wars: GEO Renaissance or Rearguard Action?
Here is where the analysis must become honest about the headwinds rather than merely celebrating the engineering achievement.
Viasat’s strategic context is brutal. Starlink has grown to more than two million subscribers, and its low-Earth orbit architecture delivers latency characteristics — typically below 40 milliseconds — that geostationary satellites, orbiting at altitudes 60 times higher, cannot physically replicate. The laws of physics impose a minimum round-trip delay of roughly 550 milliseconds on GEO communications; for most broadband applications this is acceptable, but for latency-sensitive traffic including video conferencing, interactive gaming, and real-time financial transactions, it represents a structural disadvantage no amount of throughput can fully compensate.
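The latency gap described above is pure geometry. A two-way exchange through a bent-pipe satellite crosses the user-to-satellite-to-gateway path twice — four legs in total — and dividing that distance by the speed of light gives the physical floor, before slant range and processing push GEO toward the ~550 milliseconds cited above. A quick sketch (the 550 km LEO shell altitude is an assumption for a typical Starlink orbit):

```python
C_KM_S = 299_792.458   # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786    # geostationary altitude
LEO_ALT_KM = 550       # assumed altitude of a typical LEO broadband shell

def min_round_trip_ms(altitude_km: float) -> float:
    # Four light-speed legs (up, down, up, down) for a full round trip
    # through a relay satellite, at nadir. Slant paths and switching
    # delays only add to this floor.
    return 4 * altitude_km / C_KM_S * 1000

print(f"GEO floor: {min_round_trip_ms(GEO_ALT_KM):.0f} ms")  # ~477 ms
print(f"LEO floor: {min_round_trip_ms(LEO_ALT_KM):.1f} ms")  # ~7.3 ms
```

The roughly 65-fold difference in altitude translates directly into the two orders of magnitude separating GEO and LEO propagation delay — a gap no amount of onboard processing can close.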
Amazon’s Project Kuiper presents a different competitive threat: well-capitalized, backed by Amazon Web Services infrastructure, and designed from the outset for the enterprise and consumer markets where Viasat has historically been strongest. Kuiper has struggled with deployment pace — the program had launched only 78 satellites by mid-2025, far behind the FCC’s schedule — but Amazon’s financial resources and strategic motivation to protect its cloud business by owning connectivity infrastructure represent a long-term competitive pressure that will not diminish.
And yet. It would be a mistake to write GEO satellites out of the connectivity story, for several reasons that the ViaSat-3 program crystallizes.
First, coverage economics. A single geostationary satellite at 35,786 kilometers altitude covers roughly one-third of the Earth’s surface. A LEO constellation providing equivalent global coverage requires hundreds to thousands of individual spacecraft, each with a design life measured in years rather than decades. The capital efficiency of GEO for serving large geographic areas — particularly over oceans and sparsely populated territories where ground infrastructure is limited — remains compelling. ViaSat-3 F3’s coverage of the Asia-Pacific region, from a single orbital position, encompasses an area that would require a significant fraction of a LEO constellation to replicate.
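The “roughly one-third” coverage figure can be checked from spherical geometry: the fraction of Earth’s surface that sees a satellite above a given elevation angle follows from the Earth-central angle to the edge of coverage. A sketch — the mean Earth radius and the 10-degree elevation mask are my assumptions, chosen as conventional values for usable broadband links:

```python
import math

R = 6371.0         # mean Earth radius, km
H_GEO = 35_786.0   # geostationary altitude, km

def coverage_fraction(alt_km: float, min_elev_deg: float = 0.0) -> float:
    """Fraction of Earth's surface that sees the satellite above min_elev_deg."""
    e = math.radians(min_elev_deg)
    # Earth-central angle from the sub-satellite point to the coverage edge.
    lam = math.acos((R / (R + alt_km)) * math.cos(e)) - e
    # Spherical-cap area as a fraction of the full sphere.
    return (1 - math.cos(lam)) / 2

print(f"{coverage_fraction(H_GEO):.0%}")      # geometric horizon: ~42%
print(f"{coverage_fraction(H_GEO, 10):.0%}")  # 10-degree mask: ~34%, roughly one-third
```

The geometric horizon from GEO takes in about 42 percent of the globe, but once a realistic elevation mask is applied the usable footprint shrinks to roughly a third — the figure cited above — which is still an area no practical number of LEO spacecraft can match from a single orbital slot.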
Second, the defense and government market. Viasat has historically derived substantial and growing revenue from U.S. and allied government customers who value the satellite’s dedicated capacity, security architecture, and the ability to integrate with existing military communication networks. ViaSat-3 F3 explicitly introduces “new forms of resilience for US and international government customers,” per Viasat’s official launch confirmation. The national security satellite broadband market values characteristics — including resistance to jamming, controlled access, and sovereign oversight — that a commercially operated LEO megaconstellation does not automatically provide.
Third, the multi-orbit future. The most sophisticated satellite operators today are not choosing between GEO and LEO. They are building hybrid architectures that leverage the throughput and geographic efficiency of GEO alongside the latency characteristics of LEO, using intelligent ground terminals and network management to route traffic dynamically. Viasat’s own NexusWave service integrates its GEO capacity with OneWeb’s LEO network for maritime customers. The ViaSat-3 constellation, as it reaches full operational capability, becomes a cornerstone of this hybrid strategy rather than a standalone product competing head-to-head against Starlink on latency.
The Economics of Reusability and the Launch Market’s Quiet Monopoly
Step back from the satellite payload for a moment and consider the launch vehicle. Falcon Heavy’s twelfth flight in eight years is, by any conventional measure, an extremely low flight rate for a rocket of this capability. Yet SpaceX has maintained a 100 percent mission success rate across all twelve flights, and the booster recovery on dual RTLS missions has become so routine that it barely registers as remarkable. This combination — extreme reliability at very low cadence — reflects a deliberate commercial strategy that deserves scrutiny.
There is, in practical terms, no alternative to Falcon Heavy in the current market for very large GEO satellites requiring maximum performance to orbit. ULA’s Delta IV Heavy was retired in 2024. Ariane 6, which was originally scheduled to launch ViaSat-3 F3 before development delays and the post-Ukraine reshuffling of launch manifest assignments moved the spacecraft to Falcon Heavy, offers an alternative for European and international customers — but it has struggled to achieve reliable launch cadence and its payload capacity to GTO falls below Falcon Heavy’s peak performance in expendable or partial-recovery configurations. Blue Origin’s New Glenn is operational but has experienced anomalies in early missions, limiting customer confidence. ULA’s Vulcan Centaur serves the national security market but does not offer the throw weight that Falcon Heavy provides.
This effectively means SpaceX holds a de facto monopoly on western heavy-lift launch services for the largest GEO satellites. That is not a comfortable position for an industry that values competitive tension to discipline pricing and incentivize innovation. Viasat, to its credit, originally sought Ariane 6 specifically to maintain European launch options and reduce dependence on SpaceX. The inability of European industry to deliver that alternative on schedule — a consequence of years of chronic underinvestment in European launch infrastructure and the disruption caused by Russia’s elimination from commercial launch markets after 2022 — left Viasat with no practical choice but to return to SpaceX.
The concentration of launch capability matters for industrial policy reasons as much as commercial ones. NASA’s decision to launch Europa Clipper on Falcon Heavy, saving an estimated $2 billion compared to the Space Launch System, was fiscally prudent but also highlighted how completely the U.S. government’s civil launch needs have become dependent on a single private company. When that company is also developing Starlink — a direct commercial competitor to satellite operators like Viasat — the dependency creates tensions that regulators and policymakers are only beginning to grapple with seriously.
Critical Perspectives: Concentration, Fragility, and the Starship Shadow
Any honest assessment of today’s launch must acknowledge the risks embedded in the picture it presents.
Market concentration is the most obvious concern. SpaceX’s dominance of the launch market — executing approximately half of all orbital launches worldwide in recent years, including virtually all U.S. commercial and government heavy lift — is without precedent in the space age. The company’s technical excellence is not in question. But technical excellence is not a sufficient safeguard against the risks that concentration creates: single points of failure in supply chain, the potential for pricing power to increase as competition diminishes, and the strategic complications that arise when a launch provider’s commercial interests are entangled with those of its customers. The European Space Agency and its member states have been reckoning with these consequences since Ariane 6 fell behind schedule; the U.S. government has been slower to act.
The ViaSat-3 F1 lesson is also worth carrying forward. A single antenna deployment anomaly on a satellite that cost hundreds of millions of dollars and several years to build reduced its throughput to a fraction of its designed capacity. For programs predicated on multi-terabit capacity, this kind of single-point failure can be financially devastating. The space insurance market absorbs some of this risk, but it cannot absorb the strategic cost of arriving at the GEO broadband market years late and at a fraction of expected capacity. The resilience of the ViaSat-3 program — its ability to absorb the F1 setback and continue toward F3 launch — reflects the financial depth that came with the Inmarsat acquisition. Smaller satellite operators would not survive an equivalent anomaly.
The Starship era represents a more fundamental disruption lurking behind today’s Falcon Heavy mission. SpaceX’s next-generation launch vehicle, still in flight testing, promises to carry payloads to low Earth orbit measured not in tens of metric tons but in hundreds — in a fully reusable configuration. When Starship reaches operational status, it will not merely compete with Falcon Heavy; it will displace it for most missions, while simultaneously enabling satellite constellation architectures of a scale and cost structure that will make today’s GEO programs look like the previous generation of space infrastructure — necessary, valuable, and eventually superseded.
The timing of ViaSat-3 F3 thus acquires a particular resonance. This spacecraft will likely remain in commercial operation for fifteen years or longer. By the time it retires from service in the early 2040s, the satellite broadband market will look almost unrecognizable compared to what we see today. The operators that survive will be those who have built the most flexible, multi-orbit, software-defined network architectures — and who have done so without betting so heavily on a single generation of hardware that they cannot pivot when the next generation arrives.
The Geopolitics of Coverage: Who Gets Connected, and Who Decides
Zoom out one more level, and the ViaSat-3 F3 launch carries implications that extend beyond corporate strategy into international relations and development economics.
The Asia-Pacific region is the world’s most economically dynamic. It is also the region with some of the most pronounced disparities in connectivity. The aviation market — Viasat’s primary immediate revenue target in the region — connects the affluent and the mobile. But the underlying capacity infrastructure that ViaSat-3 F3 provides will also serve maritime vessels, island communities, remote enterprise sites, and eventually, through service expansion, populations in some of the world’s most connectivity-starved areas.
This is not altruism on Viasat’s part; it is market expansion. But the geopolitical dimension is real. When U.S.-headquartered satellite operators extend high-throughput, high-reliability broadband coverage across the South China Sea, the Pacific Islands, and the maritime corridors of Southeast Asia, they are making infrastructure decisions that have strategic implications. The race between American and Chinese satellite operators for coverage of the Indo-Pacific region is not merely commercial — it is a contest over which country’s technical standards, legal frameworks, and network architectures become the default infrastructure for an economically and militarily critical region.
China’s own ambitions in this domain are serious and well-funded. China Satellite Network Group, the state-owned entity overseeing the Guowang LEO constellation, has filed for orbital slots that would place it in direct competition with Starlink and other western operators for limited spectrum resources. The completion of Viasat’s GEO coverage over the Asia-Pacific, combined with ongoing LEO buildout by U.S. operators, represents a concrete broadening of American-aligned connectivity infrastructure across a region where that presence matters.
Conclusion: The Weight of a Rare Launch
Eighteen months of quiet, and then: twenty-seven engines, 5.1 million pounds of thrust, a spectacular double booster landing, and a six-ton spacecraft on its way to geostationary orbit above the Pacific. There is something fitting about the rarity of Falcon Heavy’s flight pace. Each launch carries more weight — literal and figurative — than the routine. Each one lands in a market landscape that has shifted since the last, and must be interpreted against that shifting context.
Today’s mission completes what Viasat set out to build. Whether that completion arrives soon enough, at sufficient capacity, and at competitive enough terms to hold meaningful market share against the LEO operators is the question that will determine the company’s next decade. The honest answer is: probably, in some segments; probably not, in others. The in-flight connectivity and government markets will sustain meaningful GEO operators for the foreseeable future. The mass consumer broadband market — where Starlink and eventually Kuiper will compete on price and latency — is likely beyond recovery for GEO-only strategies.
But the more durable insight from watching Falcon Heavy lift off today is about the infrastructure of ambition. The rocket that launched a Tesla Roadster toward Mars for a demo flight in 2018 has, in twelve missions, launched classified military satellites, a spacecraft headed for Jupiter, weather observation platforms critical for hurricane forecasting, and now the final piece of the first commercially deployed global multi-terabit broadband constellation. It has done so at a fraction of what its predecessors cost, with a booster recovery system that turns what used to be expensive expendable stages into reusable assets.
That is the story the launch market keeps telling, in different configurations and with different payloads: that the economics of access to space have been permanently disrupted, that the disruption is still accelerating, and that the satellites we put up today will operate in a world the launch industry of a decade ago could not have anticipated. ViaSat-3 F3 will look down from 35,786 kilometers at a world connected in ways its designers planned for, and ways they did not. That is, perhaps, the most precise definition of infrastructure worth building.
Analysis
San Francisco, AI Capital of the World, Is an Economic Laggard
Artificial intelligence is creating unprecedented wealth at unprecedented speed. Its heartland is not.
On a drizzly Tuesday morning in the Mission District, a billboard advertising a generative AI platform — “Think Faster. Build Smarter. Scale Infinitely.” — towers over a sidewalk encampment where a dozen tents have been a fixture since 2022. Two blocks south, a gleaming co-working space charges $900 a month for a hot desk. Two blocks north, the food bank queue stretches past a mural of César Chávez. This is San Francisco in the age of artificial intelligence: a city simultaneously at the vanguard of history and strangely marooned by it.
The numbers are, by any reckoning, staggering. OpenAI is now valued at $300 billion, a figure that exceeds the GDP of most sovereign nations. Anthropic, its chief rival and fellow San Francisco resident, has attracted a cumulative $12 billion-plus in investment from Amazon and Google alone. Together with Databricks, Scale AI, and more than 90 other Bay Area AI unicorns — firms valued privately at over $1 billion — the region now hosts what economists at the Federal Reserve Bank of San Francisco have described as the most concentrated accumulation of venture-backed artificial intelligence capital in modern economic history. The Bay Area accounts for well over 60 percent of all U.S. AI venture investment, a ratio that has tightened rather than loosened as the boom has matured.
And yet San Francisco, the city itself, is struggling. Not in the polite way that prosperous cities occasionally describe mild slowdowns, but in measurable, sometimes painful ways that resist easy dismissal. Its office vacancy rate has hovered near 35 percent — the highest of any major American city — even as AI firms sign glossy leases in South of Market. The San Francisco Controller’s Office has reported persistent year-over-year declines in sales tax revenues from commercial corridors including the Tenderloin, Civic Center, and parts of SoMa. Overall city payroll employment remains below its 2019 peak. The city’s unemployment rate, which reached 6.1 percent in early 2024, has normalized but remains structurally elevated by the standards of the surrounding Bay Area. A Bureau of Labor Statistics analysis of metropolitan employment trends shows San Francisco County adding technology jobs at a rate significantly slower than Austin, Seattle, and even smaller metros like Raleigh-Durham — cities that lack anything approaching San Francisco’s density of AI valuation.
The paradox is not a curiosity. It is, I would argue, one of the defining economic puzzles of our era, and its resolution has profound consequences for how policymakers, urban planners, and civic leaders worldwide think about the geography of innovation.
The Boom That Doesn’t Boom
To understand why the AI wealth explosion has not translated into broad San Francisco prosperity, it helps to contrast the current moment with earlier technology cycles. The dot-com era of the late 1990s was, economically speaking, a mess — but it was a democratically distributed mess. Web startups hired copywriters, office managers, receptionists, catering staff, and building contractors in droves. The city’s employment base swelled. Restaurants in SoMa ran three seatings on weeknights. The construction crane became the defining civic symbol. When the crash came in 2001, it wiped out paper fortunes but had generated real intermediate employment across a wide swath of the local economy.
The social media boom of the 2010s was more capital-efficient, but its infrastructure still required armies of content moderators, trust and safety reviewers, logistics workers, and a sprawling class of middle-income tech employees — product managers, UX researchers, data analysts — who bought homes in Bernal Heights and spent meaningfully in neighborhood economies. As FRBSF economists noted at the time, each technology job in the Bay Area generated approximately five additional local jobs through multiplier effects: the phenomenon economists call the “local multiplier.”
The AI boom is structurally different, and that difference is not accidental. Frontier AI development is, by design, extraordinarily capital-intensive and astonishingly labor-light relative to the valuations involved. OpenAI employs roughly 3,500 people globally — a workforce smaller than many mid-tier law firms — while commanding a valuation that exceeds ExxonMobil. Anthropic employs fewer than 1,000. The economics are not those of the dot-com era, with its profligate hiring; they are closer to those of the oil industry, where massive capital pools concentrate wealth among small technical elites and equity holders while the multiplier effects to broader communities remain stubbornly thin. “These are platform technologies, not employment technologies,” as one prominent Bay Area economist, who requested not to be named due to relationships with venture-backed firms, put it to me. “The value accrues to the equity table. The city’s tax base doesn’t feel it the same way.”
The K-Shaped City
The bifurcation this creates has given rise to what urban economists increasingly call the “K-shaped” San Francisco — a local variant of the macroeconomic phenomenon that gained currency during the pandemic’s uneven recovery. At the top of the K, AI founders, early employees with equity, and venture capitalists are accumulating wealth at rates with few peacetime precedents. Median home prices in Pacific Heights and Noe Valley have crossed $2.2 million, sustained not by broad middle-class demand but by a thin layer of extraordinary earners bidding aggressively against one another for a constrained housing stock. A three-bedroom in the Inner Sunset now draws multiple offers above $1.8 million, primarily from engineers with restricted stock units in companies most Americans have never heard of.
At the bottom of the K, conditions are considerably bleaker. San Francisco’s homeless population — estimated by the 2024 Point-in-Time Count at over 7,000 individuals unsheltered on any given night — has not declined meaningfully despite years of city expenditure exceeding $700 million annually on homelessness programs. The San Francisco Unified School District is cutting programs amid declining enrollment, as middle-class families — the teachers, nurses, civil servants, and small business owners who once comprised the city’s civic backbone — are displaced to Contra Costa County, Sacramento, or out of the state entirely. The Mission District, historically the city’s Latino working-class heart, has seen commercial vacancy rates rise and longtime restaurants shutter, replaced by AI-adjacent amenity businesses — cold-brew concept cafés, biohacking studios, prompt-engineering bootcamps — that cater to a narrow professional stratum.
This is not merely a humanitarian concern. It is an economic one. Cities function as ecosystems, and the systematic displacement of intermediate-income households corrodes civic infrastructure in ways that eventually undermine even the elite economy they house. When a Financial Times analysis of U.S. innovation hubs found that cities with the highest income inequality consistently show lower rates of long-run per capita GDP growth, San Francisco’s trajectory begins to look less like a triumph of creative destruction and more like a case study in what economists call “extractive urbanism.”
The Geography of the New Boom
There is a further wrinkle that standard economic analysis tends to understate: the AI boom is not happening in San Francisco in the way that previous cycles were. It is happening near San Francisco, in ways that direct economic activity away from the city proper.
OpenAI’s headquarters are in the Mission District, yes — but its massive new data center investments are in Texas and Iowa, where land is cheap and power is abundant. Anthropic’s principal offices are in San Francisco, but its computational infrastructure runs on AWS servers in Northern Virginia. The physical apparatus of AI — the chips, the cooling systems, the high-voltage power grids — is deployed wherever real estate and regulatory conditions are most favorable, which is almost never an expensive American coastal city. NVIDIA, the company that has perhaps done more than any other to make the AI boom possible, is headquartered in Santa Clara. Its revenue — now exceeding $130 billion annually — flows to shareholders and employees distributed globally, with a relatively modest footprint in San Francisco’s commercial property or retail tax base.
Meanwhile, within the Bay Area itself, the center of gravity of AI office activity has shifted from the downtown Financial District — where vacancy remains cavernous — toward specific corridors in SoMa, Mission Bay, and increasingly to the Peninsula cities of Palo Alto and Menlo Park. This is consequential because San Francisco’s tax structure is highly sensitive to downtown commercial activity. The city’s gross receipts and payroll taxes, which generate a substantial portion of the general fund, correlate strongly with downtown office utilization. A CBRE market report from early 2026 found that while AI firms account for the majority of new San Francisco office leases by square footage, average lease sizes are modest — reflecting smaller headcount per dollar of valuation than any previous technology cycle — and many are structured as flexible or short-term arrangements that generate lower assessed values.
The Talent Paradox
The AI boom has also introduced a talent paradox that complicates simplistic narratives about technology creating broadly shared prosperity. AI frontier labs do not hire broadly — they hire extraordinarily selectively. The competition for PhD-level machine learning researchers has driven starting compensation packages — salary, signing bonus, and equity — to levels that can exceed $1 million annually at OpenAI and Anthropic. These are not the figures of a democratized labor market. They represent the concentration of enormous economic rents into an extremely small professional cohort, most of whom were educated at a handful of elite universities and many of whom are not originally from San Francisco or even the United States.
For local workers without specialized AI credentials, the labor market effects are mixed at best and negative at worst. Research from the Brookings Institution suggests that AI automation is already displacing routine cognitive tasks in the Bay Area — in law, in finance, in customer service — faster than new AI-specific employment is being created for non-specialist workers. A legal secretary in a San Francisco firm, a junior financial analyst at a wealth management boutique, a graphic designer at a marketing agency: these roles are being restructured or eliminated at a pace that the AI boom’s most enthusiastic advocates rarely acknowledge. The net employment effect locally may be, for now, close to zero for workers without advanced technical qualifications — and negative in some sectors.
Policy Implications and the Risk of Imitation
San Francisco’s predicament carries urgent implications for the dozens of cities and regional governments worldwide that are racing to position themselves as “AI hubs” — from London’s Silicon Roundabout to Seoul’s Digital Innovation District, from Dubai’s AI Quarter to Paris’s Station F. The implicit logic of these initiatives is that concentrating AI capital and talent generates broad local prosperity. San Francisco’s experience suggests the causality is considerably weaker than assumed.
What might more inclusive AI urbanism look like? Several interventions merit serious consideration. First, taxation structures designed for an earlier technology era may be poorly calibrated for AI economics. A gross receipts tax that applies equally to a labor-intensive restaurant and a capital-intensive AI lab captures very different slices of economic activity. Policymakers in San Francisco — and elsewhere — should explore mechanisms that capture a larger share of the capital gains and equity appreciation generated by AI firms, rather than relying primarily on payroll and commercial-activity taxes to which AI firms contribute only modestly.
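The calibration problem can be made concrete with a toy calculation. All figures below — the flat 1% rate, the firm revenues, the valuation gain — are hypothetical assumptions chosen for illustration, not San Francisco’s actual tax schedule, which varies by sector and revenue bracket. The point is structural: a receipts-based tax touches nearly all of a restaurant’s value creation but almost none of an AI lab’s, because the lab’s economic gain accrues mostly as equity appreciation.

```python
# Toy comparison of how much value creation a gross receipts tax captures.
# HYPOTHETICAL figures throughout; the 1% flat rate is an assumption,
# not an actual municipal tax schedule.

def gross_receipts_tax(local_receipts: float, rate: float = 0.01) -> float:
    """Tax owed on revenue booked within the city."""
    return local_receipts * rate

# Labor-intensive restaurant: essentially all value shows up as local receipts.
restaurant_revenue = 3_000_000
restaurant_tax = gross_receipts_tax(restaurant_revenue)

# Capital-intensive AI lab: modest local receipts, but most of the economic
# gain is equity appreciation, which a receipts tax never touches.
lab_local_receipts = 50_000_000
lab_valuation_gain = 5_000_000_000
lab_tax = gross_receipts_tax(lab_local_receipts)

restaurant_share = restaurant_tax / restaurant_revenue
lab_share = lab_tax / (lab_local_receipts + lab_valuation_gain)

print(f"restaurant: tax as share of value created = {restaurant_share:.3%}")
print(f"AI lab:     tax as share of value created = {lab_share:.3%}")
```

Under these assumed numbers, the restaurant surrenders a full 1% of the value it creates while the lab surrenders roughly one-hundredth of that share — which is the mismatch the paragraph above describes.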
Second, housing supply is not a peripheral concern. The bifurcated real estate market that AI wealth is intensifying is squeezing out the intermediate-income households whose presence makes a city function. Serious upzoning — not the incrementalist versions that California has periodically attempted — combined with mandatory inclusionary requirements calibrated to actual construction costs, is an economic necessity, not merely a social preference.
Third, there is a role for proactive investment in AI-adjacent skills among existing residents. The notion that AI’s benefits will trickle down automatically is not supported by San Francisco’s data. Active reskilling programs, community college partnerships with AI firms, and apprenticeship models — of the kind that Germany’s Fraunhofer Institutes have pioneered for industrial technology — represent a more deliberate approach to inclusive AI growth.
The Longer View
It would be premature to conclude that San Francisco’s current economic weakness is permanent. Technology cycles are long, and second-order effects take time to materialize. The dot-com crash of 2001 looked, in the moment, like an economic catastrophe from which the city might never recover. A decade later, the mobile and social media boom had transformed San Francisco into one of the most dynamic urban economies in the world.
It is possible — perhaps even probable — that AI will eventually generate broader employment effects as the technology matures, as AI-native businesses proliferate beyond the frontier labs, and as demand for AI-enabled products and services creates new categories of work that are difficult to foresee today. Scholars of technology and labor, from the economic historian Joel Mokyr to the labor economist David Autor, have consistently found that transformative technologies ultimately create more employment than they destroy, even if the transition imposes severe distributional costs.
But the transition is the point. San Francisco is living through the transition right now, and its current management of that transition — the housing dysfunction, the displacement of intermediate-income households, the failure of AI wealth to flow through the city’s fiscal architecture — will determine whether the city emerges from this moment as a model or a cautionary tale.
The AI billboard in the Mission District promises to think faster, build smarter, scale infinitely. Below it, a man in a faded blue sleeping bag stirs as the morning fog burns off the Bay. San Francisco has always been a city of extraordinary distances between aspiration and reality. The AI boom has simply made those distances more visible, and the urgency of closing them more acute.
The world is watching. San Francisco, for its own sake and for the sake of every city that hopes to follow its model, would do well to notice.
