
Analysis

Walmart’s New Streaming Stick Is the Quiet Disruption Big Tech Didn’t See Coming

The Onn 4K Streaming Stick doesn’t arrive with fanfare. It doesn’t need it.

There were no press invites. No breathless product launches livestreamed to a million viewers. No carefully rehearsed executives in black turtlenecks. Sometime in early April 2026, a Reddit user in Texas walked into their local Walmart, spotted a compact HDMI dongle on the shelf — the Onn 4K Streaming Device — and bought it for roughly $30. Within days, the post had gone viral in streaming enthusiast circles. By week two, benchmark sites had torn it apart. By week three, analysts were quietly asking a question that felt almost impertinent: Has Walmart just upended the streaming hardware market without saying a single word about it?

The answer, this columnist argues, is essentially yes — and the implications run deeper than silicon and software.

Walmart's new streaming stick is not a toy. It is not a charity product or a loss leader dressed in plastic. It is, beneath its understated exterior, a pointed statement about who owns the future of home entertainment, how accessible that future should be, and whether Silicon Valley’s approach to streaming hardware — iterative, incremental, and increasingly expensive — is starting to run out of road.

The Spec Sheet That Should Make Roku Nervous

Let’s begin with the basics, because the basics are where this story gets interesting.

The Onn 4K Streaming Device (2026) — Walmart’s first-ever 4K streaming stick, as opposed to its existing set-top boxes — runs Google TV, supports 4K Ultra HD resolution, decodes AV1, delivers Dolby Atmos audio, and ships with a voice remote that puts Google’s Gemini assistant at the tip of your tongue. Under the hood, it is powered by a Realtek RTD1325 processor with a quad-core 1.7 GHz ARM Cortex-A55 CPU and an ARM Mali-G57 GPU, paired with 2GB of RAM and 8GB of storage. Connectivity is handled via dual-band Wi-Fi 5 and Bluetooth 5.2. Power and accessories run through a single USB-C port — a welcome upgrade from the Micro-USB common on budget devices of a generation ago.

The price? Approximately $19.88 to $30, depending on store location and timing.

Compare that to its nearest competitors. The Amazon Fire TV Stick 4K Plus retails at roughly $50 and, in benchmark testing conducted by AFTVNews, outperforms the Onn 4K Stick by approximately 15 percent in raw processing power. The Roku Streaming Stick 4K sits at a similar price tier. And Google’s own Chromecast successor, the Google TV Streamer, costs $79.99 — a device that the newer, pricier Onn 4K Pro (2026) reportedly bests in benchmark performance at roughly three-quarters of the price.

To be clear, the Onn 4K Stick is not the fastest device on the market. It trades raw horsepower for something arguably more valuable in 2026: radical affordability at 4K capability. For tens of millions of households who want to upgrade an aging 4K television without committing to a $50–$80 streaming device, this stick represents a genuinely new entry point.

The Unremarkable Launch That Says Everything

The way Walmart launched — or rather, didn’t launch — the Onn 4K Streaming Stick is itself a lesson in retail philosophy.

There was no announcement. No coordinated press push. Units simply appeared in select stores, were purchased by curious early adopters, photographed, shared on Reddit and YouTube, stress-tested by enthusiast communities, and covered by tech outlets weeks before Walmart acknowledged the product’s existence online. As of late April 2026, the company’s website listings for the device have only recently gone live for most users, and a formal launch is still pending in many markets.

This is not an accident. Walmart has a documented pattern of soft-launching Onn devices — the 4K Plus, the previous 4K Pro — in exactly this manner. But the effect goes beyond mere supply chain staggering. What Walmart achieves through this approach is something more valuable in the attention economy: organic credibility. When a product is found rather than marketed to you, when enthusiasts dissect it of their own volition, when the first reviews come from real buyers rather than brand ambassadors, the resulting coverage is qualitatively different. It reads as discovery. It feels like truth.

For a company that has struggled — as all major retailers have — to position itself as a technology innovator rather than a discount warehouse, that credibility matters enormously.

The Real Competition: Not Amazon or Roku, But the Cost of Streaming Itself

Here is the context that most reviews of the Onn 4K Stick have missed, buried as they are in chipset comparisons and frame-rate analyses.

The average American household now pays more than $100 per month in combined streaming subscriptions. Between Netflix, Disney+, Max, Peacock, Paramount+, Apple TV+, and the array of sports streaming services that have migrated from traditional cable — the economics of cord-cutting no longer deliver the savings they once promised. The great unbundling of cable television, celebrated as a consumer liberation a decade ago, has quietly re-bundled itself at roughly the same price, minus the sports and local news that many viewers actually want.

In this context, hardware costs matter more than they used to. When you are already paying $120 a month in subscriptions, the difference between a $30 streaming stick and an $80 one isn’t trivial. It’s nearly two weeks of the household’s entire streaming bill. It’s a family dinner. It’s the kind of money that is genuinely meaningful to the median American household — whose real income has grown modestly while its entertainment bill has expanded considerably.
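For readers who want the arithmetic behind the paragraph above spelled out, here is a back-of-the-envelope sketch using the article's own figures ($120/month in subscriptions, $30 versus $80 hardware); the first-year-budget framing is this sketch's own framing, not a number from the article:

```python
# Back-of-envelope arithmetic using the article's figures.
MONTHLY_SUBSCRIPTIONS = 120.00   # combined streaming subscriptions, $/month
CHEAP_STICK = 30.00              # Onn 4K Streaming Stick, approx. price
PREMIUM_STICK = 80.00            # premium-tier streamer, approx. price

annual_subs = MONTHLY_SUBSCRIPTIONS * 12
hardware_gap = PREMIUM_STICK - CHEAP_STICK
# How many days of the household's streaming bill the price gap represents
days_of_streaming = hardware_gap / (MONTHLY_SUBSCRIPTIONS / 30)

print(f"Annual subscription spend: ${annual_subs:,.0f}")      # $1,440
print(f"Hardware price gap:        ${hardware_gap:.0f}")      # $50
print(f"Equivalent streaming days: {days_of_streaming:.1f}")  # 12.5 days
print(f"Cheap stick, share of first-year budget:   "
      f"{CHEAP_STICK / (annual_subs + CHEAP_STICK):.1%}")
print(f"Premium stick, share of first-year budget: "
      f"{PREMIUM_STICK / (annual_subs + PREMIUM_STICK):.1%}")
```

The point the numbers make: either stick is a rounding error next to the subscription bill, which is exactly why the $50 gap between them is the whole decision.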

Walmart understands this arithmetic better than almost any other technology distributor on earth. Its core customer — middle-income, value-conscious, deeply embedded in the retailer’s ecosystem through Walmart+ — is precisely the person for whom a $30 4K streaming stick isn’t a compromise. It’s the right choice.

This is why the Onn 4K Streaming Device should not be read as a product primarily competing with the Fire TV Stick or Roku. It is, at a deeper level, competing with the psychological friction of streaming itself — the sense that premium home entertainment requires ongoing premium investment. It argues, in silicon and software, that it doesn’t.

Google TV’s Unlikely Beneficiary

There is a secondary story here, equally significant, about the fate of Google TV as a platform.

Google’s own streaming hardware ambitions have had a complicated decade. The original Chromecast redefined how people thought about wireless media casting. The Chromecast with Google TV 4K, launched in 2020, was a genuine breakthrough. But subsequent iterations have been incremental, overpriced relative to their performance, and undermined by the quiet sidelining of the Chromecast brand itself — which Google has, for all practical purposes, discontinued as a named product line.

Into this vacuum have stepped third-party manufacturers running Google TV. And of those manufacturers, Walmart’s Onn brand has become, arguably, the most consequential champion of the platform in the United States. The new Onn 4K Stick ships with Gemini pre-installed as the default AI assistant — positioning Google’s latest AI offering not on a Google-branded device, but on a $30 Walmart dongle. The irony is sharp, and entirely intentional on Google’s part: it needs distribution, and Walmart provides it at a scale no tech company can match organically.

Google TV now reaches more homes through Onn than through its own hardware. That is a remarkable state of affairs, and it speaks to the fundamental restructuring of the streaming platform wars — where the battle is no longer primarily about hardware design but about operating system reach and data access.

For Google, every Onn device activated is a Google account signed in, a voice search conducted, a YouTube Premium promotion delivered, a Google Play purchase made. The economics of platform distribution have never been clearer: it is better to be the operating system on a $30 device in 50 million homes than the premium hardware in 5 million living rooms.

What the Onn 4K Stick Does Well — and Where It Falls Short

Balanced analysis demands honesty. The Onn 4K Streaming Device has real strengths, but also real limitations worth examining carefully before purchase.

Strengths:

  • Price-to-feature ratio: At $30, the combination of 4K output, Dolby Atmos, AV1 decoding, Google TV, and Gemini assistant is genuinely difficult to match in the market.
  • Google TV ecosystem: Access to the Google Play Store, 700,000+ movies and shows, 10,000+ apps, and 1,700+ free live TV channels — all unified under Google TV’s content-aggregation interface — represents a vast and well-maintained ecosystem.
  • USB-C power: The upgrade from Micro-USB is functionally significant; USB-C is universal, durable, and future-proof at this price point.
  • Gemini integration: AI-powered search and discovery on a budget device is a meaningful differentiator as voice control becomes increasingly central to how viewers navigate fragmented content libraries.
  • AV1 decoding: Support for this next-generation codec, used by YouTube, Netflix, and others for superior compression efficiency, suggests the device is built with at least some longevity in mind.

Weaknesses and Caveats:

  • Benchmark performance gap: As AFTVNews benchmarking confirms, the Onn 4K Stick trails the Fire TV Stick 4K Plus by approximately 15 percent in raw processing power, and the Xiaomi TV Stick 4K by around 27 percent. For casual viewers, this gap will be invisible. For those who run multiple apps simultaneously or demand instantaneous UI response, it may be perceptible.
  • No Dolby Vision: Unlike the Onn 4K Pro, the stick variant does not appear to support Dolby Vision HDR — a meaningful omission for viewers with Dolby Vision-capable televisions who wish to see colour at its most accurate.
  • Limited storage: 8GB is functional but not generous. Aggressive app installers will feel the constraint.
  • Build quality unknowns: Walmart has not publicized third-party quality certification data, and early user reports — while generally positive — come from a limited sample. Long-term durability remains an open question.
  • Software update longevity: This is, for this analyst, the most significant unknown. Budget devices from retail brands have a mixed history of OS support. Whether Walmart commits to multi-year Android security patches and Google TV updates for the Onn 4K Stick will determine its value proposition considerably.
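The benchmark caveats above can be turned into a rough price-performance comparison. The relative figures come from the article (the Onn stick trails the Fire TV Stick 4K Plus by roughly 15 percent); normalizing the Onn stick to an index of 100 is this sketch's own assumption, and the result should be read as directional, not precise:

```python
# Rough price-performance sketch. Performance is a relative index with the
# Onn stick normalized to 100 (assumption); the ~15% gap is from the article.
devices = {
    # name: (approx. price in USD, relative performance index)
    "Onn 4K Streaming Stick": (30.0, 100.0),
    "Fire TV Stick 4K Plus":  (50.0, 115.0),
}

for name, (price, perf) in devices.items():
    print(f"{name}: {perf / price:.2f} performance points per dollar")
```

Run the numbers and the Onn stick delivers on the order of 40–45 percent more performance per dollar, despite being the slower device in absolute terms. That is the shape of the value argument.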

A Comparison Worth Making

| Device | Price (approx.) | Resolution | Dolby Vision | Dolby Atmos | RAM | Storage | Platform |
|---|---|---|---|---|---|---|---|
| Onn 4K Streaming Stick (2026) | ~$30 | 4K UHD | No | Yes | 2GB | 8GB | Google TV |
| Amazon Fire TV Stick 4K Plus | ~$50 | 4K UHD | Yes | Yes | 2GB | 8GB | Fire OS |
| Roku Streaming Stick 4K | ~$50 | 4K UHD | Yes | Yes | n/a | n/a | Roku OS |
| Google TV Streamer | ~$80 | 4K UHD | Yes | Yes | 4GB | 32GB | Google TV |
| Onn 4K Pro (2026) | ~$60 | 4K UHD | Yes | Yes | 3GB | 32GB | Google TV |

The table is instructive. At $30, the Onn 4K Stick competes meaningfully — even if not identically — with devices costing significantly more. For first-time 4K upgraders, secondary television rooms, student apartments, or households prioritizing subscription costs over hardware investment, the calculus tilts clearly in Onn’s favour.

The Walmart Advantage: Distribution as Strategy

There is a dimension to this story that is almost never discussed in gadget-focused coverage: the strategic significance of Walmart’s physical retail footprint.

Walmart operates approximately 4,600 stores in the United States. It reaches more American communities — including rural towns where broadband infrastructure and consumer electronics options are limited — than any other retailer on earth. When Walmart puts the Onn 4K Stick on its shelves, it doesn’t just sell a product. It introduces the possibility of 4K streaming to communities that may have no Best Buy, no Target with a substantial electronics section, and whose residents may not routinely shop technology on Amazon.

This is the dimension that gives Walmart’s new streaming stick genuine cultural significance. In an era when the digital divide — between households with rich, full-spectrum media access and those without — remains a live and serious challenge, a $30 4K streaming device distributed through 4,600 stores is not merely a consumer product. It is infrastructure, of a kind. Not perfect infrastructure, not a complete solution to the access problem, but a meaningful step in the direction of equalization.

Entertainment, particularly in times of economic stress, functions as more than leisure. It is social cohesion. It is cultural participation. It is, in households with children, an educational resource. The democratization of access to it — even imperfectly, even with caveats — matters in ways that benchmark scores cannot quantify.

The Broader Reckoning for Streaming Hardware

The Onn 4K Stick’s emergence coincides with what appears to be a genuine inflection point in the streaming hardware market.

Amazon’s Fire TV has slowly drifted away from open Android in favour of its proprietary, Android-derived Fire OS — a path that has constrained sideloading capabilities and made the platform more walled than it was in its earlier, more open years. Roku, for all its interface elegance, operates a closed ecosystem with limited customization. Google’s own hardware ambitions, as noted, have stalled. Apple TV 4K remains premium, powerful, and priced accordingly for a market segment that is not expanding.

Into this landscape comes an open, Google TV-powered device, sold through the world’s largest retailer, at a price point that functionally removes cost as a barrier to 4K streaming adoption. That is a meaningful competitive event — not merely a product launch.

The incumbents are not blind to this. Amazon’s Fire TV team will have seen the benchmark numbers. Roku’s strategists will have noted the price. But the structural advantage Walmart possesses — its supply chain, its store network, its customer relationships, and its willingness to use hardware as a tool of ecosystem building rather than a profit centre in itself — is not easily replicated by companies whose hardware divisions are expected to be standalone businesses.

The Question No One Is Asking Yet

As this columnist writes, the Onn 4K Streaming Stick is still making its way to store shelves nationwide, its official launch yet to be formally announced. In a few weeks, it will be reviewed comprehensively, benchmarked exhaustively, and discussed at length on every major technology platform.

Most of that coverage will focus on the right questions: Is the picture quality good? Does the remote feel cheap? Will it handle Netflix 4K without buffering?

But the question worth sitting with — the one that this particular product, at this particular moment, forces into view — is a different one entirely.

What does it mean when the most consequential advancement in the democratization of premium streaming comes not from a Silicon Valley lab or a Big Tech product event, but from the electronics shelf of a big-box retailer, launched without a press release, discovered by a Reddit user in Texas?

It means, perhaps, that the future of accessible technology has always been less about innovation and more about distribution. Less about the bleeding edge and more about the trailing hundreds of millions. Less about who can make the most sophisticated device and more about who can make a good-enough device available to everyone, everywhere, at a price that asks nothing of them beyond showing up.

Walmart has been doing that for more than sixty years. The Onn 4K Streaming Stick is simply the latest, most quietly radical expression of it.

The streaming wars, it turns out, may not be won by the company with the best algorithm or the most exclusive content. They may be won by the company with the most parking spaces.



Analysis

The Giant Stirs Again: How Falcon Heavy’s Return and the ViaSat-3 Constellation Signal a New Chapter in the Satellite Broadband Wars

SpaceX’s Falcon Heavy returns to flight on April 27, 2026, launching the ViaSat-3 F3 Asia-Pacific satellite from LC-39A. Only its 12th mission in history, this rare flight completes Viasat’s global broadband constellation and reshapes the GEO vs. LEO satellite broadband competition. Here’s what it means for the new space economy.

At 10:21 a.m. Eastern Time on Monday, April 27, 2026, the most powerful operational commercial rocket on Earth — and one of its rarest fliers — ignites its twenty-seven Merlin engines simultaneously at Kennedy Space Center’s storied Launch Complex 39A. The ground shakes the way the ground is supposed to shake near a rocket: not from a single source, but from a column of fire wide enough to seem geological. Falcon Heavy’s triple-core frame, generating more than 5.1 million pounds of thrust, clears the tower in a wall of sound. Then, minutes later, comes the signature spectacle — two side boosters separating and wheeling back toward Cape Canaveral in precise, mirror-image arcs, landing on Landing Zone 2 and Landing Zone 40 with the kind of choreography that still, somehow, feels impossible. The central core flies on, burns everything it has left, and falls into the Atlantic. Its sacrifice is the price of lofting a 6.6-metric-ton satellite to geostationary transfer orbit.

This is Falcon Heavy’s twelfth flight in its eight-year operational life. Twelve. The number is almost deliberately understated for a vehicle of this capability. And that rarity — the extended eighteen-month hiatus since its previous mission, NASA’s Europa Clipper in October 2024 — is itself a story worth telling, because it reveals as much about where the commercial space economy is heading as the launch it frames.

A Rocket Reserved for Giants

Understanding why Falcon Heavy flies so seldom requires understanding what it is and what it isn’t. Falcon Heavy is not SpaceX’s everyday workhorse; that role belongs to Falcon 9, which has become perhaps the most routinely astonishing piece of engineering in contemporary aerospace history, completing an extraordinary 165 launches in 2025 alone. Falcon Heavy is something else: a vehicle summoned for missions too massive, too energetic, or too classified for a standard Falcon 9 to handle. It is the draft horse you bring out when the load demands it and put back in the barn when ordinary work resumes.

At a listed price of approximately $97 million per launch in its reusable configuration — and roughly $150 million in fully expendable form — Falcon Heavy is already a relative bargain compared to the now-retired Delta IV Heavy, which cost ULA customers between $350 and $400 million per flight. But the market for truly heavy payloads simply isn’t large enough to sustain monthly cadence, and SpaceX has never pretended otherwise. The vehicle was designed for a specific tier of mission: very large commercial communications satellites, deep-space science flagships too heavy for a single Falcon 9, and high-orbit national security payloads demanding maximum throw weight. When those missions come, Falcon Heavy flies. When they don’t, it waits.

What brings it back today is the final satellite of Viasat’s ambitious ViaSat-3 program: the ViaSat-3 F3 spacecraft, destined for the Asia-Pacific region, built by Boeing, and configured with a Ka-band payload designed to add more than one terabit per second of broadband capacity to Viasat’s global network. At approximately 6.6 metric tons, ViaSat-3 F3 is too heavy for a Falcon 9 to lift to the transfer orbit Viasat needs — particularly one favorable enough for the satellite’s electric propulsion to complete the journey to geostationary orbit on a reasonable timeline. As confirmed by Viasat’s own leadership, Falcon Heavy’s superior performance means the spacecraft can be delivered to an orbit just below geostationary apogee with only about three degrees of inclination — cutting weeks off the months-long electric orbit-raising process compared to what an Atlas V delivery required for ViaSat-3 F2.

The Mission in Detail: Engineering a Global Network

The technical architecture of this mission rewards attention, because it illustrates exactly why some satellite programs still require the big rocket rather than the commercially expedient one.

ViaSat-3 F3 will be deployed to geosynchronous transfer orbit — an elliptical orbit with a perigee in the low tens of thousands of kilometers and an apogee near geostationary altitude — approximately five hours after liftoff from LC-39A. From there, the spacecraft’s all-electric propulsion system takes over, gradually raising and circularizing the orbit over the course of roughly two months until ViaSat-3 F3 arrives at its reserved slot at 158.55 degrees East longitude, directly above the Pacific Ocean at geostationary altitude of 35,786 kilometers. Once in position, Viasat expects rigorous bus and payload testing before a commercial service entry expected by late summer 2026.
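The 35,786-kilometer figure in the paragraph above is not arbitrary: it falls straight out of Kepler's third law, because a geostationary satellite must complete exactly one orbit per sidereal day. A quick check with standard constants:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86_164.0905  # one sidereal rotation of Earth, seconds
R_EARTH = 6_378.137e3       # Earth's equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * r^3 / mu  =>  r = (mu * T^2 / (4*pi^2))^(1/3)
r = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000

print(f"Geostationary orbital radius: {r / 1000:,.0f} km")  # ~42,164 km
print(f"Altitude above the equator:   {altitude_km:,.0f} km")  # ~35,786 km
```

Any satellite parked lower orbits too fast and drifts east; any higher, too slow and drifts west. Electric orbit-raising, as described above, is the slow spiral from the transfer ellipse up to exactly this radius.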

The satellite itself is a remarkable piece of engineering: a fully flexible Ka-band broadband spacecraft designed to direct its capacity dynamically, rather than assigning fixed amounts of spectrum and power to fixed geographic beams as earlier generations of GEO satellites did. In the words of Viasat’s vice president of space systems, Dave Abrahamian, the constellation’s hallmarks are “a huge amount of absolute capacity, but also the flexibility to put it wherever you need it, whenever you need it.” Traditional satellites — including Viasat’s own earlier generations — operate more like fixed highway lanes: once built, the bandwidth goes where the beams point, regardless of where demand actually flows on any given day. ViaSat-3 F3 is architected to be more like a managed network, allocating spectrum and power dynamically in response to real-time demand.

This flexibility matters enormously for the commercial aviation market, which constitutes one of Viasat’s primary revenue streams. Airline routes shift seasonally and commercially. Demand spikes during peak travel periods and across high-traffic corridors. A satellite that can concentrate capacity over the North Pacific during the morning push and redistribute it over Southeast Asian leisure routes in the afternoon represents a fundamentally different commercial proposition than one locked into static beam patterns.

For the booster side of the mission, SpaceX will fly side boosters B1072 and B1075 back to Cape Canaveral Space Force Station, landing at LZ-2 and the recently commissioned LZ-40 respectively. B1075 carries a flight heritage that includes SDA orbital transport missions, multiple Starlink deployments, and an international synthetic aperture radar spacecraft. Their recovery is not merely theater — it is the economic logic underlying SpaceX’s cost model, allowing the amortized cost of booster manufacturing to be spread across multiple flights. The central core, carrying nothing but a nearly empty propellant load by the time it has done its work, will be expended — a trade-off SpaceX has consistently made on GTO missions demanding maximum performance from the vehicle’s core stage.

Completing the Constellation: What ViaSat-3 F3 Means for Viasat

The ViaSat-3 program has not had an easy journey. When ViaSat-3 F1 arrived in orbit in May 2023, engineers discovered an antenna deployment anomaly that severely constrained the satellite’s throughput — reducing it to an estimated 5 to 10 percent of its intended capacity. For a company that had bet heavily on this generation of satellites to compete against the rising LEO constellations, the setback was consequential. Customers noticed. Starlink, with its terrestrially-derived latency characteristics and rapidly growing coverage, captured aviation connectivity contracts that Viasat had hoped to retain.

The setback also complicated Viasat’s financial position at a moment when the company was simultaneously integrating its transformative 2023 acquisition of Inmarsat — a deal that expanded the company’s maritime and government connectivity business dramatically but also loaded the balance sheet. ViaSat-3 F2, the second spacecraft in the constellation targeting the Americas and EMEA regions, flew on a ULA Atlas V and has been progressing through in-orbit testing, with its reflector deployment now completing after challenges posed by the spring eclipse season. As Viasat’s latest confirmation notes, F2’s final deployments are expected to complete over the coming weeks — meaning the company is, finally, beginning to see its multi-year, multi-billion-dollar satellite program deliver on its intended architecture.

ViaSat-3 F3 completing the constellation closes a strategic gap that has left Viasat without full global high-throughput coverage since the program began. The Asia-Pacific region — home to some of the world’s busiest aviation corridors, fastest-growing maritime trade routes, and largest underserved broadband markets — has been waiting for this capacity. As Abrahamian told Spaceflight Now, “We have a number of airline customers in the APAC region that are really anxious to get this capacity online so they can start serving their customers better.” When F3 enters service, the ViaSat-3 constellation will represent a genuinely global, high-capacity, dynamically flexible broadband network — something no single competitor can claim across every orbit regime.

The Broadband Wars: GEO Renaissance or Rearguard Action?

Here is where the analysis must become honest about the headwinds rather than merely celebrating the engineering achievement.

Viasat’s strategic context is brutal. Starlink has grown to more than two million subscribers, and its low-Earth orbit architecture delivers latency characteristics — typically below 40 milliseconds — that geostationary satellites, orbiting at altitudes 60 times higher, cannot physically replicate. The laws of physics impose a minimum round-trip delay of roughly 550 milliseconds on GEO communications; for most broadband applications this is acceptable, but for latency-sensitive traffic including video conferencing, interactive gaming, and real-time financial transactions, it represents a structural disadvantage no amount of throughput can fully compensate.
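The latency gap described above is pure geometry, and it is worth seeing the numbers. A two-way exchange over a bent-pipe GEO link travels user to satellite to gateway, then back again: four legs of at least 35,786 km each. A LEO bird at roughly 550 km shortens every leg by a factor of about 65. A sketch of the propagation-only floor (real services add processing and routing overhead on top, which is why quoted figures run higher; the 40,000 km slant range is an illustrative assumption for a user not directly beneath the satellite):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def round_trip_ms(leg_km: float, legs: int = 4) -> float:
    """Minimum propagation delay, in ms, for a bent-pipe request/response."""
    return legs * leg_km * 1000 / C * 1000

geo_nadir = round_trip_ms(35_786)  # satellite directly overhead: best case
geo_slant = round_trip_ms(40_000)  # typical slant range (assumed figure)
leo       = round_trip_ms(550)     # Starlink-class altitude, overhead

print(f"GEO best case:     {geo_nadir:.0f} ms")  # ~477 ms
print(f"GEO typical slant: {geo_slant:.0f} ms")  # ~534 ms
print(f"LEO best case:     {leo:.1f} ms")        # ~7 ms before routing overhead
```

No amount of engineering on the satellite moves these floors; they are set by the orbit. That is the structural disadvantage the paragraph above describes.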

Amazon’s Project Kuiper presents a different competitive threat: well-capitalized, backed by Amazon Web Services infrastructure, and designed from the outset for the enterprise and consumer markets where Viasat has historically been strongest. Kuiper has struggled with deployment pace — the program had launched only 78 satellites by mid-2025, far behind the FCC’s schedule — but Amazon’s financial resources and strategic motivation to protect its cloud business by owning connectivity infrastructure represent a long-term competitive pressure that will not diminish.

And yet. It would be a mistake to write GEO satellites out of the connectivity story, for several reasons that the ViaSat-3 program crystallizes.

First, coverage economics. A single geostationary satellite at 35,786 kilometers altitude covers roughly one-third of the Earth’s surface. A LEO constellation providing equivalent global coverage requires hundreds to thousands of individual spacecraft, each with a design life measured in years rather than decades. The capital efficiency of GEO for serving large geographic areas — particularly over oceans and sparsely populated territories where ground infrastructure is limited — remains compelling. ViaSat-3 F3’s coverage of the Asia-Pacific region, from a single orbital position, encompasses an area that would require a significant fraction of a LEO constellation to replicate.
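The "roughly one-third of the Earth's surface" claim above can be checked with spherical-cap geometry. From geostationary altitude, about 42 percent of the globe is geometrically in view, but requiring a practical minimum elevation angle at the ground terminal (10 degrees is assumed here as a common planning figure) trims the usable footprint to roughly a third:

```python
import math

R = 6_378.137  # Earth's equatorial radius, km
H = 35_786.0   # geostationary altitude, km

def coverage_fraction(min_elev_deg: float) -> float:
    """Fraction of Earth's surface seeing the satellite above a given elevation."""
    eps = math.radians(min_elev_deg)
    # Earth-central half-angle of the coverage cap for a satellite at radius R+H
    lam = math.acos(R * math.cos(eps) / (R + H)) - eps
    # Area of a spherical cap of half-angle lam, as a fraction of the sphere
    return (1 - math.cos(lam)) / 2

print(f"Horizon-to-horizon (geometric): {coverage_fraction(0):.0%}")   # ~42%
print(f"Above 10 deg elevation:         {coverage_fraction(10):.0%}")  # ~34%
```

One spacecraft, one orbital slot, a third of the planet: that is the capital-efficiency argument for GEO in a single function call.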

Second, the defense and government market. Viasat has historically derived substantial and growing revenue from U.S. and allied government customers who value the satellite’s dedicated capacity, security architecture, and the ability to integrate with existing military communication networks. ViaSat-3 F3 explicitly introduces “new forms of resilience for US and international government customers,” per Viasat’s official launch confirmation. The national security satellite broadband market values characteristics — including resistance to jamming, controlled access, and sovereign oversight — that a commercially operated LEO megaconstellation does not automatically provide.

Third, the multi-orbit future. The most sophisticated satellite operators today are not choosing between GEO and LEO. They are building hybrid architectures that leverage the throughput and geographic efficiency of GEO alongside the latency characteristics of LEO, using intelligent ground terminals and network management to route traffic dynamically. Viasat’s own NexusWave service integrates its GEO capacity with OneWeb’s LEO network for maritime customers. The ViaSat-3 constellation, as it reaches full operational capability, becomes a cornerstone of this hybrid strategy rather than a standalone product competing head-to-head against Starlink on latency.

The Economics of Reusability and the Launch Market’s Quiet Monopoly

Step back from the satellite payload for a moment and consider the launch vehicle. Falcon Heavy’s twelfth flight in eight years is, by any conventional measure, an extremely low flight rate for a rocket of this capability. Yet SpaceX has maintained a 100 percent mission success rate across all twelve flights, and the booster recovery on dual RTLS missions has become so routine that it barely registers as remarkable. This combination — extreme reliability at very low cadence — reflects a deliberate commercial strategy that deserves scrutiny.

There is, in practical terms, no alternative to Falcon Heavy in the current market for very large GEO satellites requiring maximum performance to orbit. ULA’s Delta IV Heavy was retired in 2024. Ariane 6, which was originally scheduled to launch ViaSat-3 F3 before development delays and the post-Ukraine reshuffling of launch manifest assignments moved the spacecraft to Falcon Heavy, offers an alternative for European and international customers — but it has struggled to achieve reliable launch cadence and its payload capacity to GTO falls below Falcon Heavy’s peak performance in expendable or partial-recovery configurations. Blue Origin’s New Glenn is operational but has experienced anomalies in early missions, limiting customer confidence. ULA’s Vulcan Centaur serves the national security market but does not offer the throw weight that Falcon Heavy provides.

This effectively means SpaceX holds a de facto monopoly on western heavy-lift launch services for the largest GEO satellites. That is not a comfortable position for an industry that values competitive tension to discipline pricing and incentivize innovation. Viasat, to its credit, originally sought Ariane 6 specifically to maintain European launch options and reduce dependence on SpaceX. The inability of European industry to deliver that alternative on schedule — a consequence of years of chronic underinvestment in European launch infrastructure and the disruption caused by Russia’s elimination from commercial launch markets after 2022 — left Viasat with no practical choice but to return to SpaceX.

The concentration of launch capability matters for industrial policy reasons as much as commercial ones. NASA’s decision to launch Europa Clipper on Falcon Heavy, saving an estimated $2 billion compared to the Space Launch System, was fiscally prudent but also highlighted how completely the U.S. government’s civil launch needs have become dependent on a single private company. When that company is also developing Starlink — a direct commercial competitor to satellite operators like Viasat — the dependency creates tensions that regulators and policymakers are only beginning to grapple with seriously.

Critical Perspectives: Concentration, Fragility, and the Starship Shadow

Any honest assessment of today’s launch market must acknowledge the risks embedded in the picture it presents.

Market concentration is the most obvious concern. SpaceX’s dominance of the launch market — executing approximately half of all orbital launches worldwide in recent years, including virtually all U.S. commercial and government heavy lift — is without precedent in the space age. The company’s technical excellence is not in question. But technical excellence is not a sufficient safeguard against the risks that concentration creates: single points of failure in supply chain, the potential for pricing power to increase as competition diminishes, and the strategic complications that arise when a launch provider’s commercial interests are entangled with those of its customers. The European Space Agency and its member states have been reckoning with these consequences since Ariane 6 fell behind schedule; the U.S. government has been slower to act.

The ViaSat-3 F1 lesson is also worth carrying forward. A single antenna deployment anomaly on a satellite that cost hundreds of millions of dollars and several years to build reduced its throughput to a fraction of its designed capacity. For programs predicated on multi-terabit capacity, this kind of single-point failure can be financially devastating. The space insurance market absorbs some of this risk, but it cannot absorb the strategic cost of arriving at the GEO broadband market years late and at a fraction of expected capacity. The resilience of the ViaSat-3 program — its ability to absorb the F1 setback and continue toward F3 launch — reflects the financial depth that came with the Inmarsat acquisition. Smaller satellite operators would not survive an equivalent anomaly.

The Starship era represents a more fundamental disruption lurking behind today’s Falcon Heavy mission. SpaceX’s next-generation launch vehicle, still in flight testing, promises to carry payloads to low Earth orbit measured not in tens of metric tons but in hundreds — in a fully reusable configuration. When Starship reaches operational status, it will not merely compete with Falcon Heavy; it will displace it for most missions, while simultaneously enabling satellite constellation architectures of a scale and cost structure that will make today’s GEO programs look like the previous generation of space infrastructure — necessary, valuable, and eventually superseded.

The timing of ViaSat-3 F3 thus acquires a particular resonance. This spacecraft will likely remain in commercial operation for fifteen years or longer. By the time it retires from service in the early 2040s, the satellite broadband market will look almost unrecognizable compared to what we see today. The operators that survive will be those who have built the most flexible, multi-orbit, software-defined network architectures — and who have done so without betting so heavily on a single generation of hardware that they cannot pivot when the next generation arrives.

The Geopolitics of Coverage: Who Gets Connected, and Who Decides

Zoom out one more level, and the ViaSat-3 F3 launch carries implications that extend beyond corporate strategy into international relations and development economics.

The Asia-Pacific region is the world’s most economically dynamic. It is also the region with some of the most pronounced disparities in connectivity. The aviation market — Viasat’s primary immediate revenue target in the region — connects the affluent and the mobile. But the underlying capacity infrastructure that ViaSat-3 F3 provides will also serve maritime vessels, island communities, remote enterprise sites, and eventually, through service expansion, populations in some of the world’s most connectivity-starved areas.

This is not altruism on Viasat’s part; it is market expansion. But the geopolitical dimension is real. When U.S.-headquartered satellite operators extend high-throughput, high-reliability broadband coverage across the South China Sea, the Pacific Islands, and the maritime corridors of Southeast Asia, they are making infrastructure decisions that have strategic implications. The race between American and Chinese satellite operators for coverage of the Indo-Pacific region is not merely commercial — it is a contest over which country’s technical standards, legal frameworks, and network architectures become the default infrastructure for an economically and militarily critical region.

China’s own ambitions in this domain are serious and well-funded. China Satellite Network Group, the state-owned entity overseeing the Guowang LEO constellation, has filed for orbital slots that would place it in direct competition with Starlink and other western operators for limited spectrum resources. The completion of Viasat’s GEO coverage over the Asia-Pacific, combined with ongoing LEO buildout by U.S. operators, represents a concrete broadening of American-aligned connectivity infrastructure across a region where that presence matters.

Conclusion: The Weight of a Rare Launch

Eighteen months of quiet, and then: twenty-seven engines, 5.1 million pounds of thrust, a spectacular double booster landing, and a six-ton spacecraft on its way to geostationary orbit above the Pacific. There is something fitting about the rarity of Falcon Heavy flights. Each launch carries more weight — literal and figurative — than a routine mission does. Each one lands in a market landscape that has shifted since the last, and must be interpreted against that shifting context.

Today’s mission completes what Viasat set out to build. Whether that completion arrives soon enough, at sufficient capacity, and on competitive enough terms to hold meaningful market share against the LEO operators is the question that will determine the company’s next decade. The honest answer is: probably, in some segments; probably not, in others. The in-flight connectivity and government markets will sustain meaningful GEO operators for the foreseeable future. The mass consumer broadband market — where Starlink and eventually Kuiper will compete on price and latency — is likely beyond recovery for GEO-only strategies.

But the more durable insight from watching Falcon Heavy lift off today is about the infrastructure of ambition. The rocket that launched a Tesla Roadster toward Mars for a demo flight in 2018 has, in twelve missions, launched classified military satellites, a spacecraft headed for Jupiter, weather observation platforms critical for hurricane forecasting, and now the final piece of the first commercially deployed global multi-terabit broadband constellation. It has done so at a fraction of what its predecessors cost, with a booster recovery system that turns what used to be expensive expendable stages into reusable assets.

That is the story the launch market keeps telling, in different configurations and with different payloads: that the economics of access to space have been permanently disrupted, that the disruption is still accelerating, and that the satellites we put up today will operate in a world the launch industry of a decade ago could not have anticipated. ViaSat-3 F3 will look down from 35,786 kilometers at a world connected in ways its designers planned for, and ways they did not. That is, perhaps, the most precise definition of infrastructure worth building.


Discover more from The Economy

Subscribe to get the latest posts sent to your email.


Analysis

San Francisco, AI Capital of the World, Is an Economic Laggard


Artificial intelligence is creating unprecedented wealth at unprecedented speed. Its heartland is not.

On a drizzly Tuesday morning in the Mission District, a billboard advertising a generative AI platform — “Think Faster. Build Smarter. Scale Infinitely.” — towers over a sidewalk encampment where a dozen tents have been a fixture since 2022. Two blocks south, a gleaming co-working space charges $900 a month for a hot desk. Two blocks north, the food bank queue stretches past a mural of César Chávez. This is San Francisco in the age of artificial intelligence: a city simultaneously at the vanguard of history and strangely marooned by it.

The numbers are, by any reckoning, staggering. OpenAI is now valued at $300 billion, a figure that exceeds the GDP of most sovereign nations. Anthropic, its chief rival and fellow San Francisco resident, has attracted a cumulative $12 billion-plus in investment from Amazon and Google alone. Together with Databricks, Scale AI, and more than 90 other Bay Area AI unicorns — firms valued privately at over $1 billion — the region now hosts what economists at the Federal Reserve Bank of San Francisco have described as the most concentrated accumulation of venture-backed artificial intelligence capital in modern economic history. The Bay Area accounts for well over 60 percent of all U.S. AI venture investment, a ratio that has tightened rather than loosened as the boom has matured.

And yet San Francisco, the city itself, is struggling. Not in the polite way that prosperous cities occasionally describe mild slowdowns, but in measurable, sometimes painful ways that resist easy dismissal. Its office vacancy rate has hovered near 35 percent — the highest of any major American city — even as AI firms sign glossy leases in South of Market. The San Francisco Controller’s Office has reported persistent year-over-year declines in sales tax revenues from commercial corridors including the Tenderloin, Civic Center, and parts of SoMa. Overall city payroll employment remains below its 2019 peak. The city’s unemployment rate, which reached 6.1 percent in early 2024, has normalized but remains structurally elevated by the standards of the surrounding Bay Area. A Bureau of Labor Statistics analysis of metropolitan employment trends shows San Francisco County adding technology jobs at a rate significantly slower than Austin, Seattle, and even smaller metros like Raleigh-Durham — cities that lack anything approaching San Francisco’s density of AI valuation.

The paradox is not a curiosity. It is, I would argue, one of the defining economic puzzles of our era, and its resolution has profound consequences for how policymakers, urban planners, and civic leaders worldwide think about the geography of innovation.

The Boom That Doesn’t Boom

To understand why the AI wealth explosion has not translated into broad San Francisco prosperity, it helps to contrast the current moment with earlier technology cycles. The dot-com era of the late 1990s was, economically speaking, a mess — but it was a democratically distributed mess. Web startups hired copywriters, office managers, receptionists, catering staff, and building contractors in droves. The city’s employment base swelled. Restaurants in SoMa ran three seatings on weeknights. The construction crane became the defining civic symbol. When the crash came in 2001, it wiped out paper fortunes but had generated real intermediate employment across a wide swath of the local economy.

The social media boom of the 2010s was more capital-efficient, but its infrastructure still required armies of content moderators, trust and safety reviewers, logistics workers, and a sprawling class of middle-income tech employees — product managers, UX researchers, data analysts — who bought homes in Bernal Heights and spent meaningfully in neighborhood economies. As FRBSF economists noted at the time, each technology job in the Bay Area generated approximately five additional local jobs through multiplier effects: the phenomenon economists call the “local multiplier.”
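That multiplier arithmetic is easy to make concrete. A minimal sketch, using the roughly-five-jobs FRBSF estimate cited above; the headcounts are invented purely for illustration:

```python
# Stylized local-multiplier arithmetic. The multiplier of five comes
# from the FRBSF estimate cited above; the headcounts are hypothetical.

MULTIPLIER = 5  # additional local jobs supported per direct tech job


def total_local_jobs(direct_tech_jobs: int, multiplier: int = MULTIPLIER) -> int:
    """Direct hires plus the induced local-service employment."""
    return direct_tech_jobs * (1 + multiplier)


# A 2010s-style social media firm hiring 10,000 locally versus a
# frontier AI lab hiring 1,000 at a far higher valuation:
print(total_local_jobs(10_000))  # 60000
print(total_local_jobs(1_000))   # 6000
```

On these stylized numbers, a tenfold gap in direct headcount becomes a 54,000-job gap in total local employment, which is the difference the city's restaurants, contractors, and tax collectors actually feel.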

The AI boom is structurally different, and that difference is not accidental. Frontier AI development is, by design, extraordinarily capital-intensive and astonishingly labor-light relative to the valuations involved. OpenAI employs roughly 3,500 people globally — a workforce smaller than that of many mid-tier law firms — while commanding a valuation that rivals ExxonMobil’s market capitalization. Anthropic employs fewer than 1,000. The economics are not those of the dot-com era, with its profligate hiring; they are closer to those of the oil industry, where massive capital pools concentrate wealth among small technical elites and equity holders while the multiplier effects to broader communities remain stubbornly thin. “These are platform technologies, not employment technologies,” as one prominent Bay Area economist, who requested not to be named due to relationships with venture-backed firms, put it to me. “The value accrues to the equity table. The city’s tax base doesn’t feel it the same way.”

The K-Shaped City

The bifurcation this creates has given rise to what urban economists increasingly call the “K-shaped” San Francisco — a local variant of the macroeconomic phenomenon that gained currency during the pandemic’s uneven recovery. At the top of the K, AI founders, early employees with equity, and venture capitalists are accumulating wealth at rates with few peacetime precedents. Median home prices in Pacific Heights and Noe Valley have crossed $2.2 million, sustained not by broad middle-class demand but by a thin layer of extraordinary earners bidding aggressively against one another for a constrained housing stock. A three-bedroom in the Inner Sunset now draws multiple offers above $1.8 million, primarily from engineers with restricted stock units in companies most Americans have never heard of.

At the bottom of the K, conditions are considerably bleaker. San Francisco’s homeless population — estimated by the 2024 Point-in-Time Count at over 7,000 individuals unsheltered on any given night — has not declined meaningfully despite years of city expenditure exceeding $700 million annually on homelessness programs. The San Francisco Unified School District is cutting programs amid declining enrollment, as middle-class families — the teachers, nurses, civil servants, and small business owners who once comprised the city’s civic backbone — are displaced to Contra Costa County, Sacramento, or out of the state entirely. The Mission District, historically the city’s Latino working-class heart, has seen commercial vacancy rates rise and longtime restaurants shutter, replaced by AI-adjacent amenity businesses — cold-brew concept cafés, biohacking studios, prompt-engineering bootcamps — that cater to a narrow professional stratum.

This is not merely a humanitarian concern. It is an economic one. Cities function as ecosystems, and the systematic displacement of intermediate-income households corrodes civic infrastructure in ways that eventually undermine even the elite economy they house. When a Financial Times analysis of U.S. innovation hubs found that cities with the highest income inequality consistently show lower rates of long-run per capita GDP growth, San Francisco’s trajectory begins to look less like a triumph of creative destruction and more like a case study in what economists call “extractive urbanism.”

The Geography of the New Boom

There is a further wrinkle that standard economic analysis tends to understate: the AI boom is not happening in San Francisco in the way that previous cycles were. It is happening near San Francisco, in ways that direct economic activity away from the city proper.

OpenAI’s headquarters are in the Mission District, yes — but its massive new data center investments are in Texas and Iowa, where land is cheap and power is abundant. Anthropic’s principal offices are in San Francisco, but its computational infrastructure runs on AWS servers in Northern Virginia. The physical apparatus of AI — the chips, the cooling systems, the high-voltage power grids — is deployed wherever real estate and regulatory conditions are most favorable, which is almost never an expensive American coastal city. NVIDIA, the company that has perhaps done more than any other to make the AI boom possible, is headquartered in Santa Clara. Its revenue — now exceeding $130 billion annually — flows to shareholders and employees distributed globally, with a relatively modest footprint in San Francisco’s commercial property or retail tax base.

Meanwhile, within the Bay Area itself, the center of gravity of AI office activity has shifted from the downtown Financial District — where vacancy remains cavernous — toward specific corridors in SoMa, Mission Bay, and increasingly to the Peninsula cities of Palo Alto and Menlo Park. This is consequential because San Francisco’s tax structure is highly sensitive to downtown commercial activity. The city’s gross receipts and payroll taxes, which generate a substantial portion of the general fund, correlate strongly with downtown office utilization. A CBRE market report from early 2026 found that while AI firms account for the majority of new San Francisco office leases by square footage, average lease sizes are modest — reflecting smaller headcount per dollar of valuation than any previous technology cycle — and many are structured as flexible or short-term arrangements that generate lower assessed values.

The Talent Paradox

The AI boom has also introduced a talent paradox that complicates simplistic narratives about technology creating broadly-shared prosperity. AI frontier labs do not hire broadly — they hire extraordinarily selectively. The competition for PhD-level machine learning researchers has driven starting compensation packages — salary, signing bonus, and equity — to levels that can exceed $1 million annually at OpenAI and Anthropic. These are not the figures of a democratized labor market. They represent the concentration of enormous economic rents into an extremely small professional cohort, most of whom were educated at a handful of elite universities and many of whom are not originally from San Francisco or even the United States.

For local workers without specialized AI credentials, the labor market effects are mixed at best and negative at worst. Research from the Brookings Institution suggests that AI automation is already displacing routine cognitive tasks in the Bay Area — in law, in finance, in customer service — faster than new AI-specific employment is being created for non-specialist workers. A legal secretary in a San Francisco firm, a junior financial analyst at a wealth management boutique, a graphic designer at a marketing agency: these roles are being restructured or eliminated at a pace that the AI boom’s most enthusiastic advocates rarely acknowledge. The net employment effect locally may be, for now, close to zero for workers without advanced technical qualifications — and negative in some sectors.

Policy Implications and the Risk of Imitation

San Francisco’s predicament carries urgent implications for the dozens of cities and regional governments worldwide that are racing to position themselves as “AI hubs” — from London’s Silicon Roundabout to Seoul’s Digital Innovation District, from Dubai’s AI Quarter to Paris’s Station F. The implicit logic of these initiatives is that concentrating AI capital and talent generates broad local prosperity. San Francisco’s experience suggests the causality is considerably weaker than assumed.

What might more inclusive AI urbanism look like? Several interventions merit serious consideration. First, taxation structures designed for an earlier technology era may be poorly calibrated for AI economics. A gross receipts tax that applies equally to a labor-intensive restaurant and a capital-intensive AI lab captures very different slices of economic activity. Policymakers in San Francisco — and elsewhere — should explore mechanisms that capture a larger share of the capital gains and equity appreciation generated by AI firms, rather than relying primarily on payroll and commercial activity taxes that AI firms generate only modestly.

Second, housing supply is not a peripheral concern. The bifurcated real estate market that AI wealth is intensifying steadily prices out the intermediate-income households whose presence makes a city function. Serious upzoning — not the incrementalist versions that California has periodically attempted — combined with mandatory inclusionary requirements calibrated to actual construction costs, is an economic necessity, not merely a social preference.

Third, there is a role for proactive investment in AI-adjacent skills among existing residents. The notion that AI’s benefits will trickle down automatically is not supported by San Francisco’s data. Active reskilling programs, community college partnerships with AI firms, and apprenticeship models — of the kind that Germany’s Fraunhofer Institutes have pioneered for industrial technology — represent a more deliberate approach to inclusive AI growth.
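The first of these points, that receipts-based taxation misses AI-era value creation, can be illustrated with a toy calculation. The flat 1 percent rate and every dollar figure below are invented for the sketch; San Francisco's actual gross receipts schedule is tiered and far more complex:

```python
# Toy comparison of what a gross receipts tax captures from a
# labor-intensive business versus a capital-intensive AI lab.
# The 1% rate and all dollar figures are hypothetical.

GROSS_RECEIPTS_RATE = 0.01


def city_tax(gross_receipts: float) -> float:
    """City revenue under a flat gross receipts tax."""
    return gross_receipts * GROSS_RECEIPTS_RATE


restaurant_tax = city_tax(5e6)  # $5M in revenue  -> $50K to the city
ai_lab_tax = city_tax(200e6)    # $200M in revenue -> $2M to the city

# But suppose the AI lab's equity holders gained $50B in paper wealth
# over the same period. The city's take as a share of value created:
print(ai_lab_tax / 50e9)  # 4e-05, i.e. 0.004%
```

The toy numbers overstate the cleanliness of the comparison, but the direction is the point: the tax instrument sees revenue, not valuation, and AI-era value accrues overwhelmingly as valuation.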

The Longer View

It would be premature to conclude that San Francisco’s current economic weakness is permanent. Technology cycles are long, and second-order effects take time to materialize. The dot-com crash of 2001 looked, in the moment, like an economic catastrophe from which the city might never recover. A decade later, the mobile and social media boom had transformed San Francisco into one of the most dynamic urban economies in the world.

It is possible — perhaps even probable — that AI will eventually generate broader employment effects as the technology matures, as AI-native businesses proliferate beyond the frontier labs, and as demand for AI-enabled products and services creates new categories of work that are difficult to foresee today. Historians of technology, from Joel Mokyr to David Autor, have consistently found that transformative technologies ultimately create more employment than they destroy, even if the transition imposes severe distributional costs.

But the transition is the point. San Francisco is living through the transition right now, and its current management of that transition — the housing dysfunction, the displacement of intermediate-income households, the failure of AI wealth to flow through the city’s fiscal architecture — will determine whether the city emerges from this moment as a model or a cautionary tale.

The AI billboard in the Mission District promises to think faster, build smarter, scale infinitely. Below it, a man in a faded blue sleeping bag stirs as the morning fog burns off the Bay. San Francisco has always been a city of extraordinary distances between aspiration and reality. The AI boom has simply made those distances more visible, and the urgency of closing them more acute.

The world is watching. San Francisco, for its own sake and for the sake of every city that hopes to follow its model, would do well to notice.




Analysis

When the Buyer and Seller Are the Same Person: Private Equity’s Self-Dealing Crisis


Major institutional investors are sounding the alarm over continuation funds and related-party transactions. They are right to do so — and the industry’s window to self-correct is narrowing fast.

The most elegant swindle in finance is the one where the victim cannot quite prove they were robbed. Private equity has, over the past decade, constructed a variant of precisely this: a transaction structure in which the same firm simultaneously acts as seller, buyer, valuation agent, and vote-counter — all in a single, legally defensible package. It is called a continuation vehicle. And its explosive rise, from a niche exit technique to the dominant method of recycling assets in a frozen deal market, has finally brought the industry’s long-simmering conflict-of-interest problem to a boil.

On April 27, 2026, the Financial Times reported that major private equity backers — pension funds, sovereign wealth managers, endowments — are raising fresh, organised concerns over what they increasingly call “sweetheart deals”: transactions in which buyout firms appear to engineer outcomes that serve the general partner’s economic interests at the expense of the limited partners who actually supplied the capital. The concerns are not new. What is new is the volume, the specificity, and the seniority of the voices now making them.

This matters enormously. Private equity manages north of $7 trillion in assets globally, a sum that encompasses not just the portfolios of family offices and hedge funds but the retirement savings of nurses, teachers, and municipal workers whose pension funds have spent two decades increasing their allocations to the asset class. When the governance framework underpinning those allocations begins to crack, the consequences radiate far beyond the quarterly LP meeting.

The Continuation Fund Explosion — and Its Discontents

To understand the complaint, one must first understand the structure. A continuation vehicle — sometimes called a GP-led secondary — is, in essence, a mechanism by which a private equity firm transfers an asset from one of its funds into a newly created fund that it also manages. Existing limited partners are given a choice: cash out at the offered price, or roll their interest into the new vehicle. New investors — typically secondary market funds — provide fresh capital. The general partner earns new management fees and, critically, resets the carried-interest clock.

The structure was once deployed sparingly and, in many instances, genuinely beneficial: a prized asset requiring more time and capital to reach its potential, a strategic rationale for a longer hold that reasonable investors could assess. Those use cases still exist. But the market has mutated far beyond them.

Just 5% of private equity exits were continuation funds in 2021. By 2024 that figure had risen to 13%, and by 2025 it had reached approximately 20% — meaning one in five asset sales now involves a firm effectively selling to itself. The total dollar value of continuation funds was expected to hit $100 billion by the end of 2025, compared to $35 billion in 2019, according to Evercore data cited by the New York Times. By Q3 2025, US and European private equity firms had completed 105 continuation fund deals, a sizeable increase on the 87 sealed in the first three quarters of 2024.
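The dollar figures above imply a striking compound growth rate, which a back-of-envelope calculation confirms (using only the $35 billion 2019 figure and the $100 billion 2025 figure cited above):

```python
# Back-of-envelope compound annual growth rate (CAGR) for the
# continuation-fund volume figures cited above.

def cagr(start: float, end: float, years: int) -> float:
    """Constant annual growth rate that turns `start` into `end`."""
    return (end / start) ** (1 / years) - 1


growth = cagr(35e9, 100e9, 2025 - 2019)
print(f"{growth:.1%}")  # 19.1%
```

Roughly 19 percent a year, compounded through a period in which conventional exits were shrinking. That asymmetry is exactly what the critics are pointing at.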

The structural driver is not hard to find. With over $3 trillion of unrealised value sitting in global buyout portfolios, the three-to-five-year average holding period is firmly in the past; five to six years is the new baseline. IPO markets remain effectively closed to many sponsor-backed businesses. Trade sale multiples have compressed under the weight of higher interest rates and buyer caution. And global PE fundraising dropped 3.8% from $724 billion in 2024 to just under $700 billion in 2025 — the lowest level in more than five years. GPs desperate to show distributions, keep the fee stream alive, and retain the optionality to raise their next fund have discovered that continuation vehicles solve all three problems simultaneously.

This is the nub of the abuse: the incentives are completely misaligned. These transactions inherently involve conflicts of interest, as the sponsor effectively acts as both seller and buyer, and they present heightened sensitivity around valuation. The GP has every incentive to price the asset low enough to attract secondary buyers (who expect a discount to fair value) and to structure the carry reset in its own favour — all while presenting existing LPs with an artificial choice between accepting the offered price or remaining exposed to an illiquid position under the same manager who just demonstrated questionable judgment in setting the price.
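The carry-reset incentive is easier to see with numbers. A stylized sketch, assuming a flat 20 percent carried interest with no hurdle rate or fees (real fund terms are far more intricate, and every figure here is hypothetical):

```python
# Stylized carried-interest arithmetic showing why a continuation
# vehicle's reset cost basis can favour the GP. Flat 20% carry, no
# hurdle or fees; all figures hypothetical.

CARRY = 0.20


def carried_interest(exit_value: float, cost_basis: float) -> float:
    """GP's carry: 20% of any gain above cost, never negative."""
    return max(exit_value - cost_basis, 0.0) * CARRY


# An asset the original fund bought at $1.0B, ultimately worth $1.2B.
hold_to_exit = carried_interest(1.2e9, 1.0e9)    # $40M if the fund simply holds

# Alternative: transfer it to a continuation vehicle at a $0.9B mark.
old_fund_carry = carried_interest(0.9e9, 1.0e9)  # $0 -- "sold" below cost
cv_carry = carried_interest(1.2e9, 0.9e9)        # $60M on the reset basis

print(hold_to_exit)               # 40000000.0
print(old_fund_carry + cv_carry)  # 60000000.0
```

Identical underlying outcome, yet 50 percent more carry for the GP, before counting the fresh management fees the new vehicle generates. That is the structural tilt the limited partners are objecting to.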

Abu Dhabi’s Lawsuit and What It Reveals

The clearest window into the anatomy of a disputed continuation fund comes from a Delaware courtroom. On November 26, 2025, Abu Dhabi Investment Council (ADIC), part of the approximately $300 billion Mubadala investment group, filed a lawsuit in the Chancery Court of Delaware against Energy & Minerals Group LP and several affiliated private investment funds. The complaint alleged that EMG was engineering a conflicted, below-market sale of its stake in Ascent Resources — the largest private natural gas company in the United States — into an EMG-sponsored continuation vehicle. Energy & Minerals Group had already lined up investors for a continuation fund of at least $800 million when ADIC sued and halted the process.

The specific allegations are instructive. ADIC claims that EMG told existing LPs that Ascent was in bad shape, unable to go public or be sold, while telling prospective CV investors the opposite. Moreover, ADIC claims that the continuation vehicle would have reset management fees and carry on Ascent in a way that would have benefited the general partner. The complaint also alleged that EMG tried to force an LP vote on very short notice, provided different information to different investor groups, and declined to allow LPs to confer privately before the vote. According to ADIC, the transaction would harm limited partners, confer substantial benefits on EMG insiders, and allow the manager to reset performance-fee economics on an asset that would be unlikely to generate carried interest if sold in a conventional exit or public offering.

The case has since drawn in additional parties. Hedge fund Mason Capital Management accused law firm Kirkland & Ellis of providing conflicted legal advice, arguing that Kirkland was conflicted because it advises the company’s directors while also representing EMG, Ascent’s private equity sponsor. Kirkland & Ellis is, notably, the most prominent legal adviser to the private equity industry globally — a fact that underscores how deeply the conflicts can proliferate through a single transaction’s stakeholder map.

ADIC v. EMG is not an isolated incident. Similar litigation was filed in the Southern District of Florida, where plaintiffs alleged a claim for aiding and abetting a breach of fiduciary duty against private equity firm HIG Capital in connection with a continuation vehicle transaction. The pattern emerging across these cases is not one of rogue actors but of structural incentives that almost inevitably produce conflicted behaviour when left unchecked.

The Regulatory Vacuum — and Who Filled It With Nothing

For a brief moment in 2023, it appeared that American regulators might impose meaningful guardrails. The Securities and Exchange Commission, under Gary Gensler, adopted a sweeping set of Private Fund Adviser Rules that would have required, among other things, enhanced disclosure to investors, fairness opinions for adviser-led secondary transactions, and quarterly statements with transparent fee reporting. The industry pushed back with considerable force.

In a unanimous decision published on June 5, 2024, the US Court of Appeals for the Fifth Circuit vacated the entire set of regulations, ruling that the SEC had exceeded its statutory authority. The Court found that the SEC cannot issue rules that affect the “internal governance structure” of private funds, reasoning that by congressional design, private funds are exempt from regulation over their internal governance. For the private equity industry, it was an extraordinary legal victory. For institutional investors quietly watching the ADIC case unfold from Abu Dhabi to Houston to Wilmington, it was a warning shot in the opposite direction.

The regulatory vacuum that now exists is not benign. It does not mean that GPs can behave however they wish without consequence — common law fiduciary duties still apply, as the Delaware courts regularly remind the industry. But it does mean that the primary enforcement mechanism for LP protection is expensive litigation after the fact, which remains accessible mainly to large sovereign wealth funds, not to the pension plan of a mid-sized American municipality.

What fills the gap? Largely: LP advisory committees (LPACs), whose power varies enormously depending on negotiating leverage at the point of fund subscription; independent fairness opinions, which the GP selects and pays for; and the reputational discipline of the fundraising cycle, which applies only so long as the GP needs to raise another fund. Nearly half of asset managers are already using continuation funds to unlock liquidity, a scale at which informal norms are plainly insufficient.

The Broader Erosion of Trust

The sweetheart deal problem sits within a larger story of governance drift. Private equity’s extraordinary run of returns between 2009 and 2021 was achieved partly through genuine operational value creation and partly through the secular tailwind of falling interest rates inflating all asset values — a tailwind now demonstrably exhausted. The industry’s standard defence — that LPs are sophisticated parties who negotiated their terms and can walk away — was always more reassuring in theory than in practice.

Complexity as camouflage. Fee structures in private equity have proliferated to a degree that makes honest comparison nearly impossible. Management fees, monitoring fees, transaction fees, portfolio company consulting fees, and now NAV facility costs have been layered together in documents that run to hundreds of pages. The Fifth Circuit’s ruling that disclosure failures cannot constitute fraud when no duty to disclose exists is, in this context, a remarkable legal proposition: it essentially enshrines opacity as a protected feature of the asset class.
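To make the comparison problem concrete, consider a back-of-envelope sketch. Every figure below is hypothetical — the fee rates, fund size, and NAV facility are illustrative assumptions, not drawn from any real fund document — but the layering mechanism is the one described above:

```python
# Illustrative only: hypothetical fee rates on a notional $1bn fund.
# None of these figures come from a real fund's documents.

committed = 1_000_000_000          # LP commitments ($)
invested = 800_000_000             # capital actually deployed ($)

management_fee = 0.02 * committed        # headline 2% on commitments
monitoring_fees = 0.005 * invested       # charged to portfolio companies
transaction_fees = 0.01 * invested       # deal fees on acquisitions
consulting_fees = 0.003 * invested       # portfolio "consulting" charges
nav_facility_cost = 0.06 * 100_000_000   # interest on a $100m NAV loan

total = (management_fee + monitoring_fees + transaction_fees
         + consulting_fees + nav_facility_cost)

print(f"Headline fee: {management_fee / committed:.2%} of commitments")
print(f"All-in drag:  {total / committed:.2%} of commitments")
```

In this sketch the all-in drag (4.04%) is roughly double the headline management fee (2.00%) — and an LP reading a hundreds-of-pages agreement must reconstruct that total themselves, fund by fund, before any honest comparison is possible.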

The retail investor time bomb. Even as sovereign wealth funds and family offices, equipped with patient, low-leverage capital, continue to expand their presence, some of the more disciplined institutional money is growing selectively cautious. Into that gap is flowing retail capital — through the rapid expansion of evergreen funds, business development companies (BDCs), and interval funds now being marketed to wealthy individuals and, increasingly, defined-contribution pension plans. These vehicles expose retail investors to the full complexity of continuation fund conflicts without any of the negotiating leverage that a $300 billion sovereign wealth fund possesses.

The dry powder paradox. Dry powder reached a record high of $1.1 trillion, yet fundraising is falling. The apparent contradiction resolves this way: capital is concentrating at the top of the market, as LPs double down on the largest, most established managers while growing wary of the mid-market. The top ten private equity funds took their largest share of US fundraising in more than a decade. This concentration is itself a governance risk: it reduces competitive pressure on the largest GPs to reform their practices, since their fundraising success no longer depends on the goodwill of any individual LP.

The Industry’s Legitimate Defence — and Its Limits

To be fair about this — and the analysis demands fairness — continuation vehicles, done properly, are genuinely useful instruments. They allow high-quality businesses to reach their full potential rather than being rushed to a trade sale at an inopportune moment. Secondary market buyers provide genuine price discovery. Independent LPAC oversight, where robust, can catch and correct the most egregious valuation games. Blackstone, KKR, Apollo, and other large-cap managers have invested substantially in governance infrastructure precisely because their reputations are worth protecting.

The defence the industry most commonly deploys — that LP consent provides adequate protection — is true in its strongest cases and meaningless in its weakest. When an LP advisory committee is stacked with passive investors, when the “consent” vote is conducted on short notice without full information parity, and when the only alternative for a dissenting LP is illiquidity, “consent” is a procedural fiction. The EMG case is instructive precisely because it shows a sophisticated sovereign wealth fund with the resources to litigate feeling that the process was fundamentally rigged against it.

Reforms That Could Actually Work

The industry faces a choice that is more pressing than it currently appreciates. The combination of litigation accumulation, LP sentiment hardening, and the likely return of a more interventionist regulatory environment creates a narrow window for credible self-reform. The following measures would represent genuine progress:

  1. Mandatory independent valuation. Continuation fund transactions should require a fairness opinion from an adviser selected by the LPAC — not the GP — and paid for by the GP. This mirrors standard practice in public company mergers.
  2. Information parity requirements. All material information provided to prospective CV investors must simultaneously be provided to existing LPs. The asymmetric disclosure alleged in the EMG case — telling old investors the asset is stranded while telling new investors it is a hidden gem — should be explicitly prohibited in partnership agreements.
  3. Enhanced LPAC composition standards. Institutional investors should negotiate, at the fund-subscription stage, for LP advisory committees that include representatives with genuine independence from the GP’s existing commercial relationships.
  4. Carry reset restrictions. The practice of resetting carried interest on the same asset when it moves into a continuation vehicle should be subject to a clear disclosure requirement and, ideally, a high-consent threshold from existing LPs.
  5. Standardised disclosure frameworks. Industry bodies including the Institutional Limited Partners Association (ILPA) have published guidance on GP-led secondaries that is increasingly widely adopted. Making ILPA best practices a baseline expectation — rather than a mere aspiration — would significantly raise the floor.
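Why does the carry reset in item 4 matter so much? A deliberately simplified sketch — hypothetical figures, a flat 20% carry, no preferred-return hurdle, and no clawback across vehicles, all assumptions for illustration — shows the asymmetry that crystallising carry at a GP-set transfer price creates:

```python
# Hypothetical figures throughout: a sketch of why crystallising and
# resetting carried interest at a continuation-vehicle (CV) transfer is
# economically significant even when the asset's lifetime outcome is
# identical. Simplified: flat 20% carry on realised profit, no hurdle,
# no clawback between the original fund and the CV.

CARRY = 0.20

cost_basis = 100_000_000   # original fund's cost in the asset
cv_price = 150_000_000     # GP-influenced transfer price into the CV
final_exit = 120_000_000   # eventual exit, below the transfer price

# Route A -- single continuous hold: carry on the actual lifetime gain.
carry_single_hold = CARRY * max(final_exit - cost_basis, 0)   # ~$4m

# Route B -- via a CV: carry crystallised at the transfer mark-up, then
# reset. The CV itself exits underwater, so it pays no carry, but the
# transfer carry is not clawed back.
carry_at_transfer = CARRY * max(cv_price - cost_basis, 0)     # ~$10m
carry_in_cv = CARRY * max(final_exit - cv_price, 0)           # $0
carry_cv_route = carry_at_transfer + carry_in_cv
```

Same asset, same lifetime outcome for LPs, yet the CV route pays the GP two and a half times the carry of a continuous hold — which is why a disclosure requirement and a high consent threshold, rather than GP discretion, is the appropriate default.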

The macroeconomic environment makes reform urgent. With so much capital locked in ageing funds and secondary markets running hot, longer holding periods will drive more LPs and GPs to sell portfolios and release capital. Continuation vehicles will not disappear — nor should they. But their legitimacy depends on a governance framework that does not currently exist at the necessary scale.

The Long Shadow of Scandalous Precedent

Those with long institutional memories will recall that the savings-and-loan crisis, the collapse of Long-Term Capital Management, and the structured credit debacle that became the 2008 financial crisis all shared a common early chapter: sophisticated actors convincing themselves that conflicts of interest were “managed” until they manifestly were not, while regulators and investors were reassured by the complexity of the instruments involved.

Private equity is not facing a systemic risk of that order — the leverage is more contained, the assets more heterogeneous. But the governance rot, left untended, has a well-established tendency to compound. The institutional investors raising alarms in April 2026 are doing so from a position of comparative strength. They still have capital to deploy; they still have leverage in the fundraising conversation; they still have the reputational threat that matters to GPs with funds to raise.

That leverage diminishes the longer the conversation remains abstract. What is needed now is not another ILPA consultation paper or another conference panel on “alignment of interests.” What is needed is a clear, industry-wide recognition that selling assets to yourself, at a price you set, under terms you wrote, is not merely a governance grey area — it is a fundamental challenge to the proposition that private equity serves anyone other than its own general partners.

The investors are right to sound the alarm. The question is whether the industry will hear it before the courts, the regulators, or the capital flows make the answer irrelevant.


Copyright © 2025 The Economy, Inc. All rights reserved.
