Analysis
Agency in the Age of AI: Why Human Initiative — Not Artificial Agents — Will Define the Next Decade
On February 15, 2026, Sam Altman posted two sentences to X that encapsulated a decade of Silicon Valley ambition in a single breath. OpenAI had acquired OpenClaw, an open-source AI agent framework that could autonomously browse, code, and execute complex multi-step tasks — and its creator, Peter Steinberger, was joining the company to “bring agents to everyone.” The deal was quiet by tech-acquisition standards. No press conference. No billion-dollar number dropped to gasps at a conference. Just a pair of tweets that, read carefully, amount to a civilizational declaration: the age of artificial agents — AI systems that act on your behalf, that do rather than merely say — has arrived.
The question those tweets never asked is the one that ought to keep us up at night. Not what will AI agents do for us, but what will they do to us?
Agency in the age of AI is not, at its core, a technology question. It is a human one. And across law firms, accounting houses, actuarial desks, and the laptops of twenty-four-year-olds trying to build careers in knowledge work, the contours of that question are becoming impossible to ignore.
The Rise of Autonomous Agents — And the Hidden Cost to Human Agency
“Agentic AI” is the industry’s term of the moment, and it deserves a plain-language translation: these are AI systems that do not merely answer questions but complete tasks — booking travel, filing documents, auditing spreadsheets, drafting briefs, managing inboxes — with minimal human instruction and, in many configurations, minimal human oversight. OpenAI’s Frontier platform, launched in February 2026 and described as a home for “AI coworkers,” gives enterprises AI systems with shared context, persistent memory, and permissions to act inside live business workflows.
The promise is intoxicating. The average knowledge worker, Silicon Valley’s pitch goes, will soon command a small army of autonomous agents the way a senior partner commands junior associates. Scale your output. Compress your timelines. Democratize expertise.
What this narrative conspicuously omits is what happens to the junior associates.
The hidden cost of autonomous agents is not primarily economic, though the economic costs are real and arriving faster than most forecasts anticipated. It is something harder to quantify and easier to dismiss: the erosion of the conditions under which human agency develops, deepens, and compounds over a life. The young lawyer who never drafts her first clumsy brief. The accountant who never wrestles with his first gnarly audit. The actuary who never builds intuition through the friction of getting it wrong. Agency — the capacity to act, judge, and take meaningful initiative in the world — is not innate. It is cultivated. And the cultivation requires doing the hard, error-prone, occasionally humiliating work that AI agents are now absorbing at scale.
This is not a Luddite argument. It is a developmental one. And it is urgent.
Why Lawyers, Accountants, and Actuaries Are Questioning Their Futures
The conversation has broken into the open in the corridors of professional services with a candor that would have been unthinkable three years ago. Senior partners at major law firms will tell you, off the record, that they have paused or sharply curtailed junior associate hiring. The work that used to season young talent — contract review, discovery, due diligence — is being absorbed by AI agents with an efficiency that makes the economics of junior staffing almost impossible to justify.
The data corroborates what the corridors are whispering. Goldman Sachs Research reported in April 2026 that AI is erasing roughly 16,000 net U.S. jobs per month — approximately 25,000 displaced by AI substitution against 9,000 new positions created by AI augmentation. The occupations most exposed to substitution, Goldman’s economists found, include accountants and auditors, legal and administrative assistants, credit analysts, and telemarketers: precisely the entry-level and mid-career roles that have historically served as the scaffolding of professional development.
The generational impact is particularly sharp. Goldman Sachs found that unemployment among 20- to 30-year-olds in AI-exposed occupations has risen by nearly three percentage points since the start of 2025 — significantly higher than for older workers in the same fields. Entry-level hiring at the top fifteen technology companies fell 25 percent between 2023 and 2024, and continued declining through 2025. The AI-related share of layoffs discussed on S&P 500 earnings calls grew to just above 15 percent by late 2025, up sharply from the year prior.
The career advice for young professionals navigating the AI age in 2026 used to be: develop technical skills, stay adaptable, embrace tools. That advice, while still valid, has become insufficient. What young professionals now face is a more fundamental disruption: the removal of the proving grounds where professional judgment is forged. You cannot develop the discernment of a seasoned litigator if the briefs are always already written. You cannot build the instincts of a skilled auditor if the anomalies are always already flagged.
The global picture adds further texture. In South and Southeast Asia, AI agents are absorbing jobs in BPO (business process outsourcing) — a sector employing millions across the Philippines, India, and Vietnam — and compressing opportunities for a generation that had, through those very jobs, entered the formal economy and begun building transferable skills. In sub-Saharan Africa, where formal professional employment is expanding and could absorb more talent, the risk is that AI-agent adoption by multinationals short-circuits the very job categories through which that transition happens. The AI agents replacing lawyers, accountants, and junior professionals in New York and London do not stay politely within American and European borders.
Pew’s 2025–2026 Data: Americans Demand More Control Over AI
The public has registered its discomfort — clearly, consistently, and in terms that policymakers should find impossible to dismiss.
Pew Research Center’s June 2025 survey of 5,023 U.S. adults found that 50 percent say the increased use of AI in daily life makes them feel more concerned than excited — up from 37 percent in 2021. More than half of respondents (57 percent) rated the societal risks of AI as high, against just 25 percent who rated the benefits as high. Half or more were pessimistic about AI’s impact on human creativity (53 percent say it will worsen people’s ability to think creatively) and meaningful relationships (50 percent say it will worsen our capacity to form them).
These are not the views of technophobes. They are the views of citizens watching something happen to their world and struggling to articulate, against the momentum of trillion-dollar valuations and breathless press coverage, what exactly it is they are losing.
The Pew data on control is the most politically significant finding of recent years. Fifty-five percent of U.S. adults say they want more control over how AI is used in their own lives. Among AI experts themselves — people who have built careers in the field — the figure is 57 percent. The demand for human agency in the AI era is not a fringe sentiment or a technophobic reflex. It crosses partisan lines, educational levels, and even the expert-layperson divide. What is remarkable is how little the policy architecture of any major government has responded to it.
In Europe, the EU AI Act has established a framework, but its enforcement mechanisms remain nascent and its treatment of agentic systems is notably underdeveloped for a technology moving at this pace. In the United States, the legislative response has been fragmented, hamstrung by a political environment in which AI has become entangled with culture-war dynamics that obscure rather than illuminate the actual governance questions. In China, regulatory assertiveness on AI coexists with state-directed deployment that raises its own agency concerns — for the individual citizen, not the system.
The gap between what people want — more control, more say, more human agency in the AI era — and what institutions are delivering is widening. It is into this gap that the next generation of social innovators, philanthropists, and policymakers must step.
Philanthropy’s Critical Role in Shaping AI Guardrails and Opportunity
Here is where the story gets interesting — and where institutional funders, foundations, and philanthropic capital have a genuinely historic role to play that they have, with a handful of exceptions, yet to fully embrace.
The governance of AI — particularly of agentic AI systems acting autonomously in high-stakes domains — cannot be left to the companies building it, to legislators who struggle to define a “large language model” without staff assistance, or to the uncoordinated preferences of individual consumers. The OECD and the World Economic Forum have outlined frameworks, but frameworks without funding are architectural drawings without builders.
The intersection of philanthropy and AI governance has become one of the most consequential and underfunded in public life. The MacArthur Foundation, Ford Foundation, and a handful of tech-originated donors (Omidyar Network, Schmidt Futures) have begun investing in responsible AI research and policy. But the scale of investment remains dramatically misaligned with the scale of the disruption underway. According to the Brookings Institution, the communities most exposed to AI displacement — lower-income workers, first-generation professionals, workers in routine cognitive roles — are precisely those with the least access to reskilling resources, legal literacy about their rights, and political power to shape the governance conversation.
Philanthropic capital can address this at multiple levels. First, funding public dialogue: creating the forums, commissions, and civic processes through which communities can articulate what they want from AI and what they will not accept — the kind of deliberative democracy that corporate AI development timelines do not organically produce. Second, building ethical guardrails: supporting independent technical audits of AI agent systems, especially those deployed in high-stakes contexts like hiring, credit, legal aid, and healthcare. Third, investing aggressively in reskilling: not the corporate upskilling programs that optimize for the needs of existing employers, but the genuinely human-centered education investments that give people the capacity to navigate a changed economy on their own terms. Fourth, and most visibly, creating opportunity for young people — the generation that stands to be most directly affected by the removal of the proving grounds of professional learning.
The philanthropic AI governance opportunity is not about slowing innovation. It is about ensuring that the benefits of innovation are not captured exclusively by those who already own the infrastructure, while the costs — in disrupted careers, eroded agency, and stunted development — are borne by everyone else.
Reclaiming Agency: What Young People, Leaders, and Funders Must Do Now
The future of human agency in the AI era will not be decided in Palo Alto. It will be decided in classrooms, in courtrooms, in legislative chambers, in the boardrooms of foundations, and in the daily choices of individuals about which tasks they hand to machines and which they insist on doing themselves — not because machines cannot do them, but because the doing is the point.
For young professionals — the generation building careers in the AI age of 2026 — the imperative is not to compete with AI agents on their own terms. That is a race designed for machines. The imperative is to cultivate what agents cannot: moral judgment, relational intelligence, contextual wisdom, creative vision, the capacity to care about what you’re doing and why. These are not soft skills. They are the hardest skills. They compound over a lifetime in ways that no model weight or token count does. Protect your learning curve fiercely. Seek out the friction that develops judgment. Resist the temptation to outsource your thinking to systems that are, however impressive, fundamentally indifferent to your growth.
For leaders — in business, government, education, and civil society — the reclamation of agency requires building institutions that are honest about trade-offs. Does AI erode human agency? In its current deployment trajectory: yes, in specific and important ways. The right response is not panic, and it is not denial. It is design. Invest in human-AI collaboration frameworks that genuinely keep humans in the loop, not as a compliance formality but as a developmental reality. Design apprenticeship and mentorship structures that survive the automation of the tasks around which they were traditionally built. Insist on AI impact assessments before deploying agentic systems in professional and educational contexts. Make the question of human development central to every AI deployment decision, not an afterthought.
For funders: this is the decade. The governance architecture being built — or not built — around agentic AI will shape the relationship between human agency and technological systems for a generation. The window for influence is not permanently open. Foundations that move early, with real capital and genuine intellectual seriousness, can help write the rules. Foundations that wait will be left funding the repair.
The global dimension matters here, too. The most consequential AI governance battles of the next decade may not be fought in Washington or Brussels, but in the Global South — in countries where the intersection of demographic youth, expanding educational access, and AI-driven disruption of professional labor markets creates conditions for either extraordinary opportunity or extraordinary waste of human potential. Philanthropic AI governance that ignores Lagos, Jakarta, and São Paulo is not global governance. It is just wealthy-country governance wearing a global mask.
The story Silicon Valley is telling about the age of AI is seductive and, in many of its details, accurate. Autonomous agents will transform professional life. Productivity will rise. Some categories of work will disappear and others will emerge. The arc, the industry insists, bends toward abundance.
What the story omits is the quality of the lives lived along that arc. The lawyer who never argued. The accountant who never judged. The twenty-three-year-old who handed her first decade of professional development to a system that learned everything and taught her nothing.
Agency in the age of AI is not a footnote to the productivity story. It is the story that matters most.
Two tweets launched the age of agentic AI. What we do next — in philanthropy, in policy, in education, in the daily texture of our professional and personal choices — will determine whether this age expands or diminishes what it means to be a capable, purposeful human being.
The question is not what AI agents will do for us. The question is what kind of agents we will choose to become.