
The Hidden Cost of AI ‘Workslop’: Why Professionals Are Creating It — and How Organisations Can Stop It


On a frigid Tuesday morning in January, a senior product manager at a Fortune 500 technology company opened what appeared to be a thoughtful three-page strategy memo from her colleague. The formatting was impeccable. The executive summary promised “actionable insights.” But as she read deeper, something felt wrong. The prose was oddly verbose yet strangely hollow—sentences that said everything and nothing simultaneously. Bullet points proliferated without prioritisation. Key decisions were buried in passive constructions. By the third paragraph, she recognised the telltale signs: this was AI-generated work, polished just enough to seem legitimate, but fundamentally empty.

She’d just encountered workslop.

Welcome to 2026’s defining workplace problem—one that paradoxically intensifies even as organisations invest billions in generative AI to boost productivity. While executives herald artificial intelligence as the great accelerator of knowledge work, something darker is emerging from the spreadsheets: a flood of low-quality, AI-generated content that masquerades as professional output while offloading cognitive labour onto everyone else.

What Is AI Workslop—and Why Should Leaders Care?

The term “workslop,” coined by researchers at Stanford University and BetterUp in 2025, describes AI-generated workplace content that meets minimum formatting standards but lacks substance, clarity, or genuine insight. Think of it as the professional equivalent of content farm articles: superficially plausible, fundamentally worthless, and designed more to signal effort than to communicate ideas.

Workslop manifests across every digital workplace surface. That rambling email that could’ve been two sentences. The slide deck with stock phrases like “synergistic opportunities” and “strategic imperatives” but no actual strategy. The meeting summary that somehow requires three pages to convey what everyone already discussed. The report that reads like a thesaurus exploded onto a template.

Unlike obviously bad writing, workslop is insidious precisely because it appears acceptable at first glance. It has proper grammar, professional vocabulary, formatted headers. It follows templates. But consuming it—trying to extract actual meaning—becomes exhausting cognitive work that the creator has outsourced to the reader.

According to research published in Harvard Business Review in January 2026, the average knowledge worker now encounters workslop in roughly 35% of internal communications, up from virtually zero two years ago. More alarmingly, the same research found that processing workslop consumes approximately four hours per week of professional time—time spent deciphering, clarifying, and essentially doing the cognitive work the original creator avoided.

The math is brutal. For a 1,000-person organisation where the average employee earns $80,000 annually, that’s approximately $9.2 million in annual productivity loss. And that’s the conservative estimate, accounting only for direct time costs. It excludes strategic errors from misunderstood communications, damaged professional relationships, and the slow erosion of organisational trust.
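That figure can be reproduced with back-of-the-envelope arithmetic. Here is a minimal sketch, assuming 2,080 working hours per year and a fully loaded hourly cost of about 1.15× base salary (both are illustrative assumptions, not figures from the cited research):

```python
# Back-of-the-envelope workslop cost model (illustrative assumptions only).
EMPLOYEES = 1_000
AVG_SALARY = 80_000            # USD per year, per the article
WORK_HOURS_PER_YEAR = 2_080    # 40 h/week x 52 weeks (assumption)
LOAD_FACTOR = 1.15             # benefits/overhead multiplier (assumption)
WORKSLOP_HOURS_PER_WEEK = 4    # per the research cited above

hourly_cost = AVG_SALARY / WORK_HOURS_PER_YEAR * LOAD_FACTOR
annual_loss = EMPLOYEES * WORKSLOP_HOURS_PER_WEEK * 52 * hourly_cost
print(f"Estimated annual productivity loss: ${annual_loss:,.0f}")
# → Estimated annual productivity loss: $9,200,000
```

Under these assumptions the model lands on roughly $9.2 million; a different overhead multiplier or working-year length shifts the total proportionally, which is why the article calls this the conservative, direct-time-only estimate.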

The Generative AI Productivity Paradox Takes Shape

Here’s the uncomfortable truth: we’re witnessing a generative AI productivity paradox.

Organisations have embraced AI tools at unprecedented speed. Forbes reported in late 2025 that 78% of Fortune 1000 companies now provide employees with access to ChatGPT, Claude, or similar platforms. Microsoft Copilot has penetrated 65% of enterprise customers. The promise seemed obvious: automate routine communications, accelerate document creation, amplify individual productivity.

Yet productivity gains remain stubbornly elusive. Research from the National Bureau of Economic Research found that while individuals using AI tools report feeling more productive, their colleagues frequently report the opposite—spending more time on email, meetings, and clarifications. The pattern emerging is stark: AI doesn’t eliminate work; it redistributes it, often unfairly.

When one person uses AI to generate a meandering three-page email in 30 seconds, they’ve saved themselves time. But if that email requires five recipients to spend 10 minutes each deciphering it, the organisation has lost 50 minutes to save one person half a minute of careful writing. It’s productivity theatre masquerading as innovation.
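The redistribution arithmetic in that example is worth making explicit, because it generalises to any message with more readers than writers. A minimal sketch using the figures above:

```python
# Net organisational time cost of one AI-generated email (figures from the text).
SENDER_TIME_SAVED_MIN = 0.5   # 30 seconds of careful writing avoided
RECIPIENTS = 5
DECODE_TIME_MIN = 10          # per recipient, spent deciphering the message

reader_cost = RECIPIENTS * DECODE_TIME_MIN        # total reader burden
net_loss = reader_cost - SENDER_TIME_SAVED_MIN    # organisation-wide net cost
print(f"Net loss to the organisation: {net_loss} minutes per email")
# → Net loss to the organisation: 49.5 minutes per email
```

The asymmetry only worsens as distribution lists grow: the sender's saving is fixed, while the reading cost scales with every additional recipient.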

“We’re creating a tragedy of the commons in corporate communications,” explains Dr. Sarah Chen, an organisational psychologist who studies technology adoption. “Every individual has an incentive to use AI to reduce their own cognitive load, but when everyone does it simultaneously, the collective burden actually increases.”

Why Intelligent Professionals Create Workslop: The Psychology of Cognitive Offloading

Understanding how to avoid AI workslop begins with understanding why people create it—and the answer is more nuanced than simple laziness.

The Seduction of Effortless Output

Generative AI tools offer something intoxicating to overwhelmed knowledge workers: instant competence. Faced with a blank screen and a looming deadline, the ability to summon 500 professionally formatted words with a single prompt feels like magic. The cognitive relief is immediate and powerful.

Neuroscience research shows that our brains are wired to take the path of least resistance. When AI offers to handle the “tedious” work of structuring arguments, finding synonyms, or expanding bullet points into paragraphs, declining feels almost irrational. Why struggle with phrasing when the machine can do it instantly?

But here’s what’s lost in that exchange: the struggle is the work. Transforming vague thoughts into precise language forces clarity. Wrestling with how to structure an argument reveals which ideas actually matter. The friction of writing is where understanding happens. When we outsource that friction to AI, we outsource the thinking itself.

Performance Pressure and the AI Arms Race

Many professionals create workplace AI slop not from laziness but from fear.

In organisations where colleagues are using AI, abstaining feels like unilateral disarmament. If your peer can produce a 20-slide deck in an hour while you’re still outlining yours, are you falling behind? If the team expects rapid-fire email responses and AI makes that possible, can you afford to slow down and craft thoughtful replies?

This dynamic creates a vicious cycle. As The Washington Post reported, many professionals describe feeling “obligated” to use AI tools even when they suspect the output is inferior. The perception that everyone else is using AI—whether accurate or not—becomes self-fulfilling.

“I know my AI-generated status reports aren’t as clear as what I used to write by hand,” admitted one consultant who spoke on condition of anonymity. “But leadership expects them weekly now instead of monthly, and I simply don’t have time to write four thoughtful reports a month. So I prompt, I polish for ten minutes, and I send. I hate that my name is on something mediocre, but what choice do I have?”

Organisational Incentives That Reward Volume Over Value

The workslop epidemic isn’t solely a people problem—it’s a systems problem.

Many organisations have inadvertently created incentive structures that reward the appearance of productivity over actual value creation. When success metrics emphasise deliverables completed, emails sent, or reports filed rather than decisions improved or problems solved, AI becomes an enabler of performative work.

Consider the phenomenon of “AI mandates without guidance.” CNBC documented how several major corporations have encouraged or even required employees to use generative AI tools—framed as “staying competitive” or “embracing innovation”—without providing clear frameworks for appropriate use. The message employees receive is essentially: use AI more, but we won’t tell you when or how.

The result is predictable. If using AI is valorised regardless of outcome, and quality is difficult to measure, employees will use AI for everything. Quantity becomes the proxy for competence.

Tool Design Flaws: When AI Makes Slop Too Easy

Finally, we must acknowledge that current generative AI tools are almost designed to produce workslop.

Most AI assistants operate on a principle of prolixity—when uncertain, they add words. A single sentence of input can yield paragraphs of output, all grammatically correct, much of it filler. The tools don’t naturally distinguish between situations requiring depth and those requiring brevity. They don’t ask, “Is this the right medium for this message?” or “Have I actually said anything meaningful?”

Moreover, the friction required to create workslop is near-zero, while the friction required to create something genuinely good remains high. Generating mediocre content takes one prompt. Creating exceptional content still requires human judgment, iteration, editing—the very work AI was supposed to eliminate.

Until tool designers build in more friction for low-value outputs or more support for high-value thinking, the path of least resistance will continue producing slop.

The Real Cost: Why AI Reduces Productivity Despite Individual Gains

The damage from AI workslop extends far beyond wasted time.

The Productivity Tax Compounds

Reporting from Axios and research from workplace analytics firm ActivTrak found that processing low-quality AI content doesn’t just consume time—it fragments attention and depletes decision-making capacity.

When professionals encounter workslop, they face a choice: invest energy trying to extract meaning, or request clarification (which creates more work for everyone). Either option imposes costs. The first depletes cognitive resources needed for strategic work. The second generates additional communication overhead and delays.

Over time, these micro-costs accumulate into macro-dysfunction. Teams spend more time in “alignment meetings” because written communications no longer align anyone. Projects stall because requirements documents are simultaneously verbose and vague. Strategic initiatives falter because the business case was generated rather than reasoned.

“We’re seeing organisations where 60% of email volume is essentially noise,” notes Michael Torres, a management consultant who advises on digital workplace practices. “People have started assuming that anything longer than three paragraphs can be safely ignored, which means genuinely important communications are now getting buried alongside the slop.”

Trust Erosion in Professional Relationships

Perhaps more corrosive than the time cost is the damage to professional credibility and trust.

When colleagues recognise that someone is routinely submitting AI-generated work with minimal thought, respect diminishes. The implicit message is clear: “I don’t value your time enough to think carefully before communicating with you.” Over time, this erodes the social capital required for effective collaboration.

Several organisations interviewed for this article reported a concerning trend: professionals increasingly ignore communications from colleagues known to produce workslop. One executive described creating an informal “filter list” of people whose emails he automatically skims for essential information while disregarding analysis or recommendations.

“It’s a tragedy,” he acknowledged. “Some of these are talented people. But I’ve learned that their AI-generated memos are unreliable, so I just extract the data and ignore their conclusions. That’s probably causing me to miss good ideas, but I don’t have time to sift through the filler.”

This dynamic is particularly damaging for early-career professionals who haven’t yet established reputations. When senior leaders encounter workslop from junior team members, they form lasting impressions about competence and judgment—impressions that may be undeserved but difficult to reverse.

Decision-Making Degradation

Most dangerous is workslop’s impact on organisational decision-making.

The problems with AI-generated work often hide in the gap between what’s written and what’s meant. A strategy recommendation might sound plausible but rest on flawed assumptions the AI didn’t understand. A risk assessment might list generic concerns without identifying the actual, specific vulnerabilities. A project post-mortem might catalogue events without extracting lessons.

When leaders make decisions based on AI-generated analysis they assume was human-reasoned, they’re building on potentially unstable foundations. Several executives described situations where strategic decisions were made based on compelling-sounding recommendations, only to discover later that the underlying analysis was superficial—the product of AI summarising publicly available information rather than domain expertise.

“We nearly acquired the wrong company because the due diligence memo was beautifully formatted nonsense,” confided one private equity principal. “The analyst had used AI to expand his notes into a full report, but the AI didn’t understand our investment thesis. We only caught it when someone noticed a logical inconsistency buried in paragraph fourteen.”

Workslop in the Wild: Real-World Examples Across Sectors

To understand the phenomenon’s pervasiveness, consider these anonymised examples from different industries:

Technology sector: A product team at a major software company implemented a policy requiring weekly written updates. Within a month, these updates—once concise and insightful—had bloated to multi-page documents filled with phrases like “optimising for synergistic outcomes” and “leveraging agile methodologies to drive stakeholder value.” Product managers were spending 90 minutes weekly generating these reports and roughly the same reading everyone else’s. Actual status could have been communicated in a 5-minute standup.

Professional services: At a global consulting firm, junior consultants began using AI to draft client deliverables, then having senior partners review and approve. Partners initially appreciated the time savings—until clients started providing feedback that reports were “generic” and “lacking industry insight.” The firm’s differentiation had always been deep contextual understanding; AI was systematically stripping that away. Client renewals declined 12% year-over-year.

Financial services: A European investment bank encouraged traders and analysts to use AI for market commentary and research notes. Within weeks, recipients were complaining that the analysis had become “undifferentiated” and “obvious.” The AI could summarise public information beautifully but couldn’t offer the proprietary insights that justified premium fees. The bank quietly reversed its AI encouragement policy.

Government/public sector: A national regulatory agency (outside the US) began using AI to draft policy guidance documents. The resulting materials were so dense and jargon-heavy that compliance officers reported spending more time interpreting the guidance than they would have under the previous, simpler system. What was intended to accelerate regulatory clarity instead created confusion.

These aren’t isolated incidents. They represent a pattern: organisations adopting AI for efficiency gains, initially seeing positive signals, then discovering that quality degradation imposes costs that eventually exceed the efficiency benefits.

How Organisations Can Stop the Workslop Epidemic: Evidence-Based Solutions

Addressing workslop requires interventions at multiple levels: cultural, structural, and technological. Leading organisations are pioneering approaches that preserve AI’s benefits while preventing its misuse.

1. Establish Clear Guidelines for Appropriate AI Use

The most effective organisations don’t ban AI—they define when and how it should be used.

The Financial Times documented how several European firms have implemented “traffic light” frameworks:

  • Green (encouraged): Using AI for initial research, brainstorming, formatting assistance, grammar checking, translation
  • Yellow (use with caution): Drafting external communications, summarising complex documents, creating templates
  • Red (prohibited or requires disclosure): Final client deliverables without human verification, strategic recommendations, performance reviews, legal documents

The key is specificity. Generic guidance like “use AI responsibly” proves meaningless in practice. Concrete rules—“all client-facing documents must be reviewed and edited by a human, with AI assistance disclosed if substantial”—provide actionable boundaries.
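One way such a framework stays specific is to encode it as a simple lookup that training materials or internal tooling can reference. A minimal sketch, with task names and the default behaviour invented for illustration (no cited firm's actual implementation is shown here):

```python
# Traffic-light AI-use policy as a lookup table (illustrative sketch only).
AI_USE_POLICY = {
    "initial_research": "green",          # encouraged
    "brainstorming": "green",
    "grammar_checking": "green",
    "external_draft": "yellow",           # use with caution
    "document_summary": "yellow",
    "client_deliverable": "red",          # prohibited without human verification
    "strategic_recommendation": "red",
    "performance_review": "red",
}

def check_ai_use(task: str) -> str:
    """Return the policy colour for a task. Unknown task types default to
    'yellow' (use with caution) so new uses are never silently greenlit."""
    return AI_USE_POLICY.get(task, "yellow")

print(check_ai_use("client_deliverable"))  # → red
```

The design choice worth copying is the default: an unlisted task falls to “yellow” rather than “green”, forcing a conversation before a new AI use becomes habit.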

2. Train for Human-in-the-Loop Best Practices

Simply providing AI tools without training is like distributing scalpels without medical school. Leading organisations are investing in structured training programmes that teach effective AI collaboration.

These programmes emphasise several principles:

  • Use AI as a thought partner, not a ghostwriter: Engage AI in dialogue to refine your thinking, then write the final version yourself
  • Never send AI-generated content without substantial editing: If you can’t improve the AI’s output meaningfully, you probably don’t understand the topic well enough
  • Apply the “telephone test”: If you couldn’t explain the content verbally with the same clarity, don’t send the written version
  • Favour brevity over AI-generated expansion: If AI suggests adding paragraphs to your bullet points, resist unless each addition adds genuine value

Some organisations have implemented “AI literacy” certification programmes, similar to data security training, ensuring all employees understand both capabilities and limitations.

3. Redesign Incentives to Reward Quality Over Quantity

Stopping workslop ultimately requires addressing the organisational conditions that incentivise it.

Progressive firms are shifting metrics:

  • Instead of tracking “reports completed,” measure “decisions improved” or “clarity ratings” from recipients
  • Replace requirements for lengthy updates with brief, structured formats (Amazon’s famous six-page memos, but actually written by humans)
  • Implement 360-degree feedback that specifically assesses communication quality and efficiency
  • Recognise and reward professionals who communicate effectively with fewer, better-crafted messages

One technology company experimented with a provocative policy: any email longer than 200 words required VP approval. While ultimately too restrictive, the initial trial dramatically reduced communication volume and improved clarity. The modified version—any email over 200 words must include a three-sentence summary at the top—proved sustainable.

4. Build Technical Controls and Transparency

Some organisations are implementing technical measures to create accountability:

  • Watermarking or disclosure requirements: Some enterprise AI tools now include metadata indicating AI involvement, allowing recipients to calibrate expectations
  • Usage monitoring: Analytics that identify individuals generating unusually high volumes of AI content, triggering coaching conversations
  • Quality checking tools: AI-powered systems that ironically detect AI-generated content and flag it for human review before sending

While these approaches raise legitimate privacy concerns and shouldn’t become surveillance systems, transparent implementation can help organisations understand usage patterns and identify where intervention is needed.

5. Model Alternative Behaviour from Leadership

Perhaps most critically, senior leaders must demonstrate that thoughtful, concise human communication is valued and rewarded.

When executives send brief, carefully considered emails rather than AI-generated essays, they signal priorities. When leaders openly discuss their AI use—”I used ChatGPT to research this topic, then wrote this analysis based on what I learned”—they model appropriate transparency. When promotions go to people who communicate with clarity rather than volume, the message resonates.

“I started ending important emails with a note: ‘This email was written by me without AI assistance because this decision matters,’” shared one CFO. “It sounds almost comical, but the feedback was overwhelmingly positive. People told me they noticed the difference and appreciated the care.”

The Path Forward: Will Workslop Fade or Persist?

Looking ahead, several scenarios could unfold.

The optimistic view suggests that workslop represents growing pains—an inevitable phase as organisations learn to integrate powerful new tools. As AI literacy improves, social norms against slop solidify, and tools become more sophisticated at generating genuinely useful content, the problem may naturally recede.

Some evidence supports this optimism. The Economist noted in late 2025 that organisations in their second or third year of widespread AI adoption show better usage patterns than those in their first year. Cultures develop antibodies. People learn what works and what doesn’t.

The pessimistic view holds that workslop may be symptomatic of deeper limitations in how we’re deploying generative AI. If the fundamental value proposition is “create more content with less effort,” we shouldn’t be surprised when people create more low-value content. The problem isn’t user education—it’s the mismatch between the tool’s capabilities and the actual needs of knowledge work.

This perspective suggests we need different tools entirely. Rather than AI that helps you write more, perhaps we need AI that helps you think more clearly, summarise more concisely, or communicate more precisely. Tools designed for quality rather than quantity.

The likely reality probably lies between these poles. Workslop won’t disappear entirely—it’s too easy to create and too tempting under pressure. But organisations that take it seriously as a cultural and operational challenge can substantially mitigate it. Those that don’t will find themselves drowning in a flood of plausible-sounding nonsense, watching productivity gains evaporate despite significant AI investment.

The broader question is whether the current generation of generative AI tools will prove to be genuinely transformative for knowledge work or merely another technology that seems revolutionary until organisations discover its hidden costs. Workslop may be our first clear signal that the answer is more complicated than the hype suggested.

Conclusion: Choose Clarity Over Convenience

Two years into the generative AI revolution, we’re learning an uncomfortable truth: tools that make it easier to create content don’t automatically make communication more effective. Sometimes, they make it worse.

The solution isn’t to reject AI—the technology offers genuine value when deployed thoughtfully. But we must resist the siren call of effortless output and recognise that good communication, like good thinking, requires effort. There are no shortcuts to clarity.

For leaders, the imperative is clear: establish guardrails, model best practices, and redesign systems that inadvertently reward slop. Create cultures where concision is prized and where the quality of thinking matters more than the volume of deliverables.

For individual professionals, the choice is equally stark: you can either do the cognitive work yourself and build a reputation for clear thinking, or you can outsource that work to AI and accept the professional consequences. Your colleagues will notice the difference, even if they don’t say so.

The hidden cost of AI workslop isn’t just measured in dollars or hours. It’s measured in degraded decision-making, eroded trust, and the slow corrosion of professional standards. We’re at a fork in the road: one path leads toward more thoughtful integration of AI that amplifies human judgment; the other leads toward increasingly automated mediocrity.

Which path your organisation takes isn’t determined by technology. It’s determined by choices—about what you value, what you reward, and what you’re willing to tolerate.

Choose carefully. The clarity of your communications may determine the quality of your future.




DBS Hits S$1 Billion AI Value Milestone — But Agentic AI Poses Talent Challenges for Singapore Banks


DBS Bank achieves record S$1 billion in AI economic value for 2025, yet agentic artificial intelligence raises critical talent challenges across Singapore’s banking sector.

At precisely 8:47 a.m. on a humid November morning in Singapore’s Marina Bay financial district, a corporate treasurer at a mid-sized logistics firm receives a notification from her DBS banking app. The message, crafted by an artificial intelligence system that analyzed three years of her company’s cash flow patterns, freight payment cycles, and seasonal working capital needs, suggests restructuring S$2.3 million in short-term debt into a more tax-efficient facility—saving her firm approximately S$84,000 annually. She accepts the recommendation with a single tap. The AI executes the restructuring before her first coffee break.

This seemingly mundane interaction represents a seismic shift in Asian banking: the industrialization of intelligence at scale. For DBS Bank, Southeast Asia’s largest financial institution by assets, such moments are no longer experimental—they have become the measurable foundation of competitive advantage. In 2025, the bank achieved a landmark that few global financial institutions can match: S$1 billion in audited economic value directly attributable to artificial intelligence initiatives, a 33% increase from S$750 million in 2024, as confirmed by Nimish Panchmatia, the bank’s chief data and transformation officer.

Yet even as DBS celebrates this quantifiable triumph—publishing AI returns in its annual report with a transparency that borders on revolutionary—a more complex narrative is emerging across Singapore’s banking landscape. The rise of agentic AI, systems capable of autonomous decision-making and multi-step task execution, is forcing financial institutions to confront an uncomfortable truth: the same technologies delivering billion-dollar efficiencies are fundamentally reshaping what it means to work in banking.

The Audited Achievement: How DBS Monetizes Machine Intelligence

DBS’s S$1 billion milestone is remarkable not for its magnitude alone, but for its methodological rigor. In an industry where vague claims about “AI transformation” have become ubiquitous noise, DBS employs what Panchmatia describes as an “impact-based, transparent and auditable” control mechanism. The bank doesn’t merely estimate AI’s contribution—it proves it through A/B testing and control group analysis, treating machine learning deployments with the same statistical discipline traditionally reserved for clinical pharmaceutical trials.
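The control-group logic DBS describes can be illustrated in miniature. In the sketch below, every number is invented for illustration (DBS does not publish its attribution model at this level of detail); the point is the method: value is credited only to the uplift of the treatment group over a matched control, not to gross outcomes.

```python
# Control-group value attribution, in miniature (all figures invented).
treatment_revenue_per_customer = 120.0  # customers served by the AI model
control_revenue_per_customer = 100.0    # matched customers without the model
treatment_customers = 50_000

# Only the uplift over the control group counts as AI-attributable value;
# the baseline S$100 per customer would have been earned regardless.
uplift_per_customer = treatment_revenue_per_customer - control_revenue_per_customer
attributed_value = uplift_per_customer * treatment_customers
print(f"AI-attributable value: S${attributed_value:,.0f}")
# → AI-attributable value: S$1,000,000
```

The discipline Panchmatia describes is visible even in this toy version: if the control group is missing or poorly matched, the uplift term is meaningless, and the claimed value with it.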

This empirical approach reveals AI’s penetration across every operational layer. DBS has deployed over 1,500 AI and machine learning models across more than 370 distinct use cases, spanning customer-facing businesses and support functions. The bank’s fraud detection systems now vet 100% of technology change requests using AI-powered risk scoring, resulting in an 81% reduction in system incidents. In customer service, generative AI tools are cutting call handling times by up to 20%, boosting both productivity and satisfaction metrics.

Behind these achievements lies a decade-long strategic commitment that began in 2018, when DBS determined that the next wave of digital transformation would be data-driven. The bank invested heavily in structured data platforms, cultivated a 700-person Data Chapter of professionals, and—perhaps most significantly—fostered an organizational culture that treats experimentation not as a luxury but as operational necessity. CEO Tan Su Shan has made this explicit: “It’s not hope. It’s now. It’s already happening,” she stated at the 2025 Singapore FinTech Festival, emphasizing that AI’s contribution to revenue is no longer speculative.

The bank’s commitment to transparency extends to acknowledging trade-offs. Panchmatia cautions against the temptation to create a “micro-industry” that meticulously quantifies every penny of hoped-for value. If improvement cannot be clearly defined and measured—whether in cost reduction, revenue uplift, processing time, or risk mitigation—DBS considers that value nonexistent. This discipline has created what analysts at Klover.ai describe as a “self-reinforcing flywheel,” where demonstrated ROI justifies expanded investment, which generates more use cases, which in turn produces more measurable value.

The Agentic Shift: From Tools to Teammates

While DBS’s traditional AI achievements are impressive, the banking sector is now grappling with a more profound transformation: the emergence of agentic artificial intelligence. Unlike earlier generative AI systems that primarily assist with content creation or analysis, agentic AI can make decisions, execute tasks autonomously, and manage multi-step objectives with limited human supervision. McKinsey research suggests this represents not merely an incremental improvement but an “organization-level mindset shift and a fundamental rewiring of the way work gets done, and by whom.”

The implications are already visible across Singapore’s banking ecosystem. At Oversea-Chinese Banking Corporation (OCBC), data scientist Kelvin Chiang developed five agentic AI models that can complete in ten minutes what previously took a private banker an entire day—tasks like drafting comprehensive wealth management documents by synthesizing research reports, regulatory filings, and client preferences. Before deployment, Chiang took his team directly to the Monetary Authority of Singapore (MAS) to demonstrate safeguards and explain how staff would respond if the system “hallucinated” or generated false information.

Similarly, Sumitomo Mitsui Banking Corp. has launched a Singapore-based agentic AI startup specifically designed to accelerate automation in corporate onboarding and know-your-customer processes. The venture promises to reduce corporate account opening times from five days to two, and potentially compress loan processing from seven months to as little as five days. Mayoran Rajendra, head of SMBC’s AI transformation office, emphasizes that “100% accuracy can never be assumed,” maintaining human oversight through workflows that ensure every extracted data point remains traceable and auditable.

These systems represent more than productivity enhancements. They herald what industry analysts term “autonomous intelligence”—AI that doesn’t merely augment human decision-making but, in certain contexts, replaces it entirely. Gartner forecasts that by 2028, agentic AI will enable 15% of daily work decisions to be made autonomously, up from essentially zero in 2024. This trajectory poses fundamental questions about the future composition of banking workforces.

The Talent Paradox: Reskilling 35,000 While Competing for Specialists

Singapore’s banking sector employs approximately 35,000 professionals—a workforce now facing what could be the most significant occupational transformation since the digitization of trading floors in the 1990s. The scale of the challenge is reflected in the national response: MAS, in partnership with the Institute of Banking and Finance, has launched a comprehensive Jobs Transformation Map for the financial sector, identifying how generative AI will reshape key job roles and the upskilling required as positions are transformed and augmented by AI.

DBS alone has identified more than 12,000 employees for upskilling or reskilling initiatives since early 2025, with nearly all having commenced learning roadmaps covering AI and data competencies. The bank has simultaneously reduced approximately 4,000 temporary and contract positions over three years, though both OCBC and United Overseas Bank report no AI-related layoffs of permanent staff. This pattern suggests AI is changing job composition rather than job quantity—at least in the medium term.

Yet this transition reveals what Workday’s Global State of Skills report identifies as a “skills visibility crisis.” In Singapore, 43% of business leaders express concern about future talent shortages, while only 30% are confident their organizations possess the necessary skills for long-term success. More troubling, only 46% of leaders claim a clear understanding of their current workforce’s skills. This uncertainty becomes acute when competing for specialized AI talent. The reported acquisition of Manus, a Chinese-founded agentic AI startup, by Meta for over $2 billion—as noted by Finimize—illustrates the global competition for AI expertise. Nvidia CEO Jensen Huang has observed that roughly half of the world’s AI researchers are Chinese, a reminder that talent leadership will hinge on where people can build, raise capital, and sell worldwide.

For Singapore’s banks, this creates a dual challenge. They must simultaneously retrain existing workforces in AI literacy while attracting and retaining the scarce specialists capable of building proprietary systems. OCBC’s approach is instructive: the bank is training 100 senior leaders in coaching by 2027 to enable “objective and informed discussions about technology initiatives rather than emotional debates.” Meanwhile, UOB has partnered with Accenture to accelerate generative and agentic AI adoption—a “buy versus build” strategy that provides faster capability acquisition but potentially less proprietary institutional knowledge than DBS’s home-grown approach.

The human dimension extends beyond technical skills. Laurence Liew, director of AI Innovation at AI Singapore, emphasizes that agentic AI demands higher-order capabilities: “As AI agents gain more autonomy, the human role shifts from executor to orchestrator.” This transition requires not just coding proficiency but judgment, creativity, empathy, and the ability to manage autonomous systems responsibly—qualities that resist automation precisely because they are distinctly human.

The Regulatory Framework: Balancing Innovation and Accountability

Singapore’s regulatory response to AI’s proliferation reflects a philosophy that distinguishes the city-state from more prescriptive jurisdictions. In November 2025, MAS released its consultation paper on Guidelines for AI Risk Management—a document notable for what it doesn’t do. Rather than imposing rigid rules that might stifle innovation, MAS has established proportionate, risk-based expectations that apply across all financial institutions while accommodating differences in scale, scope, and business models.

Deputy Managing Director Ho Hern Shin explained the rationale: “The proposed Guidelines on AI Risk Management provide financial institutions with clear supervisory expectations to support them in leveraging AI in their operations. These proportionate, risk-based guidelines enable responsible innovation by financial institutions that implement the relevant safeguards to address key AI-related risks.”

The guidelines emphasize governance and oversight by boards and senior management, comprehensive AI inventories that capture approved scope and purpose, and risk materiality assessments covering impact, complexity, and reliance dimensions. Significantly, MAS is considering how to hold senior executives personally accountable for AI risk management, recognizing that autonomous systems create novel governance challenges traditional frameworks struggle to address.

DBS has responded by implementing its PURE framework (Purpose, Unbiased, Responsible, Explainable) and establishing a cross-functional Responsible AI Council composed of senior leaders from legal, risk, and technology disciplines. This council oversees and approves AI use cases, ensuring adherence to both regulatory requirements and ethical standards. The bank’s commitment to a “human in the loop” philosophy means AI augments rather than replaces human judgment, particularly in sensitive functions like risk assessment and critical customer interactions.

This collaborative regulatory approach has created what practitioners describe as permission to experiment within well-defined guardrails. When OCBC presented its agentic AI tools, regulators wanted to understand thinking processes, oversight mechanisms, and escalation protocols—not to obstruct deployment but to ensure responsible implementation. This pragmatism distinguishes Singapore from jurisdictions where regulatory uncertainty has become an innovation tax.

The Regional Context: Singapore’s Competitive Position

DBS’s AI achievements must be understood within the broader competitive dynamics of Asian banking. While DBS has built a significant lead through its decade-long investment in proprietary platforms and data infrastructure, competitors are pursuing different strategies with varying degrees of success.

OCBC, which established Asia’s first dedicated AI lab in 2018, has deployed generative AI productivity tools across its 30,000-employee global workforce, reporting productivity gains of approximately 50% in piloted functions. The bank’s AI systems now make over four million daily decisions across risk management, customer service, and sales—projected to reach ten million by 2025. OCBC’s “10x” initiative, which challenges every employee to deliver ten times baseline productivity, reflects an ambitious vision of collective organizational uplift through AI augmentation.

UOB’s recent partnership with Accenture signals a more accelerated adoption pathway, leveraging external expertise to compress development timelines. While this approach may yield faster deployment than DBS’s build-it-yourself philosophy, it raises questions about long-term differentiation. Analysis by Klover.ai suggests that “partner or buy strategies” can quickly acquire advanced capabilities but may generate less proprietary institutional knowledge and greater dependency on third-party vendors for core innovation.

Beyond Singapore, the regional picture is mixed. Hong Kong, Tokyo, Seoul, and Mumbai are all investing heavily in banking AI, but implementation varies widely based on regulatory environments, talent availability, and institutional risk appetites. McKinsey estimates that generative AI could add between $200 billion and $340 billion in annual value to the global banking sector—2.8% to 4.7% of total industry revenues—largely through increased productivity. The institutions capturing disproportionate shares of this value will likely be those that master not just the technology but the organizational transformation it demands.

The Ethical Dimension: AI With a Heart

Perhaps the most significant aspect of DBS’s AI strategy is its explicit framing as “AI with a heart”—a philosophy that acknowledges technology’s limitations and privileges human judgment in contexts where values, empathy, and cultural nuance matter. Panchmatia has articulated this as a shift from “user-centered AI” to “human-centered AI,” where systems actively support customer wellbeing, financial literacy, and positive societal impact rather than merely optimizing individual transactions.

This approach manifests in concrete design choices. DBS employs adaptive feedback loops that continuously refine customer insights based on behavioral responses. If a customer receives a nudge—such as an installment option for a large purchase—and chooses not to engage, that feedback adjusts future interactions. The system learns not just what customers do, but what they choose not to do, respecting autonomy while improving relevance.
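The underlying mechanism is a simple feedback loop: engagement strengthens a nudge type’s relevance weight, silence weakens it, and weights below a floor suppress the nudge altogether. The Python sketch below is purely illustrative, assuming an engagement-weighted score per nudge type; the class and method names are hypothetical and not drawn from DBS’s systems.

```python
class NudgeEngine:
    """Illustrative sketch: down-weight nudge types a customer ignores."""

    def __init__(self, decay=0.7, boost=1.2, threshold=0.3):
        self.decay = decay          # multiplier when a nudge is ignored
        self.boost = boost          # multiplier when a nudge is acted on
        self.threshold = threshold  # below this weight, stop sending

    def record_response(self, customer_weights, nudge_type, engaged):
        # Adjust this customer's relevance weight for one nudge type.
        w = customer_weights.get(nudge_type, 1.0)
        w *= self.boost if engaged else self.decay
        customer_weights[nudge_type] = min(w, 1.0)

    def should_send(self, customer_weights, nudge_type):
        # Respect revealed preference: suppress persistently ignored nudges.
        return customer_weights.get(nudge_type, 1.0) >= self.threshold
```

With the default parameters, a customer who ignores an installment offer four times in a row drops below the threshold and stops receiving it, while a single engagement pushes the weight back up—the system learns what customers choose not to do.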

The ethical stakes escalate with agentic AI’s increasing autonomy. As systems gain authority to make consequential decisions with limited oversight, questions about bias, fairness, transparency, and accountability become existential rather than peripheral. DBS’s external validation—receiving the Celent Model Risk Manager Award for AI and GenAI in 2025—suggests the bank’s governance approach is gaining industry recognition. Yet challenges persist. Gartner projects that nearly 40% of agentic AI projects will stall or be cancelled by 2027, primarily due to fragmented data and underestimated operational complexity.

The potential for AI to exacerbate social inequalities looms large. If automation primarily displaces routine cognitive tasks performed by mid-level professionals while concentrating gains among highly skilled specialists and capital owners, the technology could widen rather than narrow economic divides. Singapore’s comprehensive reskilling programs represent an attempt to democratize access to AI-augmented opportunities, but success is far from assured. As Workday observes, 52% of Singaporean business leaders cite reskilling time as a major obstacle, with 49% identifying resistance to change as a barrier.

The Path Forward: Can Singapore Maintain Its Lead?

As 2026 unfolds, Singapore’s banking sector stands at an inflection point. DBS’s S$1 billion AI value milestone demonstrates that machine intelligence can deliver measurable competitive advantage when implemented with rigor and transparency. The bank’s success reflects strategic foresight, substantial investment, cultural transformation, and—critically—the courage to publish audited results that expose both achievements and limitations.

Yet the transition to agentic AI introduces uncertainties that disciplined execution alone cannot resolve. The technology’s capacity for autonomous decision-making raises governance challenges that existing frameworks struggle to address. The competition for specialized AI talent is intensifying globally, with the world’s most innovative minds increasingly mobile and capital flowing to wherever regulatory environments and opportunities align. Singapore’s relatively small population—approximately 5.9 million—means the city-state cannot rely on domestic talent pipelines alone but must attract and retain international expertise through superior working conditions, intellectual stimulation, and quality of life.

The regional competitive landscape is also shifting. While Singapore currently enjoys a first-mover advantage in AI-enabled banking, Hong Kong, South Korea, and emerging financial centers are investing aggressively in competing capabilities. The question is whether Singapore’s collaborative regulatory approach, comprehensive reskilling programs, and established financial ecosystem can maintain differentiation as AI technologies commoditize and diffuse.

Perhaps the most profound uncertainty concerns whether the promise of AI augmentation will prove inclusive or exclusionary. If the technology primarily benefits those already privileged with access to elite education, digital literacy, and professional networks, it risks becoming another mechanism of stratification. Conversely, if thoughtfully deployed with attention to accessibility and opportunity creation, AI could democratize access to sophisticated financial services and expand economic participation.

DBS’s achievement of S$1 billion in AI economic value is undeniably impressive—a quantifiable demonstration that machine intelligence has moved from experimental novelty to operational bedrock. Yet as agentic AI systems gain autonomy and influence, Singapore’s banks face challenges that transcend technology: how to balance efficiency with employment security, innovation with accountability, competitive advantage with social cohesion. The city-state that figures out this balance first may not just maintain its lead in banking AI—it may define what responsible financial automation looks like for the rest of the world.

The corporate treasurer who accepted that AI-generated debt restructuring recommendation at 8:47 a.m. saved her firm S$84,000. But the larger question—whether the AI that enabled her productivity will ultimately create or destroy opportunities for others like her—remains stubbornly, provocatively open.



Inside Singapore’s AI Bootcamp to Retrain 35,000 Bankers: Reshaping Asia’s Financial Future


When Kelvin Chiang presented his team’s agentic AI models to Singapore’s Monetary Authority, he knew he was demonstrating something unprecedented. What used to consume an entire workday for a private banker—compiling wealth reports, validating sources of funds, drafting compliance documents—now takes just 10 minutes. But before Bank of Singapore could deploy these tools across its wealth management division, Chiang’s data scientists had to walk regulators through every safeguard, every failsafe, and every human oversight mechanism designed to prevent the system from “hallucinating” false information.

The regulators didn’t push back. They embraced it.

That collaborative spirit between government and industry defines Singapore’s radically different approach to the AI transformation sweeping global banking. While financial institutions in the United States and Europe announce mass layoffs—Goldman Sachs warning of more job cuts as AI takes hold—Singapore is executing the world’s most ambitious banking workforce retraining program. DBS Bank, OCBC, and United Overseas Bank are retraining all 35,000 of their domestic employees over the next two years, a government-backed initiative that represents not just a skills upgrade, but a fundamental reimagining of what it means to work in financial services.

The Revolutionary Scale of Singapore’s AI Training Initiative

The numbers tell only part of the story. Singapore’s three banking giants are investing hundreds of millions in a training infrastructure that reaches from entry-level tellers to senior executives. But unlike generic technology upskilling programs that plague many organizations, this bootcamp targets specific, measurable competencies needed to work alongside autonomous AI systems.

Violet Chung, a senior partner at McKinsey & Company, identifies what makes this initiative unique: “The government is doing something about it because they realize that this capability and this change is actually infusing potentially a lot of fear.” That acknowledgment of worker anxiety—combined with proactive solutions rather than platitudes—sets Singapore apart from Western approaches that often prioritize shareholder returns over workforce stability.

The Monetary Authority of Singapore (MAS) isn’t just cheerleading from the sidelines. Deputy Chairman Chee Hong Tat, who also serves as Minister for National Development, has made workforce resilience a regulatory expectation. The message to banks is clear: deploy AI aggressively, but ensure your people evolve with the technology. Singapore’s National Jobs Council, working through the Institute of Banking and Finance, offers banks up to 90% salary support for mid-career staff reskilling—an unprecedented level of public investment in private sector workforce development.

Understanding Agentic AI: The Technology Driving the Transformation

To grasp why 35,000 bankers need retraining, you must first understand what agentic AI does differently from the chatbots and recommendation engines that preceded it.

Traditional AI systems respond to prompts. Ask a question, get an answer. Agentic AI, by contrast, pursues goals autonomously. According to research from Deloitte, these systems can plan multi-step workflows, coordinate actions across platforms, and adapt their strategies in real-time based on changing circumstances—all without constant human intervention.
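The distinction can be made concrete. The loop below is a minimal, illustrative sketch of that plan-act-observe cycle, not any bank’s implementation; the function names, the confidence score, and the escalation threshold are all assumptions for illustration. The key structural difference from a chatbot is that the agent re-plans after every action and routes low-confidence steps to a human.

```python
def run_agent(goal, plan, execute, confidence_threshold=0.8):
    """Minimal plan-act-observe loop with human escalation.

    plan(goal, history) -> list of remaining steps
    execute(step)       -> (result, confidence)
    Steps whose confidence falls below the threshold are marked for
    human review instead of being applied automatically.
    """
    history = []
    steps = plan(goal, history)
    while steps:
        step = steps.pop(0)
        result, confidence = execute(step)
        status = "applied" if confidence >= confidence_threshold else "escalated_to_human"
        history.append((step, result, status))
        # Re-plan: the agent adapts its remaining steps to what it observed.
        steps = plan(goal, history)
    return history
```

A prompt-response system would stop after one `execute` call; the re-planning step and the escalation path are what make the workflow "agentic" while keeping a human in the loop.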

Consider OCBC’s implementation. Kenneth Zhu, the 36-year-old executive director of data science and AI, oversees a lab where 400 AI models make six million decisions every single day. These aren’t simple calculations. The models flag suspicious transactions, score credit risk, filter false positives in anti-money laundering systems, and even draft preliminary reports that once consumed hours of compliance officers’ time.

At DBS Bank, an internal AI assistant now handles more than one million prompts monthly. The bank has deployed role-specific tools that reduce call handling time by up to 20%—not by replacing customer service staff, but by handling the tedious documentation and data retrieval that used to interrupt human conversations. Customer service officers now spend their time actually serving customers, while AI manages the administrative burden.

The source of wealth verification process at Bank of Singapore exemplifies agentic AI’s potential. Relationship managers previously spent up to 10 days manually reviewing hundreds of pages of client documents—financial statements, tax notices, property valuations, corporate filings—to write compliance reports. The new SOWA (Source of Wealth Assistant) system completes this same analysis in one hour, cross-referencing Bank of Singapore’s extensive database and OCBC’s parent company records to validate information plausibility.

Bloomberg Intelligence forecasts that DBS will generate up to S$1.6 billion ($1.2 billion) in additional pretax profit through AI-derived cost savings—roughly a 17% boost. These aren’t theoretical projections. DBS CEO Tan Su Shan reports the bank already achieved S$750 million in AI-driven economic value in 2024, with expectations exceeding S$1 billion in 2026.

Inside the Bootcamp: How 35,000 Bankers Are Actually Learning AI

The phrase “AI bootcamp” might conjure images of programmers teaching SQL queries. Singapore’s program looks nothing like that.

The curriculum divides into three tiers, each calibrated to job function and AI exposure level:

Tier 1: AI Literacy for Everyone (All 35,000 employees)

  • Understanding what AI can and cannot do
  • Recognizing AI-generated content and potential hallucinations
  • Data privacy and security in AI contexts
  • Ethical considerations when deploying automated decision-making
  • Prompt engineering basics for interacting with AI assistants

Tier 2: AI Collaboration Skills (Frontline and Middle Management)

  • Working with AI co-pilots for customer service
  • Interpreting AI-generated insights and recommendations
  • Overriding AI decisions when human judgment is required
  • Monitoring AI system performance and reporting anomalies
  • Translating customer needs into AI-friendly inputs

Tier 3: AI Development and Governance (Technical Teams and Senior Leaders)

  • Model risk management frameworks
  • Building and validating AI use cases
  • Implementing responsible AI principles (fairness, explainability, accountability)
  • Regulatory compliance for AI systems
  • Strategic AI investment and ROI measurement

The Institute of Banking and Finance Singapore doesn’t just offer online modules. Through its Technology in Finance Immersion Programme, the organization partners with banks to create hands-on learning experiences. Participants work on actual banking challenges, developing practical skills rather than theoretical knowledge.

Dr. Jochen Wirtz, vice-dean of MBA programs at National University of Singapore, emphasizes the urgency: “Banks would be completely stupid now to load up on employees who they will then have to let go again in three or four years. You’re much better off freezing now, trying to retrain whatever you can.”

That philosophy explains why DBS has frozen hiring for AI-vulnerable positions while simultaneously training 13,000 existing employees—more than 10,000 of whom have already completed initial certification. Rather than the classic “hire-and-fire” cycle that characterizes American banking, Singapore pursues “freeze-and-train.”

The Human Reality: Fear, Adaptation, and Unexpected Opportunities

Not everyone welcomes their AI co-worker with open arms.

Bank tellers watching their branch traffic decline, back-office analysts seeing AI handle tasks they spent years mastering, relationship managers uncertain how to add value when machines draft perfect emails—the anxiety is real and justified. Singapore’s approach acknowledges these concerns rather than dismissing them.

Walter Theseira, associate professor of economics at Singapore University of Social Sciences, notes that banks are managing workforce transitions through “natural attrition rather than forced redundancies.” When employees retire, change roles internally, or move to other companies, banks increasingly choose not to backfill those positions. This gradual adjustment—combined with the creation of new AI-adjacent roles—softens the disruption.

The emerging job categories reveal how AI transforms rather than eliminates work:

  • AI Quality Assurance Specialists: Testing AI outputs for accuracy, bias, and regulatory compliance
  • Digital Relationship Managers: Handling complex wealth management with AI-generated insights
  • Automation Process Designers: Identifying workflows suitable for AI augmentation
  • Model Risk Officers: Ensuring AI systems operate within approved parameters
  • Customer Experience Strategists: Designing human-AI interaction patterns

UOB has given all employees access to Microsoft Copilot while deploying more than 300 AI-powered tools across operations. OCBC reports that AI-assisted processes have freed up capacity equivalent to hiring 1,000 additional staff—capacity redirected toward higher-value customer interactions and strategic initiatives rather than eliminated.

One success story circulating in Singapore’s banking community involves a former transaction processor who completed the AI training program and now leads a team designing automated fraud detection workflows. Her deep understanding of payment patterns—knowledge that seemed obsolete when AI took over transaction processing—became invaluable when combined with technical AI literacy. She didn’t lose her job to automation; she gained leverage over it.

Singapore’s Regulatory Philosophy: Partnership Over Policing

What separates Singapore’s approach from virtually every other financial center is how its regulator, the Monetary Authority of Singapore, engages with AI deployment.

In November 2025, MAS released its consultation paper on Guidelines for AI Risk Management—a document that reflects months of collaboration with banks rather than top-down dictates imposed on them. The guidelines focus on proportionate, risk-based oversight rather than prescriptive rules that could stifle innovation.

MAS Deputy Managing Director Ho Hern Shin explained the philosophy: “The proposed Guidelines on AI Risk Management provide financial institutions with clear supervisory expectations to support them in leveraging AI in their operations. These proportionate, risk-based guidelines enable responsible innovation.”

The guidelines address five critical areas:

  1. Governance and Oversight: Board and senior management responsibilities for AI risk culture
  2. AI Risk Management Systems: Clear identification processes and accurate AI inventories
  3. Risk Materiality Assessments: Evaluating AI impact based on complexity and reliance
  4. Life Cycle Controls: Managing AI from development through deployment and monitoring
  5. Capabilities and Capacity: Building organizational competency to work with AI safely
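The consultation paper sets expectations rather than mechanics, so each institution must operationalize the impact, complexity, and reliance dimensions itself. One way a bank might score its AI inventory is sketched below; the weights, scales, and tier cut-offs are invented for illustration and do not come from the MAS guidelines.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    impact: int      # 1-5: consequence of a wrong decision
    complexity: int  # 1-5: opacity of the model (5 = black box)
    reliance: int    # 1-5: degree of autonomy (5 = fully autonomous)

def materiality(uc: AIUseCase) -> str:
    """Map the three assessment dimensions onto review tiers (illustrative)."""
    # Impact is weighted highest: a wrong autonomous decision that
    # harms customers matters more than an opaque but low-stakes model.
    score = uc.impact * 2 + uc.complexity + uc.reliance
    if score >= 16:
        return "high: board-level oversight, human in the loop"
    if score >= 10:
        return "medium: senior management sign-off, periodic review"
    return "low: standard model inventory controls"
```

Under this toy scheme, an autonomous credit-scoring model lands in the high tier and a drafting assistant in the low tier—a rough analogue of the proportionate, risk-based posture the guidelines describe.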

Rather than banning certain AI applications, MAS encourages banks to experiment while maintaining rigorous documentation of safeguards. When Kelvin Chiang presented his agentic AI tools, regulators wanted to understand the thinking process, the oversight mechanisms, and the escalation protocols—not to obstruct deployment, but to ensure responsible implementation.

This collaborative regulatory stance extends to funding. Through the IBF’s programs, Singapore effectively subsidizes workforce transformation, recognizing that individual banks cannot bear the full cost of societal-scale reskilling. PwC research shows organizations offering AI training report 42% higher employee engagement and 38% lower attrition in technical roles—benefits that justify public investment.

MAS Chairman Gan Kim Yong, who also serves as Deputy Prime Minister, framed the imperative at Singapore FinTech Festival: “It is important for us to understand that the job will change and it’s very hard to keep the same job relevant for a long period of time. As jobs evolve, we have to keep the people relevant.”

The ROI Case: Why Massive AI Investment Makes Business Sense

Singapore’s banks aren’t retraining 35,000 workers out of altruism. The business case for AI transformation is overwhelming—provided the workforce can leverage it.

DBS CEO Tan Su Shan described AI adoption as generating a “snowballing effect” of benefits. The bank’s 370 AI use cases, powered by more than 1,500 models, contributed S$750 million in economic value in 2024. She projects this will exceed S$1 billion in 2026, representing a measurable return on years of investment in both technology and people.

The efficiency gains manifest across every banking function:

Customer Service: AI handles routine inquiries, reducing average response time while allowing human agents to focus on complex problems requiring empathy and judgment. DBS’s upgraded Joy chatbot managed 120,000 unique conversations, cutting wait times and boosting satisfaction scores by 23%.

Risk Management: OCBC’s 400 AI models process six million daily decisions related to fraud detection, credit scoring, and compliance monitoring—work that would require thousands of additional staff and still produce inferior results due to human attention limitations.

Wealth Management: AI-powered portfolio analysis and market insights allow relationship managers at private banks to serve more clients at higher quality. What once required a team of analysts now happens in real-time, personalized to each client’s specific situation.

Operations: Back-office processing that once consumed entire departments now runs largely automated, with humans focused on exception handling and quality assurance rather than manual data entry.

According to KPMG research, organizations achieve an average 2.3x return on agentic AI investments within 13 months. Frontier firms leading AI adoption report returns of 2.84x, while laggards struggle at 0.84x—a performance gap that could determine competitive survival.

The transformation isn’t limited to cost savings. DBS now delivers 30 million hyper-personalized insights monthly to 3.5 million customers in Singapore alone, using AI to analyze transaction patterns, life events, and financial behaviors. These “nudges”—reminding customers of favorable exchange rates, suggesting timely financial products, flagging unusual spending—drive engagement and revenue while genuinely helping customers make better decisions.

Global Context: How Singapore’s Model Differs from Western Approaches

The contrast with American and European banking couldn’t be starker.

JPMorgan Chase CEO Jamie Dimon speaks enthusiastically about AI’s opportunities while the bank deploys hundreds of use cases. Yet JPMorgan analysts project global banks could eliminate up to 200,000 jobs within three to five years as AI scales. Goldman Sachs continues warning employees to expect cuts. The narrative centers on efficiency gains and shareholder value, with workforce impact treated as an unfortunate but necessary consequence.

European banks face different pressures. Strict labor protections make large-scale layoffs difficult, but they also complicate rapid workforce transformation. Banks attempt gradual transitions through attrition, but without Singapore’s comprehensive retraining infrastructure, displaced workers often struggle to find equivalent roles.

Singapore’s model succeeds through three unique factors:

1. Government-Industry Alignment: The close relationship between MAS, the National Jobs Council, and major banks enables coordinated action impossible in more fragmented markets. When Singapore decides workforce resilience matters, resources flow accordingly.

2. Social Contract Expectations: Singapore’s three major banks operate with an implicit understanding that their banking licenses come with social responsibilities. Massive layoffs would trigger regulatory and reputational consequences, creating strong incentives for workforce investment.

3. Manageable Scale: With 35,000 domestic banking employees across three major institutions, Singapore can execute comprehensive training that would be logistically impossible for American banks with hundreds of thousands of global staff.

Harvard Business Review analysis suggests Singapore’s approach, while difficult to replicate exactly, offers lessons for other nations: establish clear regulatory expectations around workforce transition, provide financial support for retraining, create industry-specific training partnerships, and measure success not just by AI deployment speed but by workforce adaptation rates.

The 2026-2028 Horizon: What Comes Next

As Singapore approaches the halfway point of its two-year retraining initiative, early results suggest the model works—but also highlight emerging challenges.

DBS has already reduced approximately 4,000 temporary and contract positions over three years, while UOB and OCBC report no AI-related layoffs of permanent staff. The banking sector is discovering that AI changes job composition more than job quantity, at least in the medium term.

The next wave of transformation will test whether current training adequately prepares employees. Gartner forecasts that by 2028, agentic AI will enable 15% of daily work decisions to be made autonomously—up from essentially zero in 2024. As AI agents gain more autonomy, the human role shifts from executor to orchestrator, requiring even higher-order skills.

MAS is already considering how to hold senior executives personally accountable for AI risk management, recognizing that autonomous systems create novel governance challenges. The proposed framework would mirror the Monetary Authority’s approach to conduct risk, where individuals bear clear responsibility for failures.

Singapore is also grappling with an unexpected challenge: Singlish, the local English creole, creates complications for AI natural language processing. Models trained on standard English struggle with Singapore’s unique linguistic patterns, requiring localized AI development—which in turn demands more sophisticated training for local AI specialists.

The broader implications extend beyond banking. If Singapore succeeds in demonstrating that massive AI deployment can coexist with workforce stability through strategic retraining, it provides a template for other industries and nations facing similar disruptions.

McKinsey estimates that AI could put $170 billion in global banking profits at risk for institutions that fail to adapt, while pioneers could gain a 4% advantage in return on tangible equity—a massive performance gap. Singapore’s banks, with their AI-literate workforce, position themselves firmly in the pioneer category.

Lessons for the Global Banking Industry

Singapore’s AI bootcamp experiment offers actionable insights for financial institutions worldwide:

Start with Culture, Not Technology: The most sophisticated AI fails if employees resist or misuse it. Comprehensive training that addresses fears and demonstrates value creates buy-in impossible to achieve through top-down mandates.

Partner with Government: Workforce transformation at this scale exceeds individual firms’ capacity. Public-private partnerships can distribute costs while ensuring industry-wide capability building.

Measure What Matters: Singapore tracks not just AI deployment metrics but workforce adaptation rates, employee satisfaction with AI tools, and the emergence of new hybrid roles. These human-centric measures predict long-term success better than pure technology KPIs.

Reimagine Rather Than Replace: The most successful AI implementations augment human capabilities rather than substituting for them. Relationship managers with AI insights outperform both pure humans and pure machines.

Invest in Adjacent Capabilities: AI literacy alone isn’t enough. Workers need complementary skills—critical thinking, emotional intelligence, creative problem-solving—that AI cannot replicate but can amplify.

Create New Career Paths: As traditional roles evolve, new opportunities in AI quality assurance, model risk management, and human-AI experience design create advancement paths for ambitious employees.

Accept Gradual Transition: Singapore’s two-year timeline, with flexibility for individual banks to move faster or slower based on their readiness, acknowledges that workforce transformation cannot be rushed without creating unnecessary disruption.

The Verdict: A Model Worth Watching

As the financial world watches Singapore’s unprecedented experiment, the stakes extend far beyond one nation’s banking sector. The question isn’t whether AI will transform banking—that transformation is already underway. The question is whether that transformation must inevitably create massive worker displacement, or whether strategic intervention can enable human adaptation at the pace of technological change.

Singapore bets on the latter possibility. By retraining all 35,000 domestic banking employees, by creating robust public-private partnerships, by developing comprehensive curricula that address both technical skills and existential anxieties, the city-state attempts to prove that the future of work doesn’t have to be a zero-sum battle between humans and machines.

Early returns suggest the model works. Banks report measurable productivity gains without mass layoffs. Employees initially resistant to AI training increasingly embrace it as they discover enhanced rather than diminished job prospects. Regulators fine-tune an approach that enables innovation while maintaining safety.

Yet challenges remain. Can retraining keep pace with accelerating AI capabilities? Will the job categories being created prove as numerous and lucrative as those being transformed? What happens to workers who cannot or will not adapt, despite comprehensive support?

These questions lack definitive answers. What Singapore demonstrates beyond doubt is that workforce transformation of this magnitude is possible—that major financial institutions can deploy cutting-edge AI aggressively while simultaneously investing in their people’s futures.

When historians eventually assess the AI revolution’s impact on work, Singapore’s banking sector bootcamp may be remembered as either a successful proof of concept that other nations and industries replicated, or as an admirable but ultimately isolated experiment that proved impossible to scale beyond a small, tightly integrated economy.

The next two years will tell us which.



Copyright © 2025 The Economy, Inc. All rights reserved.
