Accenture cut 11,000 employees in September 2025 with an explicit justification I haven’t been able to stop thinking about: They “couldn’t reskill on AI” fast enough.
As VP Product at a Series B startup facing investor pressure to “show AI leverage,” I need to ask: Is AI fluency now table stakes for employment? Or is this an unfair standard that’s becoming a class divider?
The Reskilling Question Nobody’s Answering
When Accenture says 11,000 people “couldn’t reskill on AI,” what does that actually mean?
- Did they refuse to try?
- Did they try but fail to meet an arbitrary timeline?
- Were their roles fundamentally incompatible with AI-augmented versions?
- Were they given adequate time, training, and support?
We don’t know. And that opacity is the problem.
“Couldn’t reskill” could mean:
- A 6-week self-directed Coursera course with no support
- An 18-month structured program with mentorship that genuinely didn’t work
- Predetermined headcount targets dressed up as a “skill assessment”
Without transparency, we can’t differentiate genuine inability from unrealistic timelines or pretextual criteria.
The Pattern I’m Seeing
- Block: 4,000 jobs “due to AI automation capability”
- Meta: potential 15,000 cuts to fund AI investment
- Accenture: 11,000 who “couldn’t reskill on AI”
The common thread: reskilling is offered in theory; jobs are eliminated in practice.
Cloud Migration vs AI Transition
When cloud migration happened in the 2010s, companies:
- Hired consultants and trainers
- Ran structured 12-24 month programs
- Gave teams a transition period with dual-stack support
- Invested heavily in workforce transformation
The AI transition in the 2020s:
- “Reskill or leave” as default expectation
- Individual burden to self-train
- 3-6 month timelines (vs 12-24 months for cloud)
- Minimal company investment in transformation
Why the difference?
I suspect: cloud migration was an infrastructure change affecting company capability, so companies owned the cost. The AI transition is framed as individual productivity, so the burden falls on workers.
The Leadership Responsibility Question
If AI transformation is a strategic imperative, shouldn’t the company invest in workforce transformation?
When we deployed a new CRM at our startup, we didn’t say: “Learn Salesforce on your own time or we’ll fire you.”
We:
- Provided training budget
- Allocated learning time during work hours
- Hired consultants to accelerate adoption
- Measured success over months, not weeks
Why isn’t AI adoption treated the same way?
The Counterargument (Being Honest)
Some roles ARE genuinely obsolete. AI doesn’t augment them; it replaces them.
Example: if AI writing tools can generate marketing copy at 90% of the quality of human writers in 10% of the time, what’s the reskilling path for copywriters?
“Learn to prompt AI” isn’t a full-time job. That’s not reskilling; that’s an acknowledgment that the role no longer exists.
But how do we distinguish roles that are:
- Augmented by AI (humans + AI > either alone)
- Partially automated (fewer humans needed, different team composition)
- Fully replaced (role genuinely obsolete)
Companies aren’t being transparent about which bucket different roles fall into.
The Class Divider I’m Worried About
AI fluency correlates with:
- Educational privilege (access to training resources)
- Age (younger workers adopted AI tools earlier)
- Tech sector exposure (some industries slower to adopt)
- English proficiency (most AI tools are optimized for English)
This creates a permanent two-tier workforce:
- Tier 1: AI-adjacent roles, growing, well-compensated
- Tier 2: AI-replaceable roles, declining, increasingly precarious
And we’re not being honest about which tier different roles occupy.
What Are Fair Expectations?
Here’s what I’m struggling with as a leader:
Our investors expect us to:
- “Show AI leverage in workforce planning”
- Reduce headcount while maintaining/increasing output
- Hire fewer, more “AI-fluent” people
My team expects:
- Clear expectations about AI skills needed
- Training and support to develop those skills
- Reasonable timelines for proficiency
- Honesty about whether their roles are safe
I can’t reconcile these expectations.
If I’m honest about which roles are at risk, people leave preemptively (a death spiral).
If I’m not honest, I’m setting people up for failure (unethical).
What Would Responsible Reskilling Look Like?
Based on our experience and analysis of peer companies:
Minimum standards:
- 12-18 month learning period with structured support
- $2K+ per employee in training investment
- Protected learning time (not “do this on top of your job”)
- Individual assessment with coaching, not binary pass/fail
- Transparency about which roles are augmented vs replaced
After 18 months with genuine support, if someone can’t reach baseline AI fluency, that’s probably a role mismatch, not inability.
But Accenture cut 11,000 in September 2025. When did their “reskilling” program start? What did it involve?
I’m betting it was a 6-week Coursera subscription, not an 18-month structured program.
Questions for the Community
For leaders: How are you handling AI reskilling expectations? What timeline and investment are you committing to?
For ICs: What training and support would actually help vs feel performative?
For everyone: Is AI fluency now table stakes, or are we using it as a pretextual firing criterion?
I don’t have answers. I’m trying to navigate investor pressure for “AI efficiency” while treating our team ethically.
Those goals might be incompatible. And that terrifies me.