Workday quietly announced 400 customer support role reductions this month, citing AI agents taking over Tier 1 support functions. This isn’t an outlier — it’s the beginning of a pattern that every SaaS company will face.
As a CTO who has built customer success and support organizations, I’m watching this closely. The implications are significant both for how we build products and how we think about the human side of our companies.
What AI Agents Are Actually Doing Well
Based on what I’m seeing across the industry:
- Password resets and account recovery
- FAQ-style questions with documented answers
- Basic troubleshooting with clear decision trees
- Ticket routing and initial categorization
- Status checks and simple data lookups
These categories probably represent 40-60% of Tier 1 volume at most SaaS companies. When implemented well, AI handles them faster and with better availability than humans can.
The Uncomfortable Question: What’s Left?
If AI handles the routine work, what remains for human support?
Tier 2 becomes the new Tier 1. But Tier 2 has traditionally required more expertise, more context, and more judgment. You need people who:
- Understand edge cases and exceptions
- Can navigate ambiguous situations
- Build relationships with customers
- Escalate appropriately when things go wrong
The skills gap is real: A Tier 1 agent trained to follow scripts doesn’t automatically become a Tier 2 agent who exercises judgment. Workday and others will need to either retrain those workers or replace them.
Impact on Product Teams
This affects how we build products:
- Error messages matter more — if AI support needs to route from error codes, those codes need to be informative
- Documentation becomes critical — AI agents are only as good as the knowledge base they’re trained on
- Edge cases multiply — with routine cases automated, every human interaction is now an edge case
- Support data changes — we lose visibility into the “simple” cases that used to signal product friction
The Human Cost Question
I want to be direct about this: 400 people lost their jobs. The productivity gains are real, but so are the disruptions to individual lives.
As technical leaders, we have a responsibility to think about:
- How we retrain affected workers
- What career paths exist for support professionals
- Whether we’re moving too fast
I don’t have clean answers here. But pretending this is purely a business optimization conversation feels wrong.
What are others seeing? How are your companies handling the transition of support work to AI agents? How are you thinking about the people affected?
Michelle, this is one of the most important topics for product leaders to grapple with right now.
I want to add a product strategy perspective that I think is being missed in most AI-support conversations:
Support interactions are a goldmine of product insight.
Every Tier 1 ticket is a signal. “How do I reset my password?” tells you your self-service UX is failing. “I can’t find this feature” tells you your navigation is broken. “Why doesn’t this work?” often reveals bugs before they become critical.
When we automate those away with AI agents, we solve the symptom but we stop hearing the signal.
The data quality problem:
- AI agents typically classify tickets into categories
- Those categories are determined by the AI model’s training
- If the AI doesn’t have a category for “confusing UX,” you’ll never see that insight
- You see what the AI is configured to see
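The blind spot above can be sketched in a few lines. This is a deliberately simplified keyword classifier, not any vendor's model; the point is the fallback bucket, which is the only thing standing between "confusing UX" feedback and silent data loss:

```python
# Sketch of the category blind spot. The classifier only surfaces the
# categories it was configured with; an explicit fallback bucket preserves
# the signal it has no label for. Categories and keywords are illustrative.
CATEGORIES = {
    "password": "account_access",
    "invoice": "billing",
    "error": "bug_report",
}

def classify(ticket_text, fallback="needs_human_label"):
    """Map a ticket to a known category, or to a human-review bucket."""
    text = ticket_text.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in text:
            return category
    # "Confusing UX" matches no keyword. Without this bucket it would be
    # silently folded into whichever configured category scores highest.
    return fallback

classify("The new dashboard layout is confusing")  # lands in the human bucket
```

The fallback rate itself becomes a metric: if it climbs, customers are telling you something your taxonomy has no word for.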
What I’d recommend for product teams:
- Create a “signal extraction” layer that analyzes all AI-resolved tickets for product friction
- Periodically route a random sample of AI-handled tickets to humans for pattern recognition
- Track resolution time trends — if AI tickets are getting resolved but the same questions keep coming, something is wrong
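The first two recommendations can be sketched together. The ticket schema and friction keywords below are illustrative assumptions, not a real system; the idea is simply a keyword pass over AI-resolved tickets plus a random sample routed to humans:

```python
import random

# Hypothetical ticket records; field names are illustrative, not any
# vendor's schema.
tickets = [
    {"id": 101, "ai_resolved": True, "text": "How do I reset my password?"},
    {"id": 102, "ai_resolved": True, "text": "I can't find the export feature"},
    {"id": 103, "ai_resolved": False, "text": "Invoice dispute"},
]

# Phrases that hint at product friction rather than a one-off question.
FRICTION_HINTS = ("can't find", "confusing", "doesn't work", "why doesn't")

def extract_signals(all_tickets):
    """Flag AI-resolved tickets whose text suggests UX friction."""
    return [t for t in all_tickets
            if t["ai_resolved"]
            and any(h in t["text"].lower() for h in FRICTION_HINTS)]

def sample_for_human_review(all_tickets, rate=0.05, seed=None):
    """Route a random slice of AI-handled tickets to a human review queue."""
    resolved = [t for t in all_tickets if t["ai_resolved"]]
    k = max(1, int(len(resolved) * rate))
    return random.Random(seed).sample(resolved, k)

flagged = extract_signals(tickets)                # ticket 102 gets flagged
review_queue = sample_for_human_review(tickets)   # at least one ticket sampled
```

In practice the keyword pass would be an LLM or embedding-based pass, but even this crude version restores some of the visibility the thread is worried about losing.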
On the human side: I agree this isn’t just a business optimization. My company has started requiring any AI deployment that reduces headcount to include a retraining budget. It’s not enough, but it’s something.
The question I’m wrestling with: if AI handles the routine work, but the routine work is where junior support people learned the business, how do we build career ladders that don’t start with “already expert”?
I’ll bring the uncomfortable finance perspective here.
The math is brutal and companies will do it anyway.
- Average fully-loaded cost of a Tier 1 support agent: $60-80K/year
- AI agent platform cost for equivalent capacity: $15-25K/year
- Savings per replaced agent: $35-65K/year
400 agents × $50K average savings = $20M annual cost reduction.
For a public company like Workday, that’s real earnings impact. Analysts notice. Stock responds.
But here’s what the simple math misses:
- Transition costs — retraining, severance, morale impact, productivity dips
- Quality degradation risk — if AI handles complex cases poorly, customer churn
- Hidden labor — someone still needs to train, monitor, and improve the AI
- Regulatory risk — depending on industry, AI support may have compliance implications
When I model these out properly, the real savings are usually 40-60% of the headline number. Still significant, but not the slam dunk the press releases suggest.
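Here is that adjustment as a back-of-the-envelope model. All figures are the thread's illustrative numbers plus assumed drag estimates, not Workday's actuals; with these particular assumptions the adjusted figure lands at the midpoint of the 40-60% range:

```python
# Headline vs. realistic savings. Every number here is illustrative.
agents_replaced = 400
savings_per_agent = 50_000          # midpoint of the $35-65K range above
headline_savings = agents_replaced * savings_per_agent   # $20M

# Drags the simple math misses (assumed annualized figures):
transition_costs = 6_000_000        # retraining, severance, productivity dip
hidden_labor = 2_000_000            # staff to train, monitor, improve the AI
quality_risk_reserve = 2_000_000    # churn exposure from mishandled cases

realistic_savings = (headline_savings - transition_costs
                     - hidden_labor - quality_risk_reserve)
print(f"Headline: ${headline_savings:,}")
print(f"Adjusted: ${realistic_savings:,} "
      f"({realistic_savings / headline_savings:.0%} of headline)")
```

Swap in your own drag estimates; the useful exercise is forcing each one onto a line item instead of leaving it implicit.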
The question I think companies should ask:
Instead of “how many support agents can we replace with AI,” what if we asked “how can we use AI to make support a competitive advantage?”
Imagine: same headcount, but AI handles routine work so humans can focus on relationship building. Customer satisfaction goes up. Retention improves. Expansion revenue increases.
The ROI on that model might be higher than pure cost-cutting. But it’s harder to quantify in a board deck, so companies default to headcount reduction.
In my network, roughly 25% of planned AI investments are being deferred to 2027 because CFOs want better ROI frameworks. This is part of why.
I want to raise a security and privacy concern that often gets overlooked in these AI support conversations.
AI support agents have access to everything.
To be helpful, they need:
- Account information
- Transaction history
- Usage patterns
- Sometimes PII to verify identity
This creates a new attack surface:
- Prompt injection attacks — malicious users crafting inputs to manipulate AI behavior
- Data exfiltration — AI agents revealing information they shouldn’t based on clever questioning
- Social engineering at scale — attackers can test manipulation techniques against AI much faster than human agents
When I do security assessments on AI support systems, the most common vulnerability I find is inconsistent access control. The AI has access to data that no single human agent would have, because it needs to route tickets across all domains.
Real example I saw last month: A user asked an AI support agent “What was the last transaction on my account?” The AI helpfully pulled the transaction. But the user had guessed another customer’s account number, and the AI didn’t have the same verification instincts a human would have.
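The fix for that failure mode is an ownership check that gates every account-data tool the AI can call, rather than relying on the model's judgment. A minimal sketch, with invented account data and function names:

```python
# Ownership check gating an account-data tool. The AI agent can only call
# this function; it never queries the account store directly. All names
# and data are illustrative.
ACCOUNTS = {
    "acct-123": {"owner": "user-alice", "last_txn": "$42.00 coffee"},
    "acct-456": {"owner": "user-bob", "last_txn": "$900.00 rent"},
}

def get_last_transaction(session_user, account_id):
    """Disclose data only for accounts the authenticated session owns."""
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != session_user:
        # Identical response for "not found" and "not yours" avoids
        # leaking which account numbers exist.
        return "No account data available for this request."
    return account["last_txn"]
```

The key design choice is that `session_user` comes from the authenticated session, not from anything the user typed into the chat, so a guessed account number returns nothing regardless of how the prompt is phrased.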
Recommendations for teams deploying AI support:
- Rate limit sensitive data disclosures per session
- Implement AI-specific authentication challenges for high-risk requests
- Log and monitor for unusual question patterns
- Build kill switches for immediate human takeover
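The first and last recommendations compose naturally: the rate limiter doubles as the trigger for human takeover. A minimal sketch with an assumed per-session threshold:

```python
from collections import defaultdict

# Per-session cap on sensitive disclosures, wired to a human-takeover flag.
# The threshold is illustrative; tune it to your risk tolerance.
MAX_SENSITIVE_PER_SESSION = 3

class SessionGuard:
    def __init__(self, limit=MAX_SENSITIVE_PER_SESSION):
        self.limit = limit
        self.counts = defaultdict(int)
        self.escalated = set()      # sessions frozen pending human takeover

    def allow_sensitive(self, session_id):
        """Return True if the session may receive another sensitive answer."""
        if session_id in self.escalated:
            return False
        self.counts[session_id] += 1
        if self.counts[session_id] > self.limit:
            # Kill switch: freeze AI responses and hand off to a human.
            self.escalated.add(session_id)
            return False
        return True

guard = SessionGuard()
results = [guard.allow_sensitive("s1") for _ in range(5)]
# First three requests pass; the fourth trips the kill switch.
```

A production version would also log the escalation event for the "unusual question patterns" monitoring the list above calls for.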
The 400 Tier 1 agents who lost their jobs were also 400 pairs of eyes watching for suspicious behavior. AI agents don’t have that same intuition yet.
This isn’t anti-AI — it’s pro-thoughtful-deployment.