55,000 Jobs Cut Citing AI, But Only 11% Have Agents in Production — The AI-Washing Disconnect

Here’s a statistic that should concern every engineering leader: in 2025, companies cut 55,000 jobs explicitly citing AI as the reason. That’s 12 times more than just two years prior.

Now here’s the uncomfortable follow-up: only 11% of organizations actually have AI agents running in production. Another 38% are piloting and 30% are “exploring,” but the vast majority haven’t deployed anything real.

So what’s going on?

The “AI-Washing” Phenomenon

TechCrunch coined the term “AI-washing” to describe what’s happening: companies using AI as justification for decisions that would have happened anyway.

The pattern:

  1. Company needs to cut costs (post-pandemic over-hiring, market conditions, etc.)
  2. AI provides a forward-looking, innovation-positive narrative
  3. Layoffs become “transformation” rather than “retrenchment”
  4. Leadership looks strategic instead of reactive

According to Forrester, most companies that cut workers citing AI “don’t have AI ready to fill those roles.” The workers are gone, but the AI replacement hasn’t arrived.

Why This Matters for Your Team

If you’re leading an engineering org, this trend creates several challenges:

For hiring:

  • “AI headcount” is easier to approve than “engineering headcount”
  • Finance is asking: “Can AI do this instead?”
  • You’re potentially competing against a hypothetical AI replacement

For retention:

  • Your best people read the same headlines you do
  • They’re asking themselves: “Am I next?”
  • Talent flows to companies perceived as growth-mode, not cut-mode

For planning:

  • Roadmaps are being built around AI capabilities that don’t exist yet
  • Teams are understaffed based on assumed AI productivity that hasn’t materialized
  • You’re expected to deliver more with less based on vibes

The Data We Should Be Using Instead

Rather than making staffing decisions on AI potential, I’d love to see more companies measure:

  1. Actual productivity metrics — not self-reported, not vendor marketing
  2. Quality trends — are defect rates improving or degrading?
  3. Time-to-production — is code actually shipping faster?
  4. Rework rates — how much AI-generated code gets rewritten?
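As a rough illustration of point 4, rework rate can be approximated from version-control history if lines can be tagged as AI-assisted (which requires tooling most orgs don’t have yet — the schema and the 30-day window here are assumptions, not a standard):

```python
from dataclasses import dataclass

@dataclass
class LineChange:
    """One line of code that landed in a release (illustrative schema)."""
    ai_assisted: bool    # was the line AI-generated? (needs tagging tooling)
    rewritten_30d: bool  # was it modified or deleted within 30 days?

def rework_rate(changes: list[LineChange], ai_assisted: bool) -> float:
    """Fraction of lines rewritten within 30 days, for one cohort."""
    cohort = [c for c in changes if c.ai_assisted == ai_assisted]
    if not cohort:
        return 0.0
    return sum(c.rewritten_30d for c in cohort) / len(cohort)

# Toy data: 4 AI-assisted lines (2 reworked), 4 human lines (1 reworked)
sample = (
    [LineChange(True, True)] * 2 + [LineChange(True, False)] * 2 +
    [LineChange(False, True)] * 1 + [LineChange(False, False)] * 3
)
print(rework_rate(sample, ai_assisted=True))   # 0.5
print(rework_rate(sample, ai_assisted=False))  # 0.25
```

Comparing the two cohorts side by side is the point: a raw rework number means nothing without the human baseline.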

Without this data, we’re making multimillion-dollar workforce decisions on faith.

Questions for Discussion

  1. Are you seeing “AI-washing” at your org or companies you know?
  2. How are you pushing back on headcount decisions based on AI assumptions?
  3. What data do you wish leadership would look at before making AI-related cuts?

The gap between AI narrative and AI reality is creating real pain for real people. I think we need to call it out.

The “AI-washing” framing is spot on. I’ve been calling it “automation theater” in my head, but your term is better.

What frustrates me most: the people making these decisions often don’t understand what AI can and can’t do.

Last quarter, I was in a meeting where a VP said, “We can cut that whole data team — ChatGPT can write SQL now.” I had to explain:

  • Writing SQL ≠ understanding the business logic
  • Someone needs to validate outputs against reality
  • The data model doesn’t document itself
  • ChatGPT hallucinates table names that don’t exist
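One cheap guardrail for the hallucinated-table-name problem: check every table referenced in generated SQL against the live catalog before running it. A minimal sketch — the regex and the schema set are illustrative only; a real check would use a proper SQL parser:

```python
import re

def tables_referenced(sql: str) -> set[str]:
    """Naive extraction of table names after FROM/JOIN.
    Illustration only; production code should use a real SQL parser."""
    return {m.lower() for m in
            re.findall(r"\b(?:from|join)\s+([A-Za-z_][\w.]*)", sql, re.I)}

def unknown_tables(sql: str, schema: set[str]) -> set[str]:
    """Tables the query references that don't exist in the catalog."""
    return tables_referenced(sql) - schema

schema = {"orders", "customers"}  # hypothetical catalog
sql = "SELECT * FROM orders o JOIN customer_ltv c ON o.cust_id = c.id"
print(unknown_tables(sql, schema))  # {'customer_ltv'} — hallucinated
```

It won’t catch wrong business logic, but it turns the most embarrassing failure mode into an automatic rejection instead of a bad report.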

We kept the team, but only after an embarrassing amount of education. How many orgs don’t have someone willing to push back?

The Compounding Problem

What worries me is the second-order effect. If companies:

  1. Cut workers based on AI potential
  2. Realize AI can’t actually do those jobs
  3. Scramble to rehire

…they won’t get the same talent back. Those people will have moved on, and they’ll remember the betrayal.

You don’t get to say “AI can do your job” in January and then send a LinkedIn recruiter message in September.

My Practical Pushback

When AI comes up in staffing discussions, I now ask:

  • “Show me the pilot data from our org, not vendor case studies”
  • “What’s the timeline to production deployment? Let’s staff to that.”
  • “Who maintains the AI systems? That’s headcount too.”

Usually this surfaces that the AI strategy is more aspirational than operational.

Adding some data rigor to this conversation:

The 11% figure for AI agents in production comes from Deloitte’s research. But even that number deserves scrutiny.

What counts as “in production”?

I’ve seen companies claim AI is “in production” when they have:

  • A chatbot handling FAQ routing (not replacement, just routing)
  • Code completion tools (Copilot, etc.) installed on developer machines
  • An internal prototype that three people use

That’s not the same as “AI agents doing work humans used to do.” The bar for claiming AI transformation is remarkably low.

The measurement problem @eng_director_luis raised is critical

From my work on ML systems, I can tell you: most companies don’t have the instrumentation to actually measure AI productivity impact.

To properly measure, you’d need:

  1. Baseline productivity metrics from before AI tools
  2. Controlled comparison (some teams with AI, some without)
  3. Quality metrics alongside speed metrics
  4. Long-term tracking (not just initial novelty period)
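To make point 2 concrete, here’s what a controlled comparison might report once you have matched teams with and without the tool — tracking a quality metric alongside the speed metric. All numbers are made up for illustration:

```python
from statistics import mean

# Hypothetical per-team quarterly metrics (illustrative numbers only)
with_ai    = {"cycle_days": [3.1, 2.8, 3.4], "defects_per_kloc": [1.9, 2.4, 2.1]}
without_ai = {"cycle_days": [3.6, 3.9, 3.3], "defects_per_kloc": [1.5, 1.7, 1.6]}

def compare(metric: str) -> float:
    """Percent change of the AI cohort relative to the control cohort."""
    treated, control = mean(with_ai[metric]), mean(without_ai[metric])
    return round(100 * (treated - control) / control, 1)

print(compare("cycle_days"))        # -13.9 → shipping faster...
print(compare("defects_per_kloc"))  # 33.3  → ...but with more defects
```

Even this toy example shows why speed metrics alone mislead: a cycle-time win can hide a quality regression that only the paired metric reveals.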

Almost nobody has this. So decisions are being made on:

  • Vendor claims
  • Self-reported surveys
  • Executive gut feel

That’s not data-driven decision making. That’s vibes.

The honest answer

We genuinely don’t know yet whether AI will replace significant portions of the white-collar workforce. The technology is real, but the deployment is immature.

Cutting 55,000 jobs on the strength of immature technology is reckless. Some of those cuts will prove justified. Many won’t. And we’re about to find out which is which.

This thread is validating a lot of concerns I’ve had privately.

One angle I want to add: the infrastructure gap.

Even if AI could do the work, many companies can’t deploy it because:

  1. Data isn’t ready — AI needs clean, accessible data. Most enterprise data is siloed, inconsistent, and poorly documented.

  2. Systems aren’t integrated — AI agents need API access to actually do things. Legacy systems often don’t have APIs.

  3. Governance isn’t in place — Who approves what the AI does? What are the audit requirements? Most orgs haven’t figured this out.

  4. Security isn’t ready — AI agents need credentials, access controls, monitoring. The security team is already overwhelmed.

I’ve seen companies announce AI transformation initiatives with timelines that assume all of this is solved. Spoiler: it’s not.

The hidden cost

The infrastructure work to make AI useful? That’s often 2-3x the cost of the AI itself. And it requires… engineers. The same engineers you just laid off.

The irony is thick.