90,524 Tech Workers Laid Off in Q1 2026 (214 Companies, Roughly 1,000/Day)—But Marc Andreessen Says AI Is Just the 'Silver Bullet Excuse'. Are We Replacing Jobs or Reorganizing?

I need to share something that’s been keeping me up at night, and I’m curious if other leaders are grappling with this too.

The headline: Q1 2026 saw 214 tech companies lay off 90,524 people—that’s roughly 1,000 people a day losing their jobs. 20.4% of these layoffs were explicitly attributed to AI and automation. Yet at the same time, AI/ML job postings surged 163% from 2024 to 2025, with AI Engineer now the fastest-growing job title in the US.

Then Marc Andreessen drops this bomb: AI layoffs are a “farce”—companies are 50-75% overstaffed, and AI is just the “silver bullet excuse” to clean house. His exact words: “AI literally until December was not actually good enough to do any of the jobs that they’re actually cutting.”

So which is it? Are we genuinely replacing roles with AI capability, or are we using AI as narrative cover for cuts we wanted to make anyway?

The Numbers That Don’t Add Up

The data is contradictory in a way that’s hard to ignore:

  • 90,524 tech layoffs in Q1 2026 (up from 29,845 in Q1 2025)
  • 20.4% explicitly attributed to AI/automation (around 18,500 jobs)
  • Oracle: 30,000 people. Block: 10,000→6,000. Meta: 15,000.

But simultaneously:

  • 163% growth in AI/ML job postings
  • 56% wage premium for AI roles over comparable non-AI positions
  • 72% of employers say they can’t find the AI skills they need
  • 3.2:1 demand-to-supply ratio for qualified AI engineers

We’re cutting jobs and desperately hiring for AI roles at the same time. That’s not simple replacement—that’s reorganization.

The Pattern I’m Seeing

Here’s what’s actually happening at my company, and what I’m hearing from peers:

Who’s getting cut:

  • Customer support and service roles
  • Content creation and copywriting
  • Entry-level positions across functions
  • Mid-tier data analysts and QA testers

Who we’re hiring:

  • AI/ML Engineers (at 56% premium)
  • Prompt Engineers and AI Operations specialists
  • MLOps and AI Infrastructure engineers
  • AI Governance and Ethics specialists

We’re not replacing specific roles with AI. We’re restructuring entire functions and calling it “AI efficiency.”

Board Pressure & The Uncomfortable Questions

My board asked me six weeks ago: “What’s our AI workforce strategy?”

That’s when I realized we’re all facing the same pressure. Boards read the headlines. They see competitors announcing AI-driven efficiency. They want to know: are we behind?

But here are the questions I’m struggling with:

  1. If we eliminate a role claiming “AI can do this now”—what metrics prove it actually worked?

    • Do we measure AI resolution rate? Customer satisfaction? Actual cost vs. projected savings?
    • What happens if the AI handles 80% of volume but only 60% of complexity?
  2. What if we cut too early and have to rehire in 6 months?

    • I’ve heard of companies cutting customer onboarding teams, then quietly rehiring 40% as “AI trainers”
    • That’s not efficiency—that’s organizational whiplash
  3. The skills gap nobody talks about:

    • The domain experts we’re cutting (fraud detection, compliance, customer success) have ZERO overlap with the AI roles we’re hiring (Python, TensorFlow, MLOps)
    • We’re not retraining people—we’re swapping entirely different skill sets

My Honest Take (And I Want Yours)

I think the truth is more complex than either narrative:

Some of this is genuine AI replacement. Fraud detection, document processing, certain types of data analysis—AI is legitimately better and faster. Those job cuts are real and probably permanent.

Some of this is pandemic overhiring cleanup. Andreessen’s not wrong that many companies overhired in 2020-2022. We needed a forcing function to right-size. AI gives us that narrative.

And some of this is AI-washing. Using “AI efficiency” as a convenient shortcut to avoid the harder conversation about what we’re actually optimizing for: shareholder value, short-term profitability, competitive positioning.

The problem is we’re not being honest about which is which.

What I’m Asking My Leadership Team

Before we make any more “AI-driven” workforce decisions, I’m requiring answers to these questions:

  1. What’s the AI capability today vs. what we’re betting on in 18 months?
  2. What are the dependencies? (Model improvement, integration complexity, customer acceptance, competitive execution)
  3. What breaks if we’re wrong? (Customer experience, team morale, knowledge loss, rehiring costs)
  4. Are we eliminating roles or restructuring functions? (Be honest about which)
  5. Who’s accountable if the AI can’t actually do the job?

For other CTOs and engineering leaders here: Are you facing similar board pressure? How are you thinking about accountability for AI-driven workforce decisions? And honestly—are we replacing roles with AI, or are we using AI as the excuse to make cuts we wanted anyway?

I don’t have this figured out. But I think we owe it to our teams (and ourselves) to be more honest about what’s actually happening.

Michelle, I’m living this exact situation at my Fortune 500 financial services company right now, and your question about “replacement vs. reorganization” hits home.

We’re 18 months into what leadership calls “AI transformation,” and I can give you three very different patterns I’m seeing across our 40+ engineer organization:

Pattern 1: Legitimate Replacement

Our fraud detection team: 35 people → 20 people. The AI genuinely handles ~65% of cases, and handles them better than humans did. We’ve measured:

  • 47% reduction in false positives
  • 23% faster resolution time
  • 89% accuracy vs. 82% human baseline

This is real AI replacement. Those 15 people aren’t coming back. The remaining 20 handle the complex cases AI escalates.
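For anyone trying to replicate this kind of comparison: the false-positive math is simple once you have labeled outcomes. A minimal sketch with purely illustrative case records (these are not our real data or field names):

```python
# Illustrative sketch: comparing AI vs. human false-positive rates
# on fraud cases. All records and field names are hypothetical.

def false_positive_rate(cases):
    """Share of flagged cases that turned out to be legitimate."""
    flagged = [c for c in cases if c["flagged"]]
    if not flagged:
        return 0.0
    return sum(1 for c in flagged if not c["actual_fraud"]) / len(flagged)

# Toy samples: each dict is one reviewed case with its ground truth.
human_cases = [
    {"flagged": True, "actual_fraud": True},
    {"flagged": True, "actual_fraud": False},
    {"flagged": True, "actual_fraud": False},
    {"flagged": False, "actual_fraud": False},
]
ai_cases = [
    {"flagged": True, "actual_fraud": True},
    {"flagged": True, "actual_fraud": True},
    {"flagged": True, "actual_fraud": False},
    {"flagged": False, "actual_fraud": False},
]

human_fpr = false_positive_rate(human_cases)   # 2 of 3 flags were wrong
ai_fpr = false_positive_rate(ai_cases)         # 1 of 3 flags was wrong
reduction = (human_fpr - ai_fpr) / human_fpr
print(f"False-positive reduction: {reduction:.0%}")  # prints 50% on this toy data
```

The important part is measuring against ground truth after cases resolve, not against the model's own confidence scores.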

Pattern 2: Premature Elimination (The Messy One)

Customer onboarding team: Cut by 60% in Q3 2025. Six months later, we quietly rehired 40% of them as “AI trainers” and “AI operations specialists.”

What actually happened:

  • AI worked great in demos
  • Failed spectacularly with edge cases (international clients, complex account types, regulatory exceptions)
  • Customer satisfaction tanked from 78% to 52%
  • Had to bring humans back to “supervise” the AI

This wasn’t efficiency. This was organizational whiplash. And yes, the people we rehired are the exact same people doing the exact same work, just with new titles and working alongside AI instead of independently.

Pattern 3: Scope Expansion Disguised as Efficiency

Customer success team: 30 people → 30 people, but now handling 65 accounts per person instead of 30.

Leadership says “AI augmentation enabled 2x capacity!” Reality: people are working 50-60 hour weeks, using AI to generate responses faster, but still need human judgment for every interaction. We didn’t replace anyone—we just doubled their workload and called it AI productivity.

Retention on this team: 62% (down from 94% before “AI augmentation”).

The Accountability Framework We Implemented

After the onboarding disaster, I pushed hard for this before ANY future AI-driven workforce changes:

Before eliminating a role:

  1. What’s the AI resolution rate? (Not accuracy—actual end-to-end resolution without human intervention)
  2. What’s customer satisfaction impact? (We measure NPS at 30/60/90 days)
  3. What’s the actual cost savings vs. projected? (Including rehiring, training, morale hit)

6-month checkpoint:

  1. Did we have to rehire? If yes, why?
  2. What broke that we didn’t anticipate?
  3. What’s the knowledge loss cost? (Institutional knowledge we can’t recreate)

Key metric we track now: “Rehire rate” as a lagging indicator of premature AI optimism.
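For anyone who wants to operationalize rehire rate, it falls out of a basic HR export. A minimal sketch, with made-up records and field names (your HRIS schema will differ):

```python
# Hedged sketch: "rehire rate" as a lagging indicator of premature cuts.
# All records below are illustrative, not from a real HR system.
from datetime import date

eliminated = [
    {"employee": "a", "cut": date(2025, 7, 1), "rehired": date(2026, 1, 5)},
    {"employee": "b", "cut": date(2025, 7, 1), "rehired": None},
    {"employee": "c", "cut": date(2025, 7, 1), "rehired": date(2025, 11, 20)},
    {"employee": "d", "cut": date(2025, 7, 1), "rehired": None},
    {"employee": "e", "cut": date(2025, 7, 1), "rehired": None},
]

def rehire_rate(rows, within_days=365):
    """Fraction of eliminated roles refilled by the same person
    within the lookback window."""
    rehired = [
        r for r in rows
        if r["rehired"] and (r["rehired"] - r["cut"]).days <= within_days
    ]
    return len(rehired) / len(rows)

print(f"12-month rehire rate: {rehire_rate(eliminated):.0%}")  # prints 40%
```

Matching on the person is the strict version; matching on the role (same job refilled by anyone) catches the "new title, same work" cases too.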

The Skills Gap Is Real (And Nobody Wants to Talk About It)

Your point about domain experts vs. Python/TensorFlow hit me hard.

Our fraud detection experts: 10-15 years in financial crime, deep regulatory knowledge, instinct built from seeing thousands of cases.

Our new AI engineers: fresh CS grads with MLOps skills, zero financial services experience.

There is ZERO overlap. We’re not retraining people. We’re swapping entirely different skill sets and hoping the AI bridges the domain knowledge gap.

Spoiler: it doesn’t. Yet.

The Brutal Truth

After 18 months, here’s what I believe:

Andreessen’s right about the excuse. At least 50% of our “AI-driven” cuts would have happened anyway. We were overstaffed post-pandemic. AI gave leadership a narrative that sounds better than “we overhired and now we’re fixing it.”

But the remaining 50% is real. Some jobs are genuinely better done by AI. And some restructuring is genuinely enabled by AI augmentation.

The problem: We’re optimizing for Q1 2026 cost savings while calling it AI capability. And when the board asks “what’s our AI workforce strategy,” nobody wants to say “we’re cutting costs and using AI as the justification.”

That would be honest. But it wouldn’t sound strategic.


Michelle, your five questions are exactly right. I’d add one more:

6. What’s our plan when the AI fails at scale? Because it will. And when it does, will we have the institutional knowledge left to recover?

Michelle and Luis, this conversation is hitting on something I’ve been trying to articulate for months: the human cost that nobody’s measuring.

Everyone’s focused on cost savings and productivity gains. But there’s a people dimension to this AI reorganization that we’re systematically ignoring.

The Disproportionate Impact on Entry-Level and Diverse Talent

At our EdTech startup, I track hiring pipeline data obsessively. Here’s what I’m seeing in 2026 vs. 2024:

Entry-level roles in our pipeline:

  • Software Engineer I positions: down 40%
  • Junior product manager openings: down 52%
  • Associate data analyst roles: down 61%

Bootcamp and non-traditional backgrounds:

  • Bootcamp graduate placement rate: 75% → 40%
  • Career switchers hired: 28 in 2024 → 11 in 2025
  • Internship programs: 3 paused “until AI strategy is clear”

We’re choking off the talent pipeline at the entry point.

The Diversity Dimension Nobody Wants to Discuss

Here’s the part that keeps me up at night:

Roles being cut: Customer support (60% women, 40% people of color at our company), content creation, QA, data entry

New “AI roles” being created: Require advanced degrees (MS/PhD in CS, ML, Stats), systematically excluding the diverse pipelines we’ve spent a decade building

I looked at our last 15 “AI role” hires:

  • 13 have advanced degrees from top-tier universities
  • 2 are women
  • 0 are Black (for the record: I’m Black, and I’m not in an AI role)
  • 0 came from bootcamps or non-traditional backgrounds

We’re using AI to restructure ourselves back into the homogeneous tech workforce of 2010. And calling it innovation.

Accountability to Career Pipelines

Michelle, you asked about accountability. Here’s the question I’m asking leadership:

If we eliminate entry-level customer success roles today because “AI can handle tier-1 support,” where do our 2028 senior customer success managers come from?

The typical career path:

  • Entry-level support (1-2 years): learn product, customer pain points, troubleshooting
  • Mid-level support (2-3 years): handle escalations, mentor juniors, identify patterns
  • Senior/Lead (2-3 years): strategic customer relationships, product feedback loops
  • Manager (5-7 years total): lead teams, inform product strategy

We’re cutting the first 1-2 years and expecting to hire directly into year 5-7 roles. The math doesn’t work. The pipeline is broken.

What I’m Seeing: A Live Experiment

At my company, I convinced leadership to run a 6-month pilot:

Team A (aggressive AI adoption):

  • Eliminated 5 junior engineers
  • Gave AI coding assistants to remaining 8 engineers
  • Measured: velocity, deployment frequency, bug rates

Team B (selective augmentation):

  • Kept all 13 engineers (8 mid-level, 5 junior)
  • AI tools available but not mandated
  • Same measurement framework

Results after 6 months:

Team A won the first 60 days:

  • 34% faster feature delivery
  • 28% more deployments
  • Cost savings: $425K (5 junior salaries)

Team B is winning the 180-day horizon:

  • Technical debt ratio: 0.23 (Team A: 0.74)
  • Production incidents: 12 (Team A: 39)
  • Knowledge retention: high (Team A: 3 engineers quit, citing “no learning opportunities”)
  • Team satisfaction: 87% (Team A: 54%)

The 3.2x technical debt increase in Team A is the hidden cost. They shipped faster in Q1. They’re paying the price in Q2-Q3 with bugs, rewrites, and attrition.
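The multiples fall straight out of the pilot numbers; a quick sanity-check sketch (figures copied from the results above):

```python
# Sanity check on the pilot comparison. Values are the ones
# reported above; the "technical debt ratio" definition is our
# internal one (remediation backlog relative to delivered work).
team_a = {"debt_ratio": 0.74, "incidents": 39, "satisfaction": 0.54}
team_b = {"debt_ratio": 0.23, "incidents": 12, "satisfaction": 0.87}

debt_multiple = team_a["debt_ratio"] / team_b["debt_ratio"]      # ~3.2x
incident_multiple = team_a["incidents"] / team_b["incidents"]    # 39/12 = 3.25

print(f"Team A carries {debt_multiple:.1f}x the technical debt")
print(f"and about {incident_multiple:.2f}x the production incidents")
```

Same story either way you slice it: the aggressive team's early velocity win is being repaid with interest.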

The Question Leadership Doesn’t Want to Answer

Are we optimizing for:

  • Q1 2026 cost savings? (Cut juniors, show immediate ROI)
  • 2028-2030 talent pipeline? (Invest in development, build sustainable teams)

Because we can’t have both. And right now, every company I talk to is choosing the first and pretending they’re doing the second.

Luis’s “rehire rate” metric is brilliant. I’d add:

7. What’s the career pipeline impact? Track:

  • Entry-level hiring trends
  • Time-to-senior for remaining juniors
  • Knowledge transfer metrics (are seniors mentoring or just reviewing AI code?)
  • Diversity of “AI roles” vs. “traditional roles”
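All of these are computable from basic hiring data. A minimal sketch of the first one, the entry-level hiring trend, using made-up counts (real numbers would come from your ATS):

```python
# Illustrative sketch: year-over-year trend in entry-level openings.
# Counts are hypothetical, chosen only to show the calculation.
entry_level_openings = {"2024": 120, "2025": 85, "2026": 55}

def yoy_change(series, start, end):
    """Percent change in openings between two years."""
    return (series[end] - series[start]) / series[start]

for start, end in [("2024", "2025"), ("2025", "2026")]:
    print(f"{start}->{end}: {yoy_change(entry_level_openings, start, end):+.0%}")
```

The point isn't the arithmetic; it's that nobody is being asked to report this number, so nobody owns the trend.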

Michelle, your honesty about not having this figured out is exactly what we need more of.

Because the alternative is leaders saying “AI is transforming our workforce!” while quietly destroying the 5-7 year career pipelines that built our industry in the first place.