I need to share something that’s been keeping me up at night, and I’m curious if other leaders are grappling with this too.
The headline: Q1 2026 saw 214 tech companies lay off 90,524 people, just over 1,000 people a day. 20.4% of these layoffs were explicitly attributed to AI and automation. Yet at the same time, AI/ML job postings surged 163% from 2024 to 2025, with AI Engineer now the fastest-growing job title in the US.
Then Marc Andreessen drops this bomb: AI layoffs are a “farce”—companies are 50-75% overstaffed, and AI is just the “silver bullet excuse” to clean house. His exact words: “AI literally until December was not actually good enough to do any of the jobs that they’re actually cutting.”
So which is it? Are we genuinely replacing roles with AI capability, or are we using AI as narrative cover for cuts we wanted to make anyway?
The Numbers That Don’t Add Up
The data is contradictory in a way that’s hard to ignore:
- ~78,557 tech layoffs in Q1 2026 (up from 29,845 in Q1 2025)
- Nearly 50% attributed to AI/automation (around 37,638 jobs)
- Oracle: 30,000 people. Block: 10,000→6,000. Meta: 15,000.
But simultaneously:
- 163% growth in AI/ML job postings
- 56% wage premium for AI roles over comparable non-AI positions
- 72% of employers say they can’t find the AI skills they need
- 3.2:1 demand-to-supply ratio for qualified AI engineers
We’re cutting jobs and desperately hiring for AI roles at the same time. That’s not simple replacement—that’s reorganization.
The Pattern I’m Seeing
Here’s what’s actually happening at my company, and what I’m hearing from peers:
Who’s getting cut:
- Customer support and service roles
- Content creation and copywriting
- Entry-level positions across functions
- Mid-tier data analysts and QA testers
Who we’re hiring:
- AI/ML Engineers (at 56% premium)
- Prompt Engineers and AI Operations specialists
- MLOps and AI Infrastructure engineers
- AI Governance and Ethics specialists
We’re not replacing specific roles with AI. We’re restructuring entire functions and calling it “AI efficiency.”
Board Pressure & The Uncomfortable Questions
My board asked me six weeks ago: “What’s our AI workforce strategy?”
That’s when I realized we’re all facing the same pressure. Boards read the headlines. They see competitors announcing AI-driven efficiency. They want to know: are we behind?
But here are the questions I’m struggling with:
- If we eliminate a role claiming “AI can do this now,” what metrics prove it actually worked?
- Do we measure AI resolution rate? Customer satisfaction? Actual cost vs. projected savings?
- What happens if the AI handles 80% of the volume but only 60% of the complex cases?
- What if we cut too early and have to rehire in 6 months?
- I’ve heard of companies cutting customer onboarding teams, then quietly rehiring 40% as “AI trainers”
- That’s not efficiency—that’s organizational whiplash
- The re-employment reality:
- Median time for laid-off tech workers to find new jobs: 4.7 months in 2026, up from 3.2 months in 2024
- What happens to the people we “replaced” with AI that wasn’t actually ready to do their jobs?
- The skills gap nobody talks about:
- The domain experts we’re cutting (fraud detection, compliance, customer success) have ZERO overlap with the AI roles we’re hiring (Python, TensorFlow, MLOps)
- We’re not retraining people—we’re swapping entirely different skill sets
My Honest Take (And I Want Yours)
I think the truth is more complex than either narrative:
Some of this is genuine AI replacement. Fraud detection, document processing, certain types of data analysis—AI is legitimately better and faster. Those job cuts are real and probably permanent.
Some of this is pandemic overhiring cleanup. Andreessen’s not wrong that many companies overhired in 2020-2022. We needed a forcing function to right-size. AI gives us that narrative.
And some of this is AI-washing. Using “AI efficiency” as a convenient shortcut to avoid the harder conversation about what we’re actually optimizing for: shareholder value, short-term profitability, competitive positioning.
The problem is we’re not being honest about which is which.
What I’m Asking My Leadership Team
Before we make any more “AI-driven” workforce decisions, I’m requiring answers to these questions:
- What’s the AI capability today vs. what we’re betting on in 18 months?
- What are the dependencies? (Model improvement, integration complexity, customer acceptance, competitive execution)
- What breaks if we’re wrong? (Customer experience, team morale, knowledge loss, rehiring costs)
- Are we eliminating roles or restructuring functions? (Be honest about which)
- Who’s accountable if the AI can’t actually do the job?
For other CTOs and engineering leaders here: Are you facing similar board pressure? How are you thinking about accountability for AI-driven workforce decisions? And honestly—are we replacing roles with AI, or are we using AI as the excuse to make cuts we wanted anyway?
I don’t have this figured out. But I think we owe it to our teams (and ourselves) to be more honest about what’s actually happening.