What 245K Tech Layoffs Taught Me About Leading Through AI Transformation

The number hits you first: 245,000 tech jobs eliminated in 2025. That’s not a statistic—that’s a quarter million people who went to bed one night employed and woke up the next morning having to explain to their families what comes next.

As a VP of Engineering leading a team through this AI transformation era, I’ve spent the last year wrestling with a question that keeps me up at night: How do we harness AI’s potential without betraying the people who built everything we have?

The Invisible Crisis Behind the Headlines

Here’s what the headlines miss: IBM just reported that voluntary attrition dropped to under 2%—down from a typical 7%. That’s the lowest rate in 30 years. Sound good? It’s actually terrifying.

When people don’t leave, companies don’t backfill. When companies don’t backfill, there are no job openings. When there are no openings, those 245,000 laid-off workers stay unemployed. We’ve created a vicious cycle where market stagnation feeds itself.

And here’s the AI piece: of those 245,000 job cuts, approximately 55,000 were directly attributed to artificial intelligence. That’s 22% of layoffs where executives explicitly said, “AI can do this work now.”

What We’re Getting Wrong

I lead engineering at an EdTech startup where we’ve aggressively adopted AI tools. Our developers use Copilot, our operations team uses AI for customer support, and our data scientists are building AI-powered personalization. We’re seeing real productivity gains—30-40% improvement in some workflows.

But here’s what we’ve refused to do: translate productivity gains directly into headcount reductions.

Why? Because I’ve seen what happens when companies bet on AI’s POTENTIAL rather than its PERFORMANCE. Research from Harvard Business Review shows that companies are laying people off based on what they think AI will be able to do, not what it can actually do today. That’s speculation-driven workforce planning, and it’s destroying lives.

Instead, we’ve taken a different path: we upskill. When AI automates part of someone’s role, we help them level up to more strategic work. Our junior engineers are learning to use AI tools to punch above their weight. Our senior engineers are learning to architect systems that leverage AI effectively.

It’s slower. It’s more expensive in the short term. But we’re building capabilities, not just cutting costs.

The Transparency Problem

The tech industry has a language problem. We say “rightsizing” when we mean layoffs. We say “organizational restructuring” when we mean firing people. We say “AI-driven efficiency” when we mean “we’re replacing you with software.”

I’ve been in the leadership meetings where these decisions get made. The pressure is immense. VCs expect to see AI-driven productivity gains. Boards want to know why headcount is growing when AI should be reducing it. There’s an unspoken assumption that if you’re not cutting staff, you’re not innovating.

But our teams aren’t stupid. They can see the writing on the wall. Employee concerns about AI-driven job loss have skyrocketed from 28% in 2024 to 40% in 2026. When 40% of your workforce fears they’ll be replaced, you don’t have a productivity problem—you have a trust problem.

So here’s what I’m doing differently: radical transparency. When we adopt new AI tools, I tell my team exactly what it means. “This tool will automate 30% of the code review process. Here’s how that changes your role. Here’s what new skills we’ll invest in. Here’s our commitment: no one loses their job because they helped us get more efficient.”

Does this limit our flexibility? Yes. Does it cost more? Absolutely. But I’ve also watched companies do the opposite—cut staff aggressively based on AI promises—and then quietly rehire 6 months later when the AI couldn’t deliver. Forrester predicts that 50% of AI-attributed layoffs in 2026 will be quietly rehired, often offshore or at significantly lower salaries.

That’s not innovation. That’s wage compression disguised as transformation.

A Framework for AI-Era Leadership

For leaders navigating this, here’s what’s working for me:

1. Commit to No Speculation-Based Layoffs: Only reduce headcount based on what AI can do TODAY, not what you think it might do in 12 months.

2. Upskill, Don’t Replace: When AI automates part of a role, invest in helping that person move to higher-value work.

3. Use Clear Language: No euphemisms. If you’re cutting jobs, say so. If you’re betting on AI, explain exactly what that means.

4. Measure What Matters: Track employee sentiment alongside productivity gains. A demoralized team will undermine any efficiency gains.

5. Build Safety Nets: Create internal mobility programs, reskilling budgets, and transition support before you need them.

The Question I Can’t Answer

Here’s what I’m still struggling with: We’re a startup that needs to grow efficiently to survive. Our competitors are cutting aggressively and showing impressive margin improvements. Our investors see their portfolio companies doing more with less.

How do we balance being a sustainable business with being an ethical employer?

I don’t have the perfect answer. What I do know is that the tech industry built its reputation on innovation and ambition. If we can’t figure out how to harness AI without destroying the careers of the people who built this industry, we’ve failed at the most important innovation challenge of our generation.

The 245,000 people who lost jobs in 2025 deserve better than corporate platitudes. They deserve leaders who are willing to make harder choices—even when those choices cost more and take longer.

So I’m asking this community: How are you handling this? What’s working? What’s failing? And most importantly, how do we build AI-powered companies without breaking the people who make them possible?

Keisha, this hits home. I’ve been leading a team of 40+ engineers through similar pressures at my financial services company, and your framework resonates deeply—especially the point about betting on AI’s potential versus its performance.

The Reality in Financial Services

We’re seeing the same pattern you describe, but with an added layer: regulatory compliance. Every AI tool we adopt has to go through extensive review. That’s actually been a blessing in disguise—it forces us to be honest about what AI can ACTUALLY do versus what the vendor pitch decks promise.

Last quarter, we piloted an AI code review tool that was supposed to “replace 70% of human review effort.” The reality? It caught about 35% of what our senior engineers catch, and it generated so many false positives that engineers started ignoring its recommendations. We kept the tool but reframed it: “AI as first-pass filter, humans make the real calls.”

The oft-cited projection that 30% of jobs could be automated by 2030 is constantly thrown at me by our CFO. But here’s what I’ve learned: “potentially automated” and “should be automated” are very different things.

Building an AI Capability Ladder

Rather than replace people, we created what I call an “AI capability ladder” for my team:

Level 1 (AI-Aware): Understands what AI can and can’t do; uses it as a productivity tool
Level 2 (AI-Fluent): Can evaluate AI tools and integrate them into workflows effectively
Level 3 (AI-Architect): Designs systems that leverage AI; understands prompt engineering
Level 4 (AI-Strategic): Identifies opportunities for AI application; leads adoption initiatives

Every engineer gets training budget and time to move up this ladder. Our retention among engineers who’ve moved to Level 3+ is 95%. People who feel they’re growing don’t leave.

But here’s the tension you touched on: upper management sees this and asks, “If they’re all AI-fluent now, shouldn’t we need fewer engineers?”

The Pressure from Above

This is where your point about transparency becomes critical. I’ve had to have some uncomfortable conversations with executives:

“Yes, our engineers are 40% more productive with AI tools. No, that doesn’t mean we should cut 40% of the team. It means we can deliver 40% more value, take on more ambitious projects, and reduce our crushing tech debt.”

The counterargument I make: understaffing burns out the team we have. Our attrition was 12% two years ago. We invested in AI upskilling instead of headcount reduction. Attrition dropped to 4%. The cost of replacing a senior engineer (6-9 months of lost productivity, recruiting costs, onboarding) far exceeds the cost of upskilling.
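
Luis's retention math above can be sketched as a quick back-of-envelope model. Every figure below is an illustrative placeholder, not his company's actual numbers:

```python
# Back-of-envelope comparison: replacing a senior engineer vs. upskilling one.
# All inputs are illustrative assumptions for the sake of the argument.

def replacement_cost(salary, recruiter_fee_pct=0.25, ramp_months=7.5):
    """One-time cost of losing and replacing a senior engineer."""
    recruiting = salary * recruiter_fee_pct          # assumed agency fee
    lost_productivity = salary * (ramp_months / 12)  # midpoint of a 6-9 month ramp
    return recruiting + lost_productivity

def upskilling_cost(salary, training_budget=10_000, training_time_pct=0.10):
    """A year of AI-upskilling investment for that same engineer."""
    return training_budget + salary * training_time_pct  # budget + 10% of work time

senior_salary = 180_000  # hypothetical
print(f"Replace: ${replacement_cost(senior_salary):,.0f}")
print(f"Upskill: ${upskilling_cost(senior_salary):,.0f}")
```

With these placeholder inputs, replacement costs several times what a year of upskilling does, which is exactly the comparison Luis is putting in front of finance.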

But I’ll be honest: this argument doesn’t always win. I’ve had three reqs approved, then frozen, then “reprioritized” in the last six months. The pressure to show “AI-driven efficiency gains” through headcount reduction is intense.

The Questions That Keep Me Up

Your question about balancing sustainable business with ethical employment is the one I can’t answer either. Here’s what haunts me:

  • How do you push back on executive pressure while staying competitive? Our competitors are cutting. Their margins look better. Their stock performs better. How long can we hold out doing the “right thing” if the market rewards the opposite?

  • What happens when AI actually gets good enough? Right now, I can honestly say AI can’t replace my senior architects. But in 2-3 years? I don’t know. And if I train my team to use AI effectively, am I inadvertently training them to automate themselves?

  • How do we protect our first-generation engineers? I have several engineers on my team who are the first in their families to work in tech. They’re supporting parents, putting siblings through school. The “learn AI or get left behind” mandate sounds reasonable until you realize some people don’t have nights and weekends to upskill—they’re working second jobs or caring for family.

What I’m Committing To

Reading your post crystallized something for me. I’m adopting your “no speculation-based layoffs” principle explicitly. Next exec meeting, I’m proposing we add it to our engineering principles document: “We reduce headcount based on proven AI capabilities, not projected ones.”

I’m also stealing your transparency approach. My team knows I’m fighting for headcount, but they don’t know the details of those conversations. Time to share more—the uncertainty is worse than the truth.

Question for you and the community: How do you measure and demonstrate the value of upskilling versus cutting? I need better ammunition for the exec conversations. What metrics actually persuade finance teams that investing in people is better than replacing them?

Keisha and Luis, I’m reading this from the other side of the table—literally sitting in board meetings where these decisions get made—and I need to tell you both: you’re fighting the right fight, but the battlefield is bigger than you might realize.

The Board Conversation Is Different

Here’s what venture capitalists are saying right now, and I’m quoting from an actual investor email I received last month: “2026 is the year when AI expands from making humans more productive to automating work itself. We expect to see meaningful labor displacement across our portfolio.”

That’s not one rogue investor. That’s the consensus view among enterprise VCs. They’re not asking “if” AI will displace labor—they’re asking “how much” and “how fast.”

When I present my tech strategy, I’m no longer asked about our AI adoption roadmap. I’m asked about our “AI-driven efficiency gains” with a very specific subtext: how many fewer people will we need?

The Pressure Is Real, And It’s Not Going Away

66% of CEOs surveyed say they’re either cutting staff or holding headcount flat. Tech unemployment hit 4% in November—a four-year high. The market is rewarding companies that show margin expansion through headcount reduction.

I sit in quarterly board meetings where investors compare us to competitors: “Company X achieved 15% margin improvement by replacing customer support with AI. Why aren’t we doing that?”

My answer: “Because Company X’s customer satisfaction scores dropped 20 points, they’re quietly rehiring offshore at lower wages, and they’ve lost institutional knowledge they’ll never get back.”

Do you know what response I get? “But their stock is up 30%.”

Why I’m Refusing to Play This Game

Here’s my line in the sand: I will not use AI’s potential as an excuse for cutting people.

Let me be clear about what AI CAN do at enterprise scale today:

  • Automate repetitive code generation (with review)
  • Provide first-pass customer support (with escalation)
  • Generate content drafts (with heavy editing)
  • Analyze data patterns (with interpretation)

What AI CANNOT do reliably:

  • Make strategic technical decisions
  • Understand complex system interactions
  • Navigate organizational politics
  • Mentor junior engineers
  • Respond to novel problems
  • Build relationships and trust

Luis, your point about “potentially automated” versus “should be automated” is crucial. I’ve watched companies gut their teams based on AI potential, only to discover six months later that the AI couldn’t handle edge cases, couldn’t adapt to changing requirements, and couldn’t explain its decisions to auditors.

Forrester’s prediction that 50% of AI-attributed layoffs will be quietly rehired is conservative. I think it’s higher. But here’s the insidious part: they’re being rehired offshore or at significantly lower salaries. This isn’t automation—it’s wage compression disguised as innovation.

What I’m Doing Differently

I’ve committed to my board that we will modernize and adopt AI aggressively—but we’ll do it through augmentation, not replacement.

Our strategy:

  1. Reskilling programs with real investment: Not lunch-and-learns. Full training programs with dedicated time and budget.
  2. Longer transition runways: If a role changes, we give people 6-12 months to adapt, not 60 days.
  3. Internal mobility first: Before hiring externally, we look for internal candidates who can be trained.
  4. Transparent metrics: We track both productivity gains AND employee sentiment. If sentiment drops, productivity gains don’t matter.

Is this the most efficient path financially? No. Are we growing margins as fast as competitors? No. Am I constantly defending this approach in board meetings? Absolutely.

But here’s my argument to the board: Sustainable competitive advantage comes from institutional knowledge, not temporary cost reduction.

The Uncomfortable Truth

I want to be honest about the tension Keisha raised: balancing fiduciary duty with values.

There ARE cases where AI genuinely eliminates the need for certain roles. I’m not going to keep someone employed doing work that literally doesn’t exist anymore. That’s not ethical—that’s dishonest.

But the difference is this: when a role becomes obsolete due to AI, I have a responsibility to:

  1. Give as much notice as possible
  2. Offer retraining for different roles
  3. Provide generous severance
  4. Help with job placement

What I won’t do: Cut people preemptively based on what I THINK AI might be able to do in 18 months.

My Challenge to Other CTOs

If you’re in the C-suite reading this: you have more power than you think.

Board pressure is real, but boards respond to data and conviction. Here’s what’s worked for me:

Frame it as risk management: “Cutting experienced engineers to chase AI efficiency is a risk to product quality, customer trust, and competitive moat. I recommend we derisk by upskilling rather than replacing.”

Show the math differently: Calculate the full cost of knowledge loss, not just salary savings. Include time-to-productivity for new hires, loss of customer relationships, and decreased innovation capacity.

Build alliances: Partner with your Chief People Officer and Chief Product Officer. When one of them says “our product quality is at risk” and the other says “our customer relationships depend on these people,” it’s harder for the CFO to push headcount reduction.

Be willing to walk: This is the hardest one. But if your board insists on cuts you believe are harmful, you have to be prepared to say no. I’ve mentally prepared my resignation letter twice in the last year. Haven’t had to use it yet, but the willingness matters.
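
Michelle's "show the math differently" point can be sketched in a few lines. Every constant below is a made-up placeholder for illustration (the 50% rehire fraction is the Forrester prediction cited earlier; the rest are assumptions):

```python
# Naive salary savings vs. a fully loaded first-year cost of an AI-attributed cut.
# All figures are illustrative placeholders, not real company data.

def naive_savings(engineers_cut, avg_salary):
    """What the board slide shows: pure salary reduction."""
    return engineers_cut * avg_salary

def fully_loaded_cost(engineers_cut, avg_salary,
                      rehire_fraction=0.5,           # Forrester: ~50% quietly rehired
                      rehire_cost_pct=0.6,           # assumed rehire at ~60% of prior pay
                      knowledge_loss_per_head=50_000,  # assumed institutional-knowledge hit
                      ramp_cost_per_rehire=75_000):    # assumed time-to-productivity cost
    rehired = engineers_cut * rehire_fraction
    return (rehired * avg_salary * rehire_cost_pct    # new (lower) salaries
            + engineers_cut * knowledge_loss_per_head # lost institutional knowledge
            + rehired * ramp_cost_per_rehire)         # ramping the rehires

cut, salary = 10, 160_000
print(f"Board slide says we save: ${naive_savings(cut, salary):,.0f}/yr")
print(f"Hidden first-year costs:  ${fully_loaded_cost(cut, salary):,.0f}")
```

Under these assumptions, most of the headline "savings" evaporates in year one, before counting customer trust or decreased innovation capacity, which are harder to price.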

What Keeps Me Up

Luis asked about protecting first-generation engineers. This haunts me too. The people most vulnerable to AI displacement are often the ones with the least safety net.

I have team members who are:

  • Supporting elderly parents
  • Sending money to family in other countries
  • Paying off student loans that made their careers possible
  • The first in their families to work in tech

Telling them “learn AI on nights and weekends” while they’re working full-time plus caring for family isn’t realistic. It’s a recipe for burnout and inequity.

My commitment: AI upskilling happens during work hours, with dedicated time and resources. If we’re making this transition, we’re doing it equitably.

Question for both of you and the community: How do we create industry-wide accountability for ethical AI adoption? Individual companies doing the right thing helps, but we need collective action to change investor and board expectations.

Michelle, that investor quote—“2026 is the year when AI expands from making humans more productive to automating work itself”—is exactly what I’m hearing from our VCs too. And it’s changing how product orgs are structured.

The Product Side of AI Displacement

Here’s what’s happening in product management right now, and it’s a story that validates everything Keisha and Luis are saying about wage compression being the real issue.

The Data: Forrester predicts that 50% of AI-attributed layoffs will be quietly rehired, but offshore or at significantly lower salaries.

I’m seeing this firsthand. Three months ago, our leadership team discussed “optimizing” the product organization. The proposal: cut 3 junior PMs, hire 1 senior PM in Eastern Europe for 40% of the total cost.

The justification? “AI tools like ChatGPT can handle the junior PM work—market research, competitive analysis, writing user stories. We only need strategic thinkers now.”

I pushed back hard: “Junior PMs aren’t just cheaper labor. They’re our future senior PMs. If we don’t have an entry-level pipeline, where do strategic thinkers come from?”

The response? “We’ll hire them senior. The market will produce them.”

That’s magical thinking. You can’t have an industry of only senior people. But the financial logic is compelling to boards: why pay 3 junior PMs when AI + 1 senior PM offshore achieves similar output at 40% cost?

The Missing Story: Wage Compression

What the 245K layoff statistics miss is the REHIRING that happens quietly afterward.

I know two former product directors who were laid off from big tech companies in “AI-driven efficiency” cuts. Both were senior, well-compensated people. Both are now employed again, one as a contractor and one at a startup.

They still have jobs. They’re technically not unemployed. But they’ve each lost 30-40% of their compensation. That’s not automation—that’s using AI as leverage to reset salary expectations downward.

And here’s the insidious part: they can’t complain publicly. They’re just grateful to be employed in this market.

The Entry-Level Crisis

Luis mentioned protecting first-generation engineers. I’m seeing the same crisis in product:

Job postings for Product Managers have collapsed, especially for APM (Associate Product Manager) roles that target early-career talent. Companies either want:

  1. Senior strategic PMs who understand AI (high cost, limited supply)
  2. Offshore PMs who can execute at lower cost

The middle is disappearing.

One of my mentees—brilliant Stanford MBA, worked as a consultant pre-MBA—has sent out 200+ applications for PM roles. He’s gotten 3 first interviews. Each time, he makes it to final rounds, then the company “pauses the role to reassess needs.”

He asked me last week: “Should I give up on product and pivot to engineering?” This is someone who would have had 5 offers two years ago.

The Real Question: Are We Optimizing for Quarterly Earnings at the Expense of Long-Term Capability?

Keisha asked how we balance sustainable business with ethical employment. As someone who straddles product and business strategy, here’s my take:

Short-term, cutting costs works. Margins improve. Investors are happy. Stock goes up.

Long-term, we’re destroying competitive moat. The junior PMs we don’t hire today won’t become the senior PMs we need in 5 years. The institutional knowledge we lose when we cut experienced people doesn’t come back.

I’m watching competitors make these cuts, and honestly? Their products are getting worse. Slower iteration, worse customer understanding, more bugs in production. But their stock is performing better than ours.

That’s the tension. The market rewards short-term efficiency over long-term capability building. And it’s really hard to defend doing the “right thing” when shareholders are asking why our margins aren’t as good as competitors’.

What I’m Advocating For

Michelle, I love your framework about framing this as risk management. I’m stealing that.

In our next planning cycle, I’m proposing we track “knowledge risk” alongside financial metrics:

  • How many people have unique institutional knowledge?
  • What’s our bench depth for critical roles?
  • How many junior team members are in our pipeline?
  • What’s our internal promotion rate vs. external hire rate?

If we’re cutting experienced people or not hiring juniors, these metrics should flash red—just like we’d flag technical debt or security vulnerabilities.
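
The four questions above could be tracked as literal red/yellow flags next to the financial dashboard. A minimal sketch, with entirely hypothetical field names and thresholds that each org would tune for itself:

```python
# A minimal "knowledge risk" tracker in the spirit of the metrics proposed above.
# Field names and thresholds are hypothetical; tune them to your own org.

from dataclasses import dataclass

@dataclass
class TeamHealth:
    single_points_of_knowledge: int    # people holding unique, undocumented knowledge
    critical_roles_without_backup: int
    junior_pipeline_count: int
    internal_promotions: int
    external_senior_hires: int

def knowledge_risk_flags(t: TeamHealth) -> list[str]:
    """Return red/yellow flags, analogous to flagging tech debt or security issues."""
    flags = []
    if t.single_points_of_knowledge > 3:
        flags.append("RED: too much knowledge concentrated in individuals")
    if t.critical_roles_without_backup > 0:
        flags.append("RED: critical roles have no bench depth")
    if t.junior_pipeline_count == 0:
        flags.append("RED: no entry-level pipeline; future seniors have to come from somewhere")
    total_fills = t.internal_promotions + t.external_senior_hires
    if total_fills and t.internal_promotions / total_fills < 0.5:
        flags.append("YELLOW: relying on external hires over internal growth")
    return flags
```

The point isn't the specific thresholds; it's that these signals get reviewed in the same meeting as margins, so cutting experienced people or freezing junior hiring shows up as risk rather than pure savings.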

My Biggest Fear

I keep coming back to this: if we train AI to do the work of junior PMs, and we don’t hire junior PMs as a result, where do future VPs of Product come from?

The answer can’t be “we’ll hire them from other companies” because if everyone is doing this, there IS no pipeline.

This is a collective action problem. Individual companies acting rationally (cut costs, maximize efficiency) create an industry-wide disaster (no talent pipeline, lost institutional knowledge).

Question for the leadership folks here: Is there any appetite for industry coordination on maintaining entry-level hiring? Or are we all locked in a race to the bottom?

I’ve been reading this thread for the past hour, and I need to say something as someone on the complete other side of this conversation—as one of those 245,000 people who lost their job, and who is STILL looking 10 months later.

This Data Is Terrifying

You all are talking about leadership decisions and board pressures and strategic frameworks. I’m living the result of those decisions.

245,000 tech jobs cut in 2025. I’m one of them. My startup failed (unrelated to AI—we just couldn’t find product-market fit), and I’ve been searching for a design role since April.

  • 200+ applications sent
  • 15 first interviews
  • 5 made it to final rounds
  • 0 offers

The pattern is exactly what David described: I make it to finals, they love my work, and then… “We’ve decided to pause this role to reassess our needs.”

Translation: they realized they don’t need to hire me because AI tools can cover enough of the design work that their existing team can handle the rest.

What Your Transparency Means to People Like Me

Keisha, reading your approach to leadership—being radically transparent with your team, committing not to cut people who help you get more efficient—I literally teared up.

Do you know how rare that is?

Most companies are doing the opposite. They’re rolling out AI tools, telling employees “this will make you more productive,” and then six months later, announcing layoffs because “AI has improved our efficiency.”

Employees aren’t stupid. We can see what’s happening. Using AI effectively at work literally feels like training our own replacement. Why would anyone fully embrace AI tools when the reward for getting really good at them is getting laid off?

But if I had a leader like you who said, “Use these tools to level up, and I promise you won’t lose your job for helping us improve”—I would go all-in. Because then AI becomes my copilot instead of my replacement.

The Psychological Toll

Michelle talked about people supporting elderly parents and sending money to family. That’s me. I’m supporting my mom while trying to keep my design skills sharp and stay positive in interviews.

It’s exhausting. Every rejection chips away at my confidence. I start wondering: Is it my portfolio? My interview skills? Am I just not good enough?

But reading Luis’s comment about people stuck in “invisible unemployment”—not because they’re not talented, but because companies just aren’t hiring—that actually helps. It’s not me. It’s the system.

Though that’s also terrifying because I can’t fix the system. I can only fix my portfolio.

A Request to Leaders

If you’re in a position to make hiring decisions—whether you’re Keisha or Luis or Michelle or David—please, PLEASE do what you’re describing here.

Hire people. Give them transparency. Train them instead of replacing them.

I know three other designers from my bootcamp cohort who are in the same boat I am. All talented. All would have gotten hired two years ago. None of us can find work now.

One of them just left tech entirely. She’s working retail. That’s a waste of talent and potential, and it’s happening because companies are betting on AI’s potential rather than investing in people.

What Gives Me Hope

Reading this thread, honestly.

Knowing that there are leaders like you all who are fighting the good fight, who are pushing back on boards, who are refusing to use AI as an excuse to cut people—that matters.

I don’t know if I’ll find a job next month or six months from now. But knowing that when I DO find a job, there’s a chance I’ll land somewhere with a leader who values people over short-term efficiency gains… that keeps me going.

One More Thing

David mentioned his mentee sending out 200+ applications. Tell your mentee they’re not alone. Tell them it’s not their fault. Tell them to keep building, keep learning, keep showing up.

And if any of you are hiring designers who understand AI tools, care deeply about craft, and have been through the startup failure gauntlet… well, you know where to find me.

Thank you for this conversation. It helps more than you know to see leaders being this honest and this human about what’s happening.