We Just Lost Our Most Senior Engineer to Layoffs. Our Monolith's Tribal Knowledge Went With Her

I need to tell you something that’s been keeping me up at night.

Three weeks ago, we lost Carmen—our most senior backend engineer. Twelve years with the company. Not because of performance issues. Not because she wanted to leave. She was one of 45,000+ tech workers laid off in Q1 2026 alone, part of what’s shaping up to be the worst year for tech employment since the dot-com crash.

Carmen was our institutional memory. She knew every quirk of our monolith, every hack we’d implemented under deadline pressure, every architectural decision we’d made when the team was just five people in a garage. When something broke at 2am, Carmen could diagnose it in six minutes because she’d probably written the code—or at least knew who had and why they’d made that choice.

Now she’s gone. And we’re discovering just how much of our critical systems lived only in her head.

The Impact Hit Fast

Last week we had a production incident. Something that Carmen would have resolved in 20 minutes took us six hours. We had the monitoring alerts. We had the error logs. What we didn’t have was the context. Why was this cache invalidation pattern implemented this way? Which database indices depended on this assumption? What were the three gotchas everyone just knew to avoid?

The documentation? Sparse at best. A README from 2019. Some Slack conversations that now lead to a deactivated account. Architecture diagrams that don’t match the current reality because we never had time to update them when we were shipping features every sprint.

Our sprint velocity has dropped 30%. Our incident resolution time has tripled. And we’re starting to delay customer commitments because nobody’s confident making changes to systems only Carmen fully understood.

The AI Efficiency Paradox

Here’s the part that makes me angry: Carmen was cut as part of an “AI efficiency” initiative. Leadership sold it as “AI will augment the remaining team’s productivity.” But research shows only 16% of individual workers have high AIQ—the ability to work effectively with AI tools. That number might hit 25% by end of 2026.

You know who was in that 16%? Carmen. Because she had the deep systems knowledge to prompt effectively, to evaluate AI-generated code, to know when the AI suggestion was brilliant and when it would introduce a subtle bug that wouldn’t surface until production.

We didn’t just lose headcount. We lost the person who could have helped the rest of us leverage AI effectively.

This Is Bigger Than My Team

I’m sharing this because I suspect we’re not alone. Of those 45,000+ layoffs, 68% were in the US. Companies are cutting senior engineers, architects, and domain experts—the people who carry institutional knowledge—often in the name of efficiency gains that aren’t materializing.

And when the next crisis hits—a security vulnerability, a major customer escalation, a regulatory compliance issue—the experts who could have handled it are gone.

We’re not just accumulating technical debt. We’re accumulating knowledge debt. And unlike technical debt, you can’t refactor your way out of losing a decade of institutional memory.

Questions I’m Wrestling With

I’m turning to this community because I need to hear from others who’ve faced this:

  1. How do you protect institutional knowledge when layoffs are happening? Is there a realistic playbook, or are we all just hoping we’re not the next Carmen?

  2. Is your company investing in documentation and knowledge transfer BEFORE cutting headcount? Or is it reactive, like us, trying to reverse-engineer tribal knowledge after people are gone?

  3. What happens when the next crisis hits and the experts are gone? Are we building products on increasingly fragile foundations of vanishing institutional knowledge?

I’m trying to be the leader my team needs right now—supporting them as they rebuild what we lost, advocating for better practices going forward. But I’m also scared. Scared that this is happening across the industry. Scared that we’re making short-term financial decisions with long-term consequences we don’t fully understand.

Has anyone found a way through this that doesn’t involve just hoping the crisis doesn’t come?


Sources for the industry data: Network World’s Q1 2026 analysis, SkillSyncer Layoffs Tracker, HN discussion on tech layoffs

Keisha, this hits way too close to home. I just lived through this exact scenario at my company.

We lost three architects in our recent round of layoffs—part of that same 45K+ wave you mentioned. The deepest loss touches our legacy payment reconciliation system: twenty-year-old COBOL interfaces talking to modern microservices, the kind of system where the documentation is just a handwritten note from 2008 that says “Don’t touch the batch scheduler on Fridays.” For years, Miguel was the only person who truly understood it.

Miguel retired six months ago. His replacement, Sarah, was laid off three weeks ago as part of our “digital transformation efficiency” initiative.

Now we have a payment reconciliation system that processes $2M daily, and exactly zero people on the team who understand how the nightly batch process works. Not the edge cases. Not the failure modes. Not the workarounds that keep it running.

Our Emergency Response

When we realized the knowledge gap, we went into crisis mode:

Week 1-2: Emergency Documentation Sprints

  • Had Sarah create runbooks before her last day
  • Recorded her walking through the most critical processes
  • Documented every manual intervention we could think of

Week 3-4: Knowledge Transfer Sessions

  • Brought in consultants who worked on similar systems
  • Had remaining senior engineers shadow the system operations
  • Created decision trees for common failure scenarios

The Brutal Truth

Here’s what I learned: You can’t document two decades of context in 2 weeks.

Sarah tried. She really did. But there’s a difference between documenting WHAT the system does and WHY it does it that way. The tribal knowledge—the three times it failed in production and what we learned, the customer complaints that drove specific design decisions, the regulatory requirements that changed over time—that’s all context. And context is what makes the difference between fixing an issue in 20 minutes versus 6 hours.

The Cost-Benefit Analysis Nobody Ran

Keisha, you asked whether companies invest in knowledge transfer before cutting headcount. The harder version of that question is whether anyone runs a cost-benefit analysis on the knowledge loss. In my experience: No. They’re absolutely not.

The conversation in the boardroom was:

  • “How much do we save by cutting these three roles?” → $750K/year
  • “When can we backfill with cheaper resources?” → Q3 if needed

The conversation that SHOULD have happened:

  • “What’s our risk exposure if payment reconciliation fails?” → Regulatory fines, customer trust, potential revenue loss
  • “How long will it take new people to reach Sarah’s expertise level?” → 3-5 years, realistically
  • “What’s the opportunity cost of delays to our enterprise roadmap?” → Probably more than $750K

But those questions weren’t asked. Because knowledge loss is hard to quantify until it bites you.

We’re Creating Systemic Risk

In financial services, we talk a lot about systemic risk. And I’m starting to think we’re creating it at the engineering level.

When you cut the senior engineers who understand the critical paths through legacy systems, you’re not just slowing down feature development. You’re creating single points of failure. And in a regulated industry, that’s existential risk.

What happens when:

  • A security vulnerability needs patching in a system nobody fully understands?
  • A regulatory audit asks “how does this system ensure compliance?” and we can’t explain it?
  • A major customer has an issue that requires deep system knowledge to resolve?

We’re gambling that those scenarios won’t happen before we can rebuild the knowledge. That’s not a good bet.

What I Wish We’d Done

If I could go back six months, here’s what I would have fought for:

  1. Knowledge Transfer Windows: Not 2 weeks. Minimum 3 months of overlap before planned departures.

  2. Bus Factor Audits: Before any layoff decision, identify which systems have a bus factor of 1. Make that visible to decision-makers.

  3. Documentation as a First-Class Citizen: Not something we do “when we have time.” Something that blocks releases if it’s not done.

  4. Retention Strategies for Tribal Knowledge Holders: If someone is the only expert on a critical system, that should factor into layoff decisions.

But I didn’t have the data to make that case effectively. And by the time we felt the pain, it was too late.
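If anyone else needs that data before it’s too late: a rough bus-factor number is cheap to compute from version control alone. Here is a minimal sketch (the commit counts are made up; a real audit would feed in per-component author counts from something like `git log --format=%an -- <path>`):

```python
from collections import Counter

def bus_factor(commit_authors, threshold=0.5):
    """Smallest number of authors who together account for at least
    `threshold` of all commits to a component."""
    counts = Counter(commit_authors).most_common()  # authors, busiest first
    total = sum(n for _, n in counts)
    covered = 0
    for factor, (_, n) in enumerate(counts, start=1):
        covered += n
        if covered / total >= threshold:
            return factor
    return len(counts)

# Hypothetical authorship history for one critical subsystem:
history = ["miguel"] * 40 + ["dev_b"] * 5 + ["dev_c"] * 5
print(bus_factor(history))  # -> 1: one person wrote 80% of it
```

Any component that comes back as 1 is a single point of failure, and that list is exactly what belongs in front of decision-makers before the next round of cuts.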

Keisha, You Asked About a Playbook

I don’t have a great answer. What I’m doing now is reactive:

  • Emergency pairing: Junior engineers shadowing seniors on every incident
  • Recorded walkthroughs: Loom videos of system operations for future reference
  • Weekly knowledge-sharing sessions: Each team member teaches something they know
  • Decision logs: Capturing not just what we decided, but why

But honestly? We’re still in survival mode. We’re shipping slower, our incidents take longer to resolve, and we’re one major crisis away from serious problems.

The thing that keeps me up: I know we’re not unique. This is happening across the industry. And I worry we’re building a house of cards—systems running on vanishing institutional knowledge, held together by hope and prayer that the next incident isn’t the one that breaks us.


Keisha, your team is lucky to have a leader who sees this problem clearly and is willing to talk about it. That’s the first step. But I’m not sure any of us have figured out step two yet.

This conversation needs to be happening in boardrooms, not just between engineering leaders trying to pick up the pieces.

Keisha, Luis—you’re both describing symptoms of a strategic failure. And I’m going to be blunt: This is a failure of executive leadership to understand what they’re actually cutting when they reduce headcount.

Let me share some data that should terrify every CFO and CEO reading this.

The Real Cost of “Savings”

McKinsey’s 2025 analysis of 500 engineering teams showed that teams with high technical debt take 40% longer to ship features compared to low-debt teams.

When you cut senior engineers who carry institutional knowledge, you’re not just saving salary. You’re mortgaging your team’s velocity. You’re accepting that every feature will take longer, every incident will be more expensive, every customer escalation will be more painful.

The math:

  • Salary saved: $200K/year for one senior engineer
  • Velocity impact: 30-40% slowdown for a 10-person team
  • Opportunity cost: If that team ships $2M in annual value, a 35% slowdown = $700K in lost value
  • Net result: You “saved” $200K and lost $700K

And that doesn’t even account for the revenue risk when you miss customer commitments or the reputational damage when systems fail.
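Spelled out, so nobody can claim the arithmetic is complicated (same illustrative figures as above):

```python
salary_saved = 200_000          # one senior engineer's salary
annual_team_value = 2_000_000   # value a 10-person team ships per year
slowdown = 0.35                 # midpoint of the 30-40% velocity hit

lost_value = round(annual_team_value * slowdown)
net = salary_saved - lost_value

print(f"lost value: ${lost_value:,}")  # lost value: $700,000
print(f"net result: ${net:,}")         # net result: $-500,000
```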

The AI Narrative Is a Trojan Horse

Keisha mentioned the “AI efficiency” justification. Let me add more data to that:

Only 16% of individual workers had high AIQ in 2025 (Forrester research). That number is predicted to reach just 25% in 2026. Almost nobody actually knows how to work effectively with AI despite companies betting hundreds of millions on those productivity gains materializing.

Here’s what I’m seeing at my company: We were pressured to cut headcount because “AI will make your remaining team 2x as productive.” I pushed back hard:

Me: “Show me the AIQ scores across our engineering organization. Show me the productivity data that supports this claim.”

CFO: “We don’t have that data yet, but the research is clear—”

Me: “The research shows 16% of workers can use AI effectively. Do we know which of our engineers are in that 16%?”

CFO: Silence.

We didn’t make those cuts. Not because I won the argument, but because I bought time by demanding data. Three months later, we surveyed the team: 12% reported being able to integrate AI tools effectively into their daily work.

You know who was in that 12%? The senior engineers. The ones with deep systems knowledge who could evaluate AI suggestions critically, who could prompt effectively because they understood the domain, who knew when AI was brilliant versus when it would create subtle bugs.

Cutting senior people for “AI efficiency” is cutting the people who can actually leverage AI.

The “Quiet Rehire” Problem

Recent research suggests that half of AI-driven layoffs will result in quiet rehires within 12-18 months.

Think about that cost structure:

  1. Severance package for senior engineer: $50-100K
  2. 6-12 months of reduced velocity and knowledge loss
  3. Rehiring costs: Recruiting, onboarding, ramp-up time
  4. Total cost: Easily $200K+ on top of the opportunity cost

And when you rehire, you’re not getting Carmen back. You’re getting someone new who doesn’t have that institutional knowledge. The knowledge debt compounds.

What Should Leadership Be Asking?

Before any layoff decision that affects senior technical staff, these questions should be non-negotiable:

  1. What is the bus factor for our critical systems?

    • If the answer includes any 1s, that’s a red flag
    • Those systems are single points of failure
  2. What knowledge debt are we creating?

    • Can we quantify the cost of knowledge loss?
    • What’s our plan to rebuild that institutional memory?
  3. What is our team’s actual AIQ?

    • If we’re cutting for “AI efficiency,” do we have data?
    • Who on the team can actually leverage AI effectively today?
  4. What’s the opportunity cost?

    • How much will velocity drop?
    • Which customer commitments are at risk?
    • What revenue could we lose?
  5. What’s our knowledge transfer plan?

    • Not 2 weeks. Real transfer. 3-6 months minimum.
    • What documentation needs to exist before anyone leaves?

The Hard Truth for Engineers

Here’s something uncomfortable: We share some responsibility for this mess.

For years, we’ve celebrated “move fast and break things.” We’ve shipped features at the expense of documentation. We’ve rewarded the hero engineer who saves the day at 2am but never writes down what they learned.

We created the conditions where tribal knowledge became acceptable. Where being the only expert on a system was job security rather than a liability.

And now, when leadership makes cuts without understanding the knowledge loss, we’re paying the price.

What I’m Doing Differently

At my company, we’ve implemented some changes:

  1. Technical Debt Audits Before Headcount Decisions

    • Engineering presents: Here’s what we lose if you cut X role
    • Make the invisible visible
  2. Bus Factor as a Board-Level Metric

    • Every quarter: Report on critical systems with bus factor = 1
    • Board asked once: “What’s our truck factor?” That question changed everything.
  3. Knowledge Transfer as a KPI

    • We measure: How many people can handle incidents for each critical system?
    • We incentivize: Documentation, rotation, knowledge sharing in performance reviews
  4. AI Capability Baseline

    • Before claiming AI productivity gains, we measure actual AIQ
    • Training programs to increase that percentage
    • But it takes time—you can’t magic away the skill gap

To Keisha and Luis

Your teams are lucky to have leaders who see this clearly. But this can’t be solved at the VP and Director level alone.

This needs to be escalated. Your CFOs and CEOs need to understand: You can’t optimize away institutional knowledge and expect the same outcomes.

The companies that figure this out will have a massive competitive advantage over the next 3-5 years. The ones that don’t? They’re going to be rebuilding foundations while their competitors ship products.

We’re not building products on vanishing institutional knowledge. We’re building an entire industry on that fragile foundation. And the first major crisis is going to expose just how precarious that is.

Reading this thread from the design side, and honestly… I’m feeling all of this so deeply.

We lost Sarah two months ago. She was the designer who built our entire design system—not just the Figma components, but the thinking behind them. The principles. The patterns. The 47 conversations about button hierarchy that led to our current component structure.

And now? Every time we need to add a new component or extend an existing one, we’re playing design archaeology. Digging through old Figma files. Reading Slack threads. Trying to reverse-engineer her decision-making from the artifacts she left behind.

The Design Parallel

Michelle, you talked about the bus factor in engineering. We have the exact same problem in design.

Our design system has a bus factor of… well, it was 1. Now it’s 0. Because Sarah’s gone, and nobody else on the team was deeply involved in the foundational decisions.

What we lost:

  • The component library: Still exists in Figma
  • The documentation: Exists, but it’s the “how to use this” not the “why it works this way”
  • The WHY: Gone. Completely. Only in Sarah’s head.

And the WHY is what we need most when we’re making new decisions.

“What Would Sarah Have Done?”

That’s become our team’s running joke. Except it’s not funny.

Last week we needed to design a new notification pattern. Something that sits between a toast and a modal. We built three prototypes. Had an hour-long debate. Finally someone said, “What would Sarah have done?”

And we realized: We don’t know. We can’t know. Because the context she would have brought—the user research she ran in 2021, the customer complaints that informed the original toast design, the accessibility considerations that shaped the modal behavior—that’s all gone.

We’re making decisions in a vacuum that Sarah would have made with years of accumulated context.

This Happened to My Startup Too

I’m going to be vulnerable here: My startup failed partially because of this exact problem.

My co-founder (our technical lead) left nine months before we shut down. He took with him:

  • The architectural vision for our product
  • The context on why certain technical decisions were made
  • The relationships with our early customers who gave critical feedback
  • The understanding of which features were core versus nice-to-have

We tried to continue. Brought in contractors. Documented what we could. But the product lost its coherence. Features started contradicting each other. The technical foundation became fragile.

Customers could feel it. Investors could see it. Six months later, we were done.

I learned: Institutional knowledge isn’t just technical debt. It’s organizational memory. And when it walks out the door, you’re not just rebuilding systems—you’re rebuilding the ability to make coherent decisions.

Keisha Asked: Is This Just Engineering?

No. God, no.

This is happening in:

  • Design: Systems, brand guidelines, user research insights
  • Product: Market understanding, customer context, roadmap rationale
  • Customer Success: Account history, relationship nuances, escalation patterns
  • Operations: Process knowledge, vendor relationships, workaround techniques

Every discipline has tribal knowledge. Every team has a Sarah or Carmen whose departure creates a knowledge crater.

What I Wish We’d Done (In Both My Startup and Now)

Decision Logs, Not Just Documentation

We document HOW to use our design system. We don’t document WHY we made each decision. That’s the gap.

Luis mentioned decision logs for engineering, capturing not just what was decided but why; Architectural Decision Records (ADRs) formalize that practice. We need the design equivalent:

  • Component Decision Records: Why this button variant exists
  • Pattern Decision Records: Why we chose this notification approach
  • Research Archives: The user testing that informed these choices

Not just “here’s how it works” but “here’s the context that led to this decision.”
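It doesn’t have to be heavyweight, either. Even a small structured record checked in next to the component would capture what we lost. A sketch of what one entry might hold (every field name and all the example content here are invented, not Sarah’s actual reasoning):

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    title: str
    date: str
    context: str      # the problem that prompted the decision
    decision: str     # what we chose
    rationale: str    # the WHY -- the part that walks out the door
    alternatives: list = field(default_factory=list)  # considered and rejected

rec = DecisionRecord(
    title="Toast vs. modal for transient notifications",
    date="2021-06-14",
    context="Checkout users were missing dismissible alerts",
    decision="Toast with an 8-second timeout; modal reserved for destructive actions",
    rationale="Usability sessions showed modals broke task flow; "
              "destructive confirms need persistent focus for screen readers",
    alternatives=["inline banner", "modal for everything"],
)
```

The artifact itself is trivial; the discipline of filling in the rationale field before a decision ships is the hard part.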

Cross-Training and Rotation

Sarah was the only person who worked on the design system full-time. That was our mistake.

What we should have done: Rotate designers from product teams through design systems work. Not as a side project—as a core responsibility. Build distributed knowledge.

Honesty About What We Don’t Know

The hardest thing right now: Admitting we don’t know Sarah’s reasoning. We’re making guesses. Educated guesses, but guesses.

I’d rather say “we don’t have that context, so here’s our new reasoning” than pretend we’re continuing Sarah’s vision when we’re really just improvising.

To Michelle’s Point About Responsibility

Michelle said engineers share responsibility for celebrating “move fast and break things” over documentation. That hit hard.

In design, we did the same thing. We celebrated the beautiful portfolio piece. The innovative interaction. The award-winning visual design.

We didn’t celebrate the boring work of:

  • Writing decision rationales
  • Maintaining design system docs
  • Teaching others how to extend components
  • Building institutional knowledge that survives people leaving

And now we’re paying for it.

Questions I’m Wrestling With

  1. How do you document the WHY without it becoming shelf-ware that nobody reads?

  2. Is there a “design system bus factor” metric we should be tracking?

  3. When you lose the creator of a system, do you rebuild in their image or take it as an opportunity to redesign? (I keep going back and forth on this)

  4. How do you make knowledge-sharing culturally valued, not just a checkbox?

One More Thing

Keisha, you said you’re scared this is happening across the industry. Luis said we’re building a house of cards.

I think you’re both right. And I think it crosses disciplines.

We’re all experiencing the same thing: Companies optimizing for short-term cost savings without understanding the long-term knowledge debt they’re creating.

And the first team to face a crisis they can’t handle because the expert left? That’s going to be a wake-up call.

I just hope it’s not catastrophic when it happens.


Thank you for starting this conversation, Keisha. It needed to be said. And clearly, a lot of us are living through variations of this same nightmare.

I’m going to bring the uncomfortable product and business perspective to this conversation, because someone needs to say it:

This institutional knowledge crisis is destroying customer relationships and revenue. And most companies don’t realize it until it’s too late.

The Customer Impact Nobody’s Tracking

Keisha, you mentioned delaying customer commitments. Luis talked about regulatory risk. Maya described losing product coherence.

Let me tell you what that looks like from where I sit:

Six weeks ago, we committed to an enterprise customer that we’d deliver a specific integration feature by end of Q2. Big deal. Seven-figure total contract value. Strategic account that could open doors to their industry vertical.

Four weeks ago, we laid off Marcus—one of two engineers who understood our integrations architecture. Part of the “efficiency” push.

Two weeks ago, the remaining engineer looked at the scope and said: “I need Marcus. I don’t know how half of this works. We’re going to miss the deadline.”

Yesterday, I had to tell that customer we’re pushing delivery by 6 months. Minimum.

This morning, they started evaluating our competitors.

The “savings” from cutting Marcus: $180K in salary.

The revenue at risk: $700K in year-one ACV, $2.1M over three years.

The strategic cost: Losing access to an entire industry vertical that this customer was going to be our reference for.

That math doesn’t work. But nobody asked those questions before the layoff decisions were made.

The Knowledge Loss → Revenue Loss Pipeline

Here’s what I’m seeing play out across multiple customer scenarios:

Stage 1: Institutional Knowledge Walks Out

  • Senior person who understands customer context, technical constraints, or product history leaves
  • Team thinks: “We’ll figure it out”

Stage 2: Customer Commitments Slip

  • Features delayed because nobody knows how to implement them safely
  • Bug fixes take 3x longer because tribal knowledge is gone
  • Customizations break because the workarounds aren’t documented

Stage 3: Customer Trust Erodes

  • “You used to be so responsive. What happened?”
  • “This used to work. Why is it broken now?”
  • “You promised this feature. Why the delay?”

Stage 4: Revenue Impact

  • Churn increases (10-15% in our case this quarter)
  • Expansion deals stall (customers won’t commit more budget when we can’t deliver)
  • Reference customers stop giving references
  • Sales cycles lengthen (prospects hear the customer complaints)

Stage 5: Competitive Vulnerability

  • Competitors pitch: “They’re in chaos. We’re stable.”
  • You can’t counter it because it’s true
  • Market positioning suffers

Nobody’s connecting those dots back to the layoffs. But they should be.

The CFO Questions Nobody Asked

Michelle listed the questions leadership should ask before layoffs. From a product and revenue perspective, add these:

  1. Which customer commitments depend on the people we’re cutting?

    • Roadmap items, bug fixes, ongoing support
    • What revenue is tied to those commitments?
  2. How much institutional knowledge about our customers is walking out?

    • Account history, relationship context, unwritten agreements
    • Who’s going to know why we built that feature for this specific customer?
  3. What’s our customer churn risk over the next 12 months?

    • If velocity drops and quality suffers, how many customers will leave?
    • What’s the CLV (customer lifetime value) of those customers?
  4. What’s the impact on our sales pipeline?

    • Can we still position ourselves as stable and reliable?
    • What happens when references dry up?
  5. What’s the opportunity cost on our product roadmap?

    • Which strategic initiatives get delayed or killed?
    • What market opportunities do we miss?

In my experience: These questions aren’t being asked. Finance sees the cost savings. They don’t see the revenue risk because it’s not immediate. It cascades over 6-18 months.

By the time you feel it, the damage is done.

The Institutional Knowledge → Product Coherence Connection

Maya talked about losing product coherence when her co-founder left. I’m seeing that same pattern now.

When you lose senior people who carry the product vision and customer context, your roadmap starts to fragment:

  • New PMs build features that contradict existing patterns (because they don’t know the history)
  • Engineering solves problems in ways that create future debt (because they don’t know the constraints)
  • Design creates experiences that feel disconnected (because they don’t know the principles)

Customers notice. They might not articulate it as “your product has lost coherence.” But they feel it. And it affects their willingness to invest more in your platform.

What Should Product Leaders Be Doing?

I’m trying to figure this out in real-time. Here’s what I’m experimenting with:

1. Customer Knowledge Mapping

  • For each strategic account: Who on our team knows their history, context, relationship?
  • If that person leaves, what’s the handoff plan?
  • This should inform layoff decisions (doesn’t always, but should)

2. Product Decision Logs

  • Why did we build this feature?
  • What customer problem does it solve?
  • What alternatives did we consider and reject?
  • Document the WHY, not just the WHAT

3. Roadmap Risk Assessment

  • For each major initiative: What’s our bus factor?
  • Which depend on specific people’s institutional knowledge?
  • How do we derisk that?

4. Customer Communication Playbook

  • When we lose key people and velocity drops: Be transparent
  • Set realistic expectations rather than over-promising
  • Explain the investment in long-term stability (even if it’s reactive)

5. Quantify Revenue Risk for Finance

  • Make visible: This layoff could delay X features, risking Y revenue
  • Force the conversation: Is the savings worth the revenue risk?
  • Sometimes yes. But make it a conscious decision, not a blind one.

To Michelle’s Point: We Need Executive Alignment

Michelle said this can’t be solved at the VP/Director level alone. She’s absolutely right.

I’ve started presenting “revenue risk from knowledge loss” in our exec meetings. Making visible what’s usually invisible. It’s uncomfortable, but it’s necessary.

Example: “If we cut these two roles, here are the customer commitments at risk. Here’s the potential churn. Here’s the revenue impact. Do we still want to make this decision?”

Sometimes the answer is still yes—the company needs the cost savings to survive. But at least we’re making the decision with open eyes.

The Question That Keeps Me Up

Keisha asked: “Are we building products on vanishing institutional knowledge?”

From a product perspective, I think the answer is yes. And the downstream effects are:

  • Eroding customer trust
  • Declining product quality
  • Fragmented roadmap execution
  • Competitive vulnerability
  • Revenue risk that’s not being tracked

The scary part? Most of this is invisible until it’s too late. By the time churn spikes and deals stall, the institutional knowledge is long gone and impossible to rebuild.

One Last Thought

Maya said she hopes the wake-up call isn’t catastrophic. I’m worried it will be.

Because here’s what I think happens: Companies cut headcount → Knowledge walks out → Velocity drops → Customer trust erodes → Revenue declines → More pressure to cut costs → Cut more headcount → Repeat.

It’s a death spiral. And I don’t think most companies recognize they’re in it until they’re way too far down.


Thanks for starting this thread, Keisha. This is the conversation we need to be having—across engineering, product, design, and leadership. Because this isn’t just a technical problem. It’s a business problem. And it’s happening right now.