The Silent Productivity Killer: Why Half Your Engineering Time Disappears Into Code Archaeology

I’m going to be brutally honest about something that killed my startup.

We were 9 months into building our B2B SaaS product. The first 3 months? Lightning fast. Features were shipping weekly, the prototype felt magical, early customers were excited. Then… everything slowed to a crawl.

By month 6, simple changes that used to take a day were taking a week. Engineers were saying things like “I need to understand how this component works first” or “I’m afraid to touch this code because it might break everything.” We called it “being cautious.” What we were really doing was code archaeology — digging through poorly documented decisions made under deadline pressure, trying to understand why things were built a certain way before we could change them.

The 50% Problem Nobody Talks About

Recent research from Sourcery and Stack Overflow quantifies what we felt intuitively: teams with medium to high technical debt spend nearly 50% more time on bug fixing and understanding existing code compared to teams with clean codebases. The average developer now spends only 30-40% of their time on new feature development.

Let me repeat that: less than half your engineering capacity goes to building new things.

The rest? Debugging workarounds, untangling dependencies, fixing bugs that exist because of previous fixes, and my personal favorite — staring at code thinking “what were they trying to do here?” (Spoiler: “they” was usually me, 3 months ago.)

Why This Happens (And Why We All Know Better)

We all know technical debt is bad. We all know we shouldn’t skip tests, documentation, or proper architecture. Yet we all accumulate debt anyway. Why?

Because the pressure is always to ship NOW, and the pain always comes LATER.

When a PM says “we need this feature for the demo on Friday” and you’re on Wednesday, you make it work. You copy-paste that component instead of abstracting it properly. You add that boolean flag instead of refactoring the state machine. You write the inline style instead of updating the design system.

Each decision saves you 2 hours today and costs your team 20 hours over the next 6 months.
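That 2-hours-now versus 20-hours-later trade compounds as shortcuts stack up. Here is a toy model of the dynamic; all numbers are illustrative, chosen to match the example above:

```python
# Toy model of shortcut "interest": each shortcut saves time once,
# then adds a recurring maintenance drag every month afterward.
# All parameters are made-up illustrations, not measured data.

def net_hours(shortcuts_per_month: int, saved_per_shortcut: float,
              monthly_drag_per_shortcut: float, months: int) -> float:
    """Net engineering hours gained (+) or lost (-) after `months`."""
    total = 0.0
    accumulated = 0  # shortcuts taken so far, each still dragging
    for _ in range(months):
        total += shortcuts_per_month * saved_per_shortcut  # one-time savings
        accumulated += shortcuts_per_month
        total -= accumulated * monthly_drag_per_shortcut   # recurring cost
    return total

# One shortcut per month, 2h saved each, ~3.3h/month drag (20h over 6 months):
print(net_hours(1, 2.0, 20 / 6, 6))  # deeply negative by month 6
```

The point of the sketch: the savings are linear, but the drag scales with every shortcut still alive in the codebase, which is why velocity falls off a cliff rather than declining gently.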

The Cognitive Load Tax

Here’s what surprised me most about tech debt: it’s not just about bad code being harder to change (though it is). It’s about how technical debt increases cognitive load for EVERY change.

When your codebase is clean and well-architected:

  • Naming is consistent, so you know where to find things
  • Patterns are repeated, so you can predict behavior
  • Tests give you confidence to refactor
  • Documentation exists where complexity is unavoidable

When debt accumulates:

  • Every file has a different style and structure
  • Similar functionality is implemented 3 different ways
  • Tests are flaky or nonexistent, so changes are risky
  • The “why” behind decisions is lost to time

This cognitive load compounds. New engineers take longer to onboard. Senior engineers get frustrated and leave. Velocity decreases even as headcount increases.

Measurement Matters: The Technical Debt Ratio

If you can’t measure it, you can’t manage it. The Technical Debt Ratio (TDR) is one useful metric: the ratio of time spent on fixing/maintaining code vs. developing new features.

Industry benchmarks:

  • TDR below 5-10%: Healthy velocity, sustainable pace
  • TDR above 20%: Systemic issues requiring strategic intervention
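As a rough sketch, TDR can be computed straight from sprint time-tracking data. The category names and data shape below are assumptions, not a standard; the thresholds follow the benchmarks above:

```python
# Minimal sketch: computing a Technical Debt Ratio (TDR) from time-tracking
# data. Category names and the dict format are assumptions; the health
# thresholds mirror the industry benchmarks quoted in the post.

def technical_debt_ratio(hours_by_category: dict[str, float]) -> float:
    """TDR = fix/maintenance time as a fraction of total engineering time."""
    debt = (hours_by_category.get("bugfix", 0.0)
            + hours_by_category.get("maintenance", 0.0))
    total = sum(hours_by_category.values())
    return debt / total if total else 0.0

def health(tdr: float) -> str:
    if tdr < 0.10:
        return "healthy"   # sustainable pace
    if tdr <= 0.20:
        return "watch"     # monitor and schedule paydown
    return "systemic"      # strategic intervention needed

sprint = {"features": 120.0, "bugfix": 25.0, "maintenance": 15.0}
tdr = technical_debt_ratio(sprint)
print(f"TDR = {tdr:.0%} ({health(tdr)})")  # TDR = 25% (systemic)
```

Even a crude version of this, fed from ticket labels, makes the trend visible sprint over sprint.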

Other signals to track:

  • Time to onboard new engineers (debt makes ramp-up slower)
  • Time-to-fix trends (are bugs getting harder to resolve?)
  • Developer-reported friction (qualitative feedback matters too!)
  • Code churn rate (how often do we have to touch old code?)

The AI Paradox: Accelerating Debt Creation?

Here’s a twist for 2026: AI coding assistants now write about 41% of all code and save developers 30-60% of time on routine tasks. That sounds amazing!

But GitClear’s research projects code churn to double, and Google’s 2024 DORA report found that delivery stability has decreased 7.2% as AI adoption rises.

Are we creating technical debt faster because AI makes it easier to write code without thinking about architecture? Are we shipping more code that needs to be understood and maintained later?

I’m genuinely curious about this. AI helps me move fast, but am I just digging a deeper hole?

What I Wish I’d Known

At my startup, by the time we realized debt was the problem, we didn’t have runway left to fix it. We tried a “rewrite sprint” — terrible idea. We considered throwing it all away and starting over — even worse idea.

What I’d do differently:

  1. Track TDR from day one — make debt visible
  2. Budget 20% of every sprint for paying down debt — compound interest works both ways
  3. Define “done” to include tests and documentation — debt prevention, not just debt payoff
  4. Make architecture decisions explicit and recorded — future-you will thank you
  5. Resist the heroic one-off fix — systemic problems need systemic solutions

Discussion Questions

I’d love to hear from folks who’ve managed this successfully:

  • What metrics do you track to make technical debt visible to non-technical stakeholders?
  • How do you balance feature velocity with code health when the pressure is always to ship faster?
  • How are you handling the AI code generation paradox — faster coding but potentially faster debt accumulation?
  • What frameworks or processes help you prevent debt rather than just paying it down?

For those of you in legacy systems (looking at you, fintech and banking folks) — how do you even begin to tackle decades of accumulated debt?


Maya Rodriguez is a Design Systems Lead who learned more from her failed startup than from any success. She believes diverse perspectives make better products, and that measuring problems is the first step to solving them.

Maya, this hits close to home. Thanks for sharing so honestly about your startup experience — those lessons are incredibly valuable.

In financial services, we’re living this reality at massive scale. The multi-billion-dollar annual maintenance bill for legacy banking systems isn’t just a statistic; it’s our quarterly budget discussion. We have COBOL codebases from the 1980s intertwined with microservices from last year, all held together by what I call “archaeological documentation” (comments that say “don’t touch this, it breaks payroll” with no explanation of why).

Our Modernization Journey

Last year, we finally got executive buy-in for a major modernization effort targeting our payment processing stack. The business case was compelling: our TDR had crept above 35% — we were spending more than a third of engineering time just keeping the lights on.

The results after 18 months:

  • 35% reduction in operating costs (infrastructure, incident response, manual processes)
  • TDR dropped to 18% (still not ideal, but trending right)
  • Time-to-market for payment features decreased 60%
  • Engineer satisfaction scores up 28 points

That last metric was unexpected but crucial. Senior engineers were leaving because working in the legacy codebase felt like “digging ditches,” as one exit interview put it. Making progress on tech debt became a retention strategy.

The Compliance Trap

Here’s what makes fintech tech debt even more expensive: regulatory requirements make every change risky and time-consuming.

When you’re dealing with:

  • SOX compliance requiring audit trails for code changes
  • PCI-DSS mandating specific security controls
  • State and federal banking regulations varying by jurisdiction
  • Real-time fraud detection that can’t afford downtime

…then “move fast and break things” isn’t an option. Every change needs documentation, testing, security review, and compliance sign-off. Technical debt doesn’t just slow development — it multiplies the cost of required compliance work.

Example: A simple API endpoint change that would take 2 days in a clean codebase took us 3 weeks because:

  • 2 days to understand the spaghetti dependencies
  • 3 days to implement the change safely
  • 4 days for security review (because the code path touched PII)
  • 5 days for compliance documentation and approvals

What’s Working for Us

After a lot of trial and error, here’s what’s made a difference:

1. Explicit Tech Debt Sprints (20% time allocation)

Every sprint, 20% of story points are reserved for debt paydown. Not “if we have time” — it’s a committed line item. Product and engineering collaborate to prioritize which debt items have highest business impact.

We track debt items just like features: with business value, effort estimates, and impact metrics. This made it easier to get PM and executive buy-in.

2. Metrics That Speak Business Language

We track:

  • TDR (engineering health metric)
  • Time-to-fix trends (are incidents getting harder to resolve?)
  • Developer-reported friction (anonymous quarterly survey)
  • Cost per transaction (business metric affected by technical efficiency)
  • Feature delivery predictability (can we hit our commitments?)

Translating TDR into “cost per transaction” was the game-changer for executive conversations. When we could say “reducing TDR by 10% will save millions annually in infrastructure costs,” budget discussions got a lot easier.

3. Architecture Decision Records (ADRs)

We started documenting every significant architectural decision in lightweight ADRs. Template:

  • Context: What’s the situation?
  • Decision: What did we decide?
  • Consequences: What are the trade-offs?

This has been invaluable for “code archaeology.” Instead of guessing why something was built a certain way, we can read the actual reasoning from 2 years ago.
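A minimal scaffold for that template, sketched as a helper script. The `ADR-NNNN` numbering and markdown headings follow a common community convention rather than anything this thread prescribes:

```python
# Lightweight ADR scaffold following the three-part template above
# (Context / Decision / Consequences). The numbering and heading style
# are a common community convention, assumed here for illustration.
from datetime import date

ADR_TEMPLATE = """\
# ADR-{number:04d}: {title}
Date: {when}

## Context
{context}

## Decision
{decision}

## Consequences
{consequences}
"""

def render_adr(number: int, title: str, context: str,
               decision: str, consequences: str) -> str:
    """Return a filled-in ADR ready to commit next to the code it explains."""
    return ADR_TEMPLATE.format(number=number, title=title, when=date.today(),
                               context=context, decision=decision,
                               consequences=consequences)

print(render_adr(
    7, "Use event sourcing for payments",
    "Auditors need a full history of every balance change.",
    "Store immutable payment events; derive balances from the log.",
    "Replays give us audit trails; event storage grows unbounded."))
```

Committing these next to the code they describe is what makes future “code archaeology” a read instead of a dig.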

The Question I’m Still Struggling With

@maya_builds, you asked about getting executive buy-in for paying down debt. Here’s what I’m still trying to solve:

How do you maintain that buy-in when new leadership arrives?

We had strong support from our CTO and CPO for the modernization effort. But our CTO left 6 months ago, and the new leadership is pushing hard for feature velocity. The 20% debt allocation is now being questioned as “why aren’t we going faster?”

The business case is still valid — we’re already seeing ROI — but the pressure to defer debt paydown is intense. Anyone else dealt with this? How do you institutionalize debt management so it survives leadership changes?


For folks in legacy systems: Start somewhere, even if it’s small. We began with just our payment API layer. Pick the highest-pain area, measure before and after, and use those results to fund the next phase.

I appreciate this conversation, but I want to push back a bit on the “50% problem” framing. Not because the data isn’t real — it is — but because context matters enormously.

Not All Debt Is Created Equal

There’s a big difference between:

  1. Intentional debt: “We’re shipping this MVP fast to validate PMF, and we know we’ll need to refactor the auth system later”
  2. Accidental decay: “Nobody documented this, tests are flaky, and we’re afraid to touch it”

Early-stage startups should accumulate some debt. If you’re pre-PMF and spending 20% of your time on tech debt paydown, you might be optimizing for the wrong thing. The goal is to learn fast, not to build perfect systems for a product that might pivot next quarter.

@maya_builds, I wonder if your startup’s issue wasn’t tech debt itself, but rather not tracking it or planning to pay it back. That’s the real killer — invisible, unnamed debt that compounds silently until it suffocates you.

Our EdTech Scaling Story

When I joined our company 3 years ago as Director, we had 25 engineers and a TDR around 30%. Classic “successful startup that grew too fast” scenario. Codebase was held together with duct tape and hope.

Today, we’re 80+ engineers, and our TDR is down to 12%. We’re delivering features faster now with 80 people than we were with 25.

How? Not by stopping everything to “pay down debt.” That’s a recipe for organizational whiplash. Instead:

1. We Made Debt Visible and Categorized

Borrowed @eng_director_luis’s team’s approach (hey Luis!) and started tracking tech debt items like product features, but with additional categorization:

  • Category 1: Existential risk (security, compliance, data integrity)
  • Category 2: Velocity killers (slows every related feature)
  • Category 3: Quality of life (annoying but not blocking)
  • Category 4: Academic perfection (nice-to-have refactors)

We only budget explicit time for Categories 1-2. Categories 3-4 get addressed opportunistically during feature work or get backlogged indefinitely.

This categorization matters because it prevents the “engineers want to rewrite everything” perception that product leaders often have.

2. We Invested in Platform Engineering

Instead of asking every team to “be more careful about debt,” we built platforms and guardrails that make good practices the default.

Examples:

  • Automated code quality gates in CI/CD (can’t merge if coverage drops)
  • Component library that makes it easier to use the design system than to write custom CSS
  • Shared authentication service that’s simpler to integrate than rolling your own
  • Developer environment that spins up in 2 minutes with all dependencies

This shifted the conversation from “we need to slow down to do it right” to “the right way is also the fast way.”
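As one example of such a guardrail, the coverage gate is just a few lines of comparison logic. The report shape below mimics coverage.py-style JSON totals, and the CI wiring around it is assumed:

```python
# Sketch of a CI coverage gate: block the merge when line coverage drops
# relative to the main branch. The report dicts mimic coverage.py-style
# JSON totals; how CI produces and feeds them is assumed, not shown.

def line_coverage(report: dict) -> float:
    """Fraction of executable lines covered, from a coverage-style report."""
    totals = report["totals"]
    return totals["covered_lines"] / totals["num_statements"]

def coverage_gate(main_report: dict, pr_report: dict,
                  tolerance: float = 0.001) -> tuple[bool, str]:
    """Compare PR coverage against the main-branch baseline."""
    base, cur = line_coverage(main_report), line_coverage(pr_report)
    ok = cur + tolerance >= base
    return ok, f"coverage {cur:.1%} vs main {base:.1%}: {'pass' if ok else 'FAIL'}"

main = {"totals": {"covered_lines": 870, "num_statements": 1000}}
pr   = {"totals": {"covered_lines": 852, "num_statements": 1000}}
ok, msg = coverage_gate(main, pr)
print(msg)  # coverage 85.2% vs main 87.0%: FAIL
```

The design choice worth copying is the relative check: the gate compares against main rather than a fixed number, so it enforces “never worse” without arguing about an absolute target.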

3. We Track Qualitative Metrics Too

TDR is useful, but it doesn’t capture everything. We also do:

  • Quarterly developer experience surveys (“How easy is it to ship features in your area?”)
  • Onboarding time tracking (how long until new engineers ship their first feature?)
  • Incident retrospectives (how many incidents are caused by legacy tech vs new code?)

The qualitative feedback often reveals debt that metrics miss. For example, our auth system had a low defect rate but was a massive cognitive load for anyone who needed to touch it. That wouldn’t show up in TDR but was killing productivity.

The AI Debt Paradox: Real, But Solvable

@maya_builds raises a critical question about AI code generation. Yes, AI writes 41% of code now. Yes, churn is doubling. But here’s my take:

AI amplifies whatever process you already have.

If your process creates debt (no code review, no tests, ship-and-forget), AI will create debt faster. If your process prevents debt (automated quality gates, mandatory tests, clear architecture), AI will generate clean code faster.

We’re using AI heavily, but our quality gates haven’t changed:

  • PRs still need review
  • Coverage can’t drop
  • Complexity thresholds are enforced
  • Integration tests must pass

AI helps us write boilerplate faster. It doesn’t help us skip quality processes — because we’ve made those non-negotiable.

Pushback on Learned Helplessness

Here’s my concern with the “50% problem” narrative: it can create learned helplessness.

If engineering teams believe “half our time will always go to maintenance,” then why bother trying to improve? It becomes a self-fulfilling prophecy.

Instead, I prefer framing like:

  • “Our current TDR is 25%, and we’re targeting 15% by Q3”
  • “Paying down this specific debt will unlock 3 months of velocity on the roadmap”
  • “Investing in this platform will reduce time-to-ship for auth features by 60%”

This language emphasizes agency and improvement rather than accepting constraints as inevitable.

To @eng_director_luis’s Question

Luis, you asked about maintaining buy-in through leadership changes. This is SO real.

What’s worked for us:

  1. Make it a board-level metric: Our board deck includes TDR and onboarding time alongside revenue and churn
  2. Connect debt to business outcomes: Don’t talk about “refactoring the monolith” — talk about “reducing time-to-market for premium features by 40%”
  3. Document the ROI of previous debt paydown: Keep a running log of “we invested X in platform/debt, which enabled Y business value”
  4. Build it into OKRs: If debt paydown isn’t in your quarterly OKRs, it’s not actually a priority

New leaders respond to metrics and outcomes. If you can show that debt investment has clear ROI (which you have — 35% cost reduction!), that’s a powerful story.

Final Thought

Tech debt isn’t the enemy. Unmanaged, invisible, unnamed debt is the enemy.

Done right, debt is a tool. You borrow velocity today to ship faster, and you pay it back strategically when it makes sense. The key is making intentional decisions and tracking the cost.

What metrics and frameworks are others using to keep debt visible and manageable?

This is exactly the conversation we need to be having at the leadership level. Both Maya and Keisha are spot-on from their perspectives, and I want to add the board-level view.

Tech Debt Is a Financial Liability

When I present to our board, I explicitly frame technical debt as a financial liability that compounds like interest.

Our CFO gets it immediately because the analogy is precise:

  • You borrow velocity today (take on debt to ship faster)
  • You pay interest over time (maintenance, slower feature development, incidents)
  • If you don’t pay it down, compound interest drowns you (total system collapse, rewrites)

This framing has been critical for getting non-technical board members and executives to understand why we can’t just “ship faster” without consequences.

The Cloud Migration Case Study

Last year, we completed an 18-month cloud migration and platform modernization initiative, a multi-million-dollar total investment across engineering time, infrastructure, and third-party services.

The business case we built:

  • Reduced infrastructure costs: over a million dollars annually
  • Reduced incident response costs: hundreds of thousands annually (fewer outages, faster resolution)
  • Increased feature velocity: 40% more features shipped per quarter
  • Reduced time-to-onboard: New engineers productive in 2 weeks vs 6 weeks

Payback period: 14 months. We hit ROI in January, and now we’re running leaner and faster.

But here’s the critical part: we nearly didn’t get approval for this initiative. The original business case focused on “technical excellence” and “reducing tech debt.” That language doesn’t resonate with boards.

What changed: We reframed it as “Investing in scalable infrastructure to support 3x revenue growth without 3x engineering headcount.”

Same project. Different framing. Unanimous board approval.

The AI Code Generation Warning

@maya_builds and @vp_eng_keisha both touched on the AI paradox, and I want to emphasize this because it’s becoming a board-level concern:

AI code generation without good architecture is accelerating debt accumulation at unprecedented scale.

We’re seeing this in our own codebase:

  • AI writes 41% of our code now (confirmed by our code analysis tools)
  • Our automated complexity metrics show code complexity increasing 23% year-over-year
  • Developer surveys report decreasing confidence in refactoring AI-generated code

The issue: AI is excellent at writing code that works. AI is not yet good at writing code that’s maintainable, testable, and aligned with existing architecture patterns.

What we’ve implemented:

  1. Mandatory architectural review for new services/modules (AI can’t bypass this)
  2. Automated complexity thresholds in CI/CD (AI-generated code must pass same gates)
  3. “Explain this code” requirement in PRs (if you can’t explain what AI wrote, you can’t merge it)
  4. Periodic “AI debt audits” (reviewing AI-generated code for maintainability issues)

This is still an evolving practice, but the early signal is clear: faster code generation requires stronger architectural guardrails, not weaker ones.

A Proxy Metric CTOs Should Track

@eng_director_luis mentioned tracking multiple debt metrics. Here’s one that’s been invaluable for me:

Time to onboard a new engineer to productivity.

Why this matters:

  • It’s a single number executives and boards understand
  • It captures cognitive load, documentation quality, code complexity, and tooling all at once
  • It has direct business impact (hiring velocity, cost per engineer)
  • It trends over time and shows whether debt is improving or worsening

Our target: New engineers ship their first feature within 2 weeks.

When that number starts creeping toward 3-4 weeks, it’s a leading indicator that debt is accumulating faster than we’re paying it down. It’s also a retention risk — top engineers leave when onboarding is painful.

We track this quarterly and report it to the board alongside revenue and churn metrics.
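A hypothetical sketch of how that tracking might look per hiring cohort; the data shape, quarter labels, and numbers are all made up for illustration:

```python
# Sketch: median time-to-first-shipped-feature per hiring cohort, flagged
# against the 2-week target described above. Data shape and values are
# hypothetical; real input would come from HR/ticketing systems.
from statistics import median

TARGET_DAYS = 14  # "new engineers ship their first feature within 2 weeks"

def onboarding_trend(cohorts: dict[str, list[int]]) -> dict[str, str]:
    """Map quarter -> status line based on median days to first feature."""
    out = {}
    for quarter, days in sorted(cohorts.items()):
        m = median(days)
        status = "on target" if m <= TARGET_DAYS else "debt warning"
        out[quarter] = f"median {m:.0f}d: {status}"
    return out

print(onboarding_trend({
    "2025Q3": [10, 12, 14, 11],
    "2025Q4": [15, 19, 22, 18],  # slipping past the 2-week target
}))
```

Using the median rather than the mean keeps one unusually slow (or fast) ramp-up from masking the trend.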

To @eng_director_luis’s Leadership Change Question

Luis, your question about maintaining buy-in through leadership transitions is the single hardest challenge in debt management.

Here’s what’s worked for us:

1. Make debt metrics part of the operating model, not a pet project

If debt paydown is “the previous CTO’s initiative,” it will die when they leave. Instead, make it part of how the company operates:

  • Include TDR and onboarding time in quarterly business reviews
  • Make architecture review a required gate (not optional)
  • Build debt paydown into team OKRs (not as a separate initiative)

2. Document ROI systematically

Keep a running deck of “debt paydown investments and their business impact.” Update it quarterly. When new leadership arrives, you can show:

  • “We invested in platform improvements in Q2 2025”
  • “This reduced infrastructure costs per quarter”
  • “This enabled feature Z to ship in 3 weeks instead of 12 weeks”

Concrete ROI stories survive leadership changes.

3. Build alliances with product and finance

If only engineering cares about tech debt, it’s vulnerable. But if product leadership sees that debt paydown unlocks their roadmap, and finance sees that it reduces operating costs, you have organizational resilience.

Our VP Product is now our strongest advocate for platform investment because they’ve seen how it accelerates their roadmap.

The Strategic Question for 2026

Here’s what I’m wrestling with as CTO:

How do we prepare for the AI-accelerated code churn challenge while maintaining velocity?

If code churn is doubling (as GitClear projects), and AI is writing nearly half our code, we’re entering uncharted territory. The old playbooks for managing debt may not be sufficient.

Early hypotheses I’m exploring:

  • AI-powered refactoring tools that can update code patterns at scale
  • Stronger enforcement of architectural patterns before code is written (shift-left)
  • “Living documentation” that stays in sync with code (not traditional docs that rot)
  • Increased investment in platform engineering to constrain the blast radius of poor code

But I don’t have answers yet. This is the frontier.

What are other CTOs and VPs seeing? How are you adapting debt management practices for the AI era?


Bottom line: Technical debt is not a technical problem — it’s a business problem. And like all business problems, it requires measurement, investment trade-offs, and clear ROI storytelling.

This thread is gold. As a VP of Product who’s lived through the “why is engineering so slow?” conversations with the board, I want to offer the product perspective on technical debt.

The Translation Problem

Here’s the most common breakdown I see between engineering and product:

Engineer says: “We need to refactor the auth system. It has too much tech debt.”

What product hears: “Engineers want to spend 3 months rebuilding something that already works instead of shipping features customers want.”

This translation gap creates organizational friction that slows everyone down. The issue isn’t that product doesn’t care about quality — it’s that engineering talks about debt in technical terms, and product thinks in customer value and opportunity cost.

What Works: Debt in Customer Impact Terms

@maya_builds’s post resonated because it quantified the problem. But here’s how I help my engineering partners make this land with product and exec teams:

Instead of: “Our TDR is 25% and we need to get it to 15%”

Try: “Our current codebase makes it take 6 weeks to ship authentication features. Competitors ship similar features in 2 weeks. We’re losing deals because we can’t iterate fast enough. A 2-week refactor would let us ship auth features 60% faster for the next year.”

Same underlying issue (tech debt). Completely different framing (competitive disadvantage, lost revenue, customer impact).

A Real Example: The Refactor That Unlocked Our Roadmap

Last year, our eng team proposed a 2-week refactor of our permissions system. Initial reaction from leadership: “Can we defer this? We have customer commitments.”

Then our Engineering Director (shoutout to leaders like @eng_director_luis who know how to speak product language) reframed it:

"This refactor will enable:

  • Role-based access control (requested by 40% of enterprise prospects, a multi-million-dollar pipeline)
  • SCIM provisioning (blocker for 3 signed deals worth seven figures in ARR)
  • Audit logging (compliance requirement for the financial services vertical)

Without the refactor, each of these takes 6-8 weeks. With it, each takes 2 weeks. We’re not rebuilding the permissions system for ‘technical excellence’ — we’re unlocking 3 months of roadmap velocity."

Guess what? Approved immediately.

The Conversation Gap Goes Both Ways

@vp_eng_keisha made a critical point about “engineers want to rewrite everything.” This is real, and it damages trust.

But product has a mirror problem: we often push for features without understanding the underlying cost.

Example from my own mistakes: I pushed for a “simple” feature last quarter — “just add SSO for Google Workspace, should be easy, right?”

Turns out our auth system was so tangled that this “simple” feature took 8 weeks and created 3 production incidents. If I’d understood the underlying debt, I would have sequenced the work differently: refactor auth first, then add SSO providers rapidly.

The cost of my ignorance: 2 months of engineering time wasted on a feature we could have shipped in 2 weeks if we’d done it right.

This experience changed how I work with engineering. Now I ask: “What’s the current state of the codebase for this feature area? Is there underlying debt that will slow us down?”

What Helps: Shared Metrics and Rituals

Our eng and product teams now share a weekly “roadmap health” check. We review:

  1. Feature velocity trends (are we getting faster or slower?)
  2. Blocked features (what can’t we build because of debt?)
  3. Opportunity cost (what customer value are we NOT delivering because of tech constraints?)

This shared ritual creates alignment. Product sees the impact of debt in terms we care about (lost deals, slow velocity). Engineering sees that product cares about code health when it’s framed in business terms.

To @maya_builds: Your Framework Is Spot-On

Maya, your point about measurement is critical. The metrics that have helped me as a product leader:

  • Time-to-ship for similar features (is it getting slower over time?)
  • Feature predictability (do estimates keep slipping because of unexpected debt?)
  • Customer-impacting incidents (how often do we break things because the codebase is fragile?)

These metrics speak product language. They help me advocate for debt paydown with the CEO and board.

To @cto_michelle: Yes, This Is a Business Problem

Michelle, your framing of debt as a financial liability is exactly right. Our CFO now includes “engineering productivity” as a line item in quarterly reviews, right next to CAC and LTV.

Why? Because engineering velocity directly impacts our ability to capture market opportunities. If we can ship features 40% faster (as you achieved with your cloud migration), that translates directly to revenue captured vs lost to competitors.

This business-level framing is what gets buy-in. “We want clean code” doesn’t resonate. “We want to beat competitors to market and close more deals” does.

The AI Paradox From a Product Lens

The AI code generation discussion is fascinating. From product’s perspective, here’s what I’m seeing:

  • Engineers are shipping more code (good!)
  • But feature velocity isn’t increasing proportionally (confusing!)

This gap suggests AI is creating code, but that code isn’t translating to shippable customer value at the expected rate.

My hypothesis: AI helps with the “writing code” part, but doesn’t help with the “integrating into a complex system, ensuring quality, and maintaining over time” part.

If that’s true, then @cto_michelle’s point about stronger architectural guardrails is exactly right. Product needs engineering to use AI to move faster, but not at the expense of creating a codebase that slows us down 6 months from now.

What I’d Love From This Community

As a product leader, I want to better understand:

  1. What frameworks help bridge the eng-product conversation about debt? (beyond the ones mentioned here)
  2. How do other product teams incorporate code health into roadmap planning?
  3. What metrics make tech debt tangible for non-technical stakeholders?

I love that this community brings together engineering, product, and technical leadership. These cross-functional conversations are how we actually solve these problems.

Thanks for starting this discussion, @maya_builds. And thanks to @eng_director_luis, @vp_eng_keisha, and @cto_michelle for the practical frameworks. This is exactly the kind of knowledge-sharing that helps us all level up.