Why Engineering Leaders Need to Speak Finance (And Why CFOs Need to Understand Tech Debt)

I used to think my job was to build great software. Then I got promoted to VP Engineering and realized: My job is to translate technical decisions into business impact.

This has been the hardest skill to learn. And I think it’s the most critical gap for engineering leaders.

The Wake-Up Call

Six months into my VP role, I presented our engineering roadmap to the board.

I talked about:

  • Migrating to microservices architecture
  • Implementing Kubernetes for container orchestration
  • Building observability infrastructure
  • Investing in developer experience tooling

I was excited. This was important work that would make us more scalable, reliable, and efficient.

Then the CFO asked: “What’s the ROI?”

I said: “Better architecture means faster development and more reliable systems.”

CFO: “That’s not an ROI. What’s the dollar impact? What’s the payback period?”

I… didn’t have an answer.

The board approved a fraction of my budget request. I left frustrated, feeling like they “didn’t understand engineering.”

The Realization

But the truth was: I didn’t understand finance.

Engineering leaders and finance leaders speak completely different languages:

Engineering Language:

  • Technical debt
  • Refactoring
  • Scalability
  • Reliability
  • Developer experience

Finance Language:

  • ROI (return on investment)
  • OpEx vs CapEx
  • Payback period
  • Opportunity cost
  • Cost of goods sold (COGS)

I was asking for hundreds of thousands of dollars in investment, but I couldn’t articulate the business value in terms finance understood.

Learning to Translate

I started working with our CFO to understand how he thinks. Here’s what I learned:

Don’t Say: “We need to refactor the authentication system because the code is messy and hard to maintain.”

Say: “This $200K investment will reduce security incidents by 60%—saving approximately $500K per year in customer trust recovery, compliance costs, and security team overhead. It will also decrease feature delivery time in this area by 25%, enabling faster time-to-market for identity-related features. Payback period: 6 months.”

Same technical work. Different framing. Completely different reception.
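The arithmetic behind that second framing is worth making explicit. Here's a minimal sketch using the illustrative numbers from the pitch (on the quantified savings alone, payback works out to just under 5 months, so the quoted 6 months is the conservative end of the range):

```python
# Minimal payback/ROI sketch using the illustrative figures from the
# example pitch above ($200K investment, ~$500K/year in savings).

def payback_months(investment, annual_savings):
    """Months until cumulative savings cover the up-front investment."""
    return investment / (annual_savings / 12)

def simple_roi(investment, annual_savings):
    """First-year return on investment, as a fraction of the investment."""
    return (annual_savings - investment) / investment

investment = 200_000       # refactor cost (illustrative)
annual_savings = 500_000   # estimated annual savings (illustrative)

print(f"Payback: {payback_months(investment, annual_savings):.1f} months")   # 4.8 months
print(f"Year-1 ROI: {simple_roi(investment, annual_savings):.0%}")           # 150%
```

Two functions is all it takes; the hard part is defending the savings estimate, not the division.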

The Flip Side: CFOs Need to Understand Tech Debt

But here’s the thing: This translation gap goes both ways.

Finance leaders need to understand that tech debt works like financial debt:

Financial Debt:

  • Principal = amount borrowed
  • Interest = ongoing cost of servicing debt
  • Compound interest = interest accumulates, making debt grow exponentially

Technical Debt:

  • Principal = initial work to build feature (shortcut taken)
  • Interest = ongoing cost of maintaining poorly designed feature
  • Compound interest = every month you delay refactoring, the cost grows

Just like you wouldn’t ignore financial debt because “we’re too busy growing revenue,” you can’t ignore technical debt forever.
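To see why the "compound interest" framing bites, here's a toy model: assume a shortcut costs some monthly maintenance drag today, and that drag itself grows each month as workarounds pile up. Both numbers are invented purely for illustration:

```python
# Toy model of compounding tech debt: the monthly maintenance drag of a
# deferred fix grows at a fixed rate. All figures are illustrative
# assumptions, not measurements.

def deferred_cost(initial_monthly_drag, growth_rate, months):
    """Total maintenance cost paid over `months` if the fix is deferred."""
    total, drag = 0.0, initial_monthly_drag
    for _ in range(months):
        total += drag
        drag *= 1 + growth_rate  # the drag itself grows as debt compounds
    return total

# $10K/month drag today, growing 5% per month:
print(f"Cost of a 6-month delay:  ${deferred_cost(10_000, 0.05, 6):,.0f}")
print(f"Cost of a 24-month delay: ${deferred_cost(10_000, 0.05, 24):,.0f}")
```

The point of the sketch: a 24-month delay costs far more than four times a 6-month delay, which is exactly the "compounding" behavior linear budgeting misses.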

The Research Backs This Up

Industry research suggests that companies that systematically underfund technical debt see a 30-40% productivity decline over 2-3 years.

That’s not abstract—it’s measurable:

  • Features take longer to build
  • More bugs in production
  • Engineers spend time fighting the codebase instead of creating value
  • Top talent leaves because the codebase is frustrating to work with

My Current Approach: Quarterly Tech Investment Reviews

Now, I do quarterly “tech investment reviews” with our CFO present.

I show:

Engineering Metrics:

  • Deployment frequency (how fast we ship)
  • Change failure rate (quality of releases)
  • Mean time to recovery (how fast we fix issues)
  • Feature lead time (how long from idea to production)

Connected to Business Outcomes:

  • Revenue impact: Faster deployment = faster time-to-market for revenue features
  • Retention impact: Lower change failure rate = better customer experience = lower churn
  • Cost impact: Faster MTTR = less downtime = lower revenue loss
  • Efficiency: Shorter lead time = more features per engineer = better ROI on headcount

This transformed the conversation.

The Transformation

Now, instead of me begging for budget to “pay down tech debt,” our CFO asks: “What tech debt should we prioritize to maximize business impact?”

He’s not asking because he suddenly loves clean code. He’s asking because he sees the ROI.

Examples of Translation:

Infrastructure Optimization:

  • Engineering: “We need to optimize our cloud infrastructure”
  • Finance: “This will reduce our AWS costs by $400K annually (30% reduction in COGS)”

Developer Tooling:

  • Engineering: “We need better CI/CD pipelines”
  • Finance: “This will reduce deployment time from 2 hours to 15 minutes, enabling 5x faster iteration = competitive advantage = revenue pull-forward”

Quality Engineering:

  • Engineering: “We need to invest in automated testing”
  • Finance: “This will reduce production incidents by 50%, saving approximately $800K in customer support costs, SLA credits, and reputation damage”

My Open Questions:

  1. How do you quantify tech debt in financial terms? What frameworks or models have you used?

  2. What metrics have successfully gotten finance buy-in for quality investments? Which metrics resonated most?

  3. Has anyone gotten explicit tech debt line items in their annual budget? (Not just “maintenance” but actual debt reduction)

  4. How do you handle investments where ROI is real but hard to quantify? (Like developer experience improvements that prevent attrition)

This is the skill I wish I’d learned earlier in my career. And I think it’s the difference between engineering leaders who get resourced properly vs. those who are constantly fighting for budget.

—Keisha

Keisha, you’re describing THE critical skill gap for engineering leaders. I made the exact same mistakes early in my career.

My First CFO Presentation: A Disaster

First time I presented to our CFO at my fintech company, I talked about:

  • Distributed systems and eventual consistency
  • Database sharding strategy
  • Service mesh architecture
  • Observability and distributed tracing

The CFO’s eyes glazed over. Budget request: Denied.

I was frustrated: “Why don’t they understand that this is important?”

But the real question was: Why didn’t I understand what THEY care about?

What Changed: Learning Finance’s Language

I started working directly with our finance team—not just presenting to them, but actually understanding their world.

I learned they care about:

COGS (Cost of Goods Sold):

  • For SaaS companies, this includes infrastructure costs, support costs, hosting
  • Reducing COGS improves gross margin (the metric investors watch)
  • Engineering optimization = COGS reduction = better unit economics

Gross Margin:

  • Revenue minus COGS
  • SaaS investors expect 70-80% gross margins
  • If your infrastructure is inefficient, you’re burning gross margin

Payback Period:

  • How long until an investment pays for itself?
  • Finance wants payback <12 months ideally, <24 months acceptable
  • Multi-year payback is a hard sell unless strategic

OpEx vs CapEx:

  • Operating expenses (ongoing costs like salaries, cloud)
  • Capital expenses (one-time investments in assets)
  • Different tax implications, different budget treatment

How I Frame Engineering Investments Now:

1. Infrastructure Optimization = COGS Reduction

"We’re proposing a $400K investment in cloud optimization.

Current state: $1.2M annual AWS spend, growing 15% per quarter.

Proposed state: Migrate to reserved instances, implement auto-scaling, optimize storage.

Expected outcome: Reduce AWS costs to $800K annually (33% reduction).

Payback period: 12 months. Year 2+ savings: $400K/year.

Gross margin impact: +5 percentage points."

CFO approved immediately.

2. Developer Tooling = Labor Efficiency

"We’re proposing $300K investment in CI/CD pipeline improvements.

Current state: Engineers spend 10 hours/week on deployment, testing, and build issues. That’s 25% of their time.

Proposed state: Automated CI/CD reduces this to 2 hours/week.

ROI calculation:

  • 40 engineers × 8 hours/week saved × $80/hour loaded cost ≈ $25K/week of recovered capacity (~$1.3M/year gross)
  • Conservatively crediting ~$250K of that as realized annual savings
  • Payback period: 14 months
  • Bonus: Faster deployment enables faster time-to-market (harder to quantify but material)"

CFO approved.

3. Quality Engineering = Warranty Cost Reduction

This framing was a breakthrough for me.

In manufacturing, “warranty costs” are the cost of fixing defective products. Finance understands this.

In software, bugs and incidents are our “warranty costs”:

  • Customer support time spent on bug-related tickets
  • Engineering time spent on emergency fixes
  • SLA credits paid to customers for downtime
  • Revenue lost due to poor reliability

"We’re proposing $500K investment in automated testing infrastructure.

Current state: Average 15 P1 incidents per month. Each incident costs approximately:

  • 40 engineering hours (emergency response) = $6K
  • 20 support hours (customer management) = $2K
  • Customer SLA credits = $10K average
  • Total per incident: ~$18K

Monthly incident cost: $270K. Annual: $3.2M.

Proposed state: Reduce incidents by 60% through quality engineering.

Expected outcome: Save $1.9M annually in incident costs.

Payback period: 3 months."

CFO not only approved—he asked why we hadn’t done this sooner.
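That incident math can be laid out as a small cost model. All figures are the illustrative numbers from the pitch above; the hourly rates are the ones implied by the stated per-incident costs ($6K / 40 hours, $2K / 20 hours):

```python
# Incident "warranty cost" model using the illustrative figures from the
# pitch above. Hourly rates are implied by the stated per-incident costs.

ENG_RATE = 150      # $/hour loaded engineering cost ($6K / 40 hours)
SUPPORT_RATE = 100  # $/hour loaded support cost ($2K / 20 hours)

def cost_per_incident(eng_hours=40, support_hours=20, sla_credit=10_000):
    """Fully loaded cost of a single P1 incident."""
    return eng_hours * ENG_RATE + support_hours * SUPPORT_RATE + sla_credit

def annual_incident_cost(incidents_per_month=15):
    return cost_per_incident() * incidents_per_month * 12

def annual_savings(reduction=0.60):
    """Savings from cutting incident volume by `reduction`."""
    return annual_incident_cost() * reduction

print(f"Cost per incident:  ${cost_per_incident():,.0f}")      # $18,000
print(f"Annual cost:        ${annual_incident_cost():,.0f}")   # $3,240,000
print(f"Savings at 60% cut: ${annual_savings():,.0f}")
print(f"Payback on $500K:   {500_000 / (annual_savings() / 12):.1f} months")
```

Running the numbers exactly gives ~$3.24M annual cost, ~$1.94M savings, and a payback of about 3.1 months, matching the rounded figures in the pitch.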

The Key: Speak Their Language

You mentioned learning to translate. Here’s my cheat sheet:

Engineering → Finance Translation:

❌ “We need to refactor”
✅ “Reduce maintenance costs by 40% over next 12 months”

❌ “We need better architecture”
✅ “Enable 3x faster feature development, reducing time-to-market”

❌ “We need to reduce technical debt”
✅ “Prevent 30% productivity decline over next 2 years”

❌ “We need better developer tools”
✅ “Increase engineering efficiency by 25%, equivalent to 10 additional engineers without headcount increase”

❌ “We need to improve reliability”
✅ “Reduce customer churn by 2% through better uptime (= $500K retained ARR)”

The Shadow Program

Here’s my advice to any engineering leader struggling with this:

Shadow your finance team for a week.

I did this 2 years ago. I sat in on:

  • Budget planning meetings
  • Board prep sessions
  • Financial model reviews
  • Investor update preparation

I learned:

  • What metrics they track (ARR, burn rate, runway, gross margin, CAC, LTV)
  • What questions investors ask (unit economics, scalability, efficiency)
  • What keeps CFO up at night (cash runway, unexpected costs, missing targets)
  • How they think about trade-offs (ROI, opportunity cost, risk-adjusted returns)

This completely changed how I present engineering investments.

My Question to You, Keisha:

What’s the best way to quantify “productivity gains” in ways finance teams trust?

I can measure deployment frequency and lead time. But when I say “this will make engineers 30% more productive,” finance skeptically asks:

“How do you know it’s 30%? How do you measure productivity? Will this actually result in 30% more features shipped?”

How do you build credibility for productivity claims without years of before/after data?

—Luis 👨‍💻

Keisha, this is exactly why so many brilliant engineers struggle when promoted to executive roles. Technical excellence ≠ business acumen. But both are required at the executive level.

Let me share the framework I use to translate engineering investments into business impact.

The Four Categories of Engineering ROI

Every engineering investment falls into one of four categories. CFOs understand all four—you just need to frame your work accordingly.

1. Risk Reduction
CFOs understand risk. They manage financial risk every day.

Engineering risks that resonate with finance:

  • Security vulnerabilities → Data breach costs average $4.45M (IBM study)
  • Compliance gaps → SOC2 violation = lost enterprise deals + potential fines
  • Production stability → Downtime costs revenue + reputation

Example pitch: “We’re investing $300K in security improvements. This reduces our probability of data breach from 15% to 3% annually. Expected value of risk reduction: $530K (12% × $4.45M breach cost). ROI: 77%.”
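That expected-value calculation can be sketched directly. Figures are from the example pitch (the exact arithmetic gives $534K and 78%, which the pitch rounds to $530K and 77%):

```python
# Expected-value framing of risk reduction, using the figures from the
# example pitch above ($4.45M is the cited IBM average breach cost).

def risk_reduction_value(p_before, p_after, event_cost):
    """Expected annual loss avoided by lowering the event probability."""
    return (p_before - p_after) * event_cost

investment = 300_000
value = risk_reduction_value(0.15, 0.03, 4_450_000)
roi = (value - investment) / investment

print(f"Expected value of risk reduction: ${value:,.0f}")  # ~$534K
print(f"ROI: {roi:.0%}")
```

This is the same math an insurer uses: probability delta times loss magnitude, which is exactly why it lands with finance.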

2. Cost Avoidance
Preventing future costs is as valuable as generating revenue.

Engineering investments that avoid costs:

  • Infrastructure optimization → Reduce cloud spend growth
  • Automation → Reduce manual operational overhead
  • Quality engineering → Reduce incident response costs
  • Tech debt reduction → Prevent productivity decline

Example pitch: “Our current architecture requires 3 ops engineers to maintain. This $500K migration will reduce that to 0.5 ops engineer (monitoring only). Annual savings: $400K. Payback: 15 months.”

3. Revenue Enablement
Engineering enables product, product drives revenue.

Engineering investments that unlock revenue:

  • Faster time-to-market → Ship revenue features faster than competitors
  • Platform capabilities → Enable new product lines (e.g., API, integrations, enterprise features)
  • Scalability → Support larger customers (enterprise deals)
  • Performance → Better user experience = higher conversion/retention

Example pitch: “This $800K platform investment enables us to sell to enterprise customers (contracts >$100K). Current enterprise pipeline: $2.5M. Close rate without platform: 10%. Close rate with platform: 40%. Expected revenue impact: $750K in Year 1, $2M+ in Year 2.”

4. Efficiency Gains
CFOs love efficiency—doing more with same resources.

Engineering investments that improve efficiency:

  • Developer tooling → Engineers ship more with same headcount
  • Process automation → Reduce manual work
  • Better architecture → Faster feature development
  • Observability → Faster debugging = less engineering time wasted

Example pitch: “This $400K CI/CD investment reduces deployment time from 2 hours to 15 minutes and reduces build failures by 70%. Engineers currently spend 12% of time on deployment issues. This investment recovers 10% engineering capacity—equivalent to 6 additional engineers. Value: 6 × $200K loaded cost = $1.2M annually. Payback: 4 months.”

Real Example: Cloud Migration

I recently pitched a $600K cloud migration project. Here’s how I framed it:

Investment: $600K (Year 0)

Returns:

Year 1:

  • Cost avoidance: $200K savings in data center costs (eliminate colocation fees)
  • Risk reduction: Improved disaster recovery (reduce downtime risk by 80%)
  • Efficiency: 30% faster deployment (eliminate manual infrastructure provisioning)

Year 2-3:

  • Cost avoidance: $300K annual savings (infrastructure + ops overhead)
  • Revenue enablement: 40% faster feature delivery = competitive advantage
  • Scalability: Support 10x user growth without infrastructure bottleneck

Payback period: 2 years
3-year NPV: $1.1M
IRR: 45%

CFO approved.
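The NPV/IRR mechanics behind a pitch like this can be sketched without a finance library. The cash flows below are my own illustrative placeholders, not the post's: the quantified cost savings alone don't reach a $1.1M NPV or 45% IRR, so the stated figures presumably also price in the revenue-enablement upside.

```python
# NPV and IRR from first principles. Cash flows are illustrative
# placeholders: Year 0 investment, then savings plus an assumed
# revenue-enablement contribution in Years 1-3.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is Year 0 (the investment, negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return via bisection (assumes NPV falls as rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-600_000, 300_000, 500_000, 500_000]
print(f"NPV @ 10%: ${npv(0.10, flows):,.0f}")
print(f"IRR: {irr(flows):.0%}")
```

Presenting the discount rate and cash-flow assumptions alongside the result is what makes this credible to a CFO; they can then stress-test the inputs rather than argue the conclusion.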

The Tech Debt Analogy That Works

You mentioned using financial debt as an analogy. I’ve found an even better one that resonates with CFOs:

Physical asset maintenance.

“Imagine if we funded operations but never maintained equipment. Short-term, we save money. Long-term, machines break down and the factory shuts down.”

CFOs understand this because it’s how they think about physical assets.

Then I connect it: “Technical debt is the same. We can defer refactoring and save money short-term. But eventually, the codebase becomes so brittle that we can’t ship features—our factory shuts down.”

This analogy has worked better than any technical explanation.

The Challenge: Guaranteed vs. Probabilistic

Here’s where I still struggle:

Finance teams want guarantees. “This investment will return X dollars.”

Engineering is probabilistic. “This investment will likely return X dollars, but there’s uncertainty.”

I’ve learned to use ranges and confidence levels:

Instead of: “This will increase productivity by 30%.”
Say: “Based on industry benchmarks and our analysis, we expect 20-40% improvement with 70% confidence.”

This mirrors how finance does forecasts—they use ranges too (best case, base case, worst case).

My Challenge Question:

Keisha, you asked about investments where ROI is real but hard to quantify (like developer experience).

Here’s my approach:

1. Proxy metrics:

  • Developer satisfaction scores (correlate with retention)
  • Retention cost savings (replacing engineer costs $200K+ in recruiting, onboarding, lost productivity)

2. Competitive benchmarks:

  • “Top engineering orgs invest 15-20% of engineering time in developer experience. We’re at 5%. This creates competitive disadvantage in talent acquisition.”

3. Strategic vs. ROI:

  • Some investments are strategic bets, not ROI calculations
  • “We’re investing in developer experience not because the ROI is precisely calculable, but because it’s a strategic imperative for talent retention in a competitive market.”

Have you found other ways to justify hard-to-quantify investments?

To Luis’s Question on Productivity Gains:

Luis, you asked how to build credibility for productivity claims.

I use:

  • Pilot programs: Test with one team, measure before/after, extrapolate
  • Industry benchmarks: “DORA research shows elite teams deploy roughly 200x more frequently than low performers. We’re at about 50x that baseline; this investment moves us to 150x.”
  • Conservative estimates: If industry data says 40% gain, I budget for 25% and treat anything beyond that as upside

But I’d love to hear how others handle this.

—Michelle

This thread is making me realize: I’m in the middle of this translation gap, and I didn’t even know it.

Product speaks half engineering, half finance, but is fluent in neither. That’s both an advantage and a disadvantage.

The Advantage: Being the Bridge

As VP Product, I talk to:

  • Engineering (daily): About feasibility, technical constraints, architecture
  • Finance (monthly): About revenue projections, unit economics, burn rate
  • Sales (weekly): About pipeline, customer needs, competitive positioning
  • Marketing (weekly): About positioning, messaging, launch plans

I’m constantly translating between these groups.

When it works, I’m the connective tissue that aligns the organization. When it doesn’t work, everyone thinks I’m oversimplifying their domain.

A Story: The Recommendation Engine Rebuild

Engineering wanted to rebuild our product recommendation engine.

Engineering’s pitch to me:
“The current system is a monolith, tightly coupled to the main application. It uses outdated ML models. The code is hard to maintain and extend. We should rebuild it as a microservice with modern ML infrastructure.”

My internal reaction: “That sounds expensive and risky. Why rebuild something that works?”

Finance’s response when engineering presented:
“So the current system works? Why are we spending 6 weeks and $200K to rebuild something functional?”

Engineering was frustrated. Finance was confused. The initiative stalled.

What Changed: Product as Translator

I worked with engineering to understand the business impact, not just the technical benefits.

Turns out:

  • Current recommendation engine: 12% conversion rate
  • Engineering had run A/B tests with improved ML models: 18% conversion rate
  • Our annual revenue from recommendations: $5M
  • 6% improvement in conversion = $2.5M additional annual revenue

Now I could translate:

Product’s pitch to finance:
“Engineering wants to rebuild the recommendation engine. Current system converts at 12%. Based on A/B testing, new system should hit 18% conversion. That’s $2.5M incremental annual revenue. Investment: $200K. Payback period: 1 month. IRR: Absurdly high.”

Finance: Approved.

The Lesson: Engineering Had the Data, Product Framed the Business Case

Engineering had all the technical justification. They’d even run experiments proving the value. But they framed it in technical terms.

Product translated it into business terms.

Finance approved because they understood the ROI.

Product-Engineering Co-Authorship

This experience taught me: Engineering and product should co-author business cases.

Engineering provides:

  • Technical analysis (what needs to change and why)
  • Risk assessment (what happens if we don’t do this)
  • Effort estimate (time and cost)
  • Success metrics (how we’ll measure impact)

Product provides:

  • Market impact (customer/competitive context)
  • Revenue implications (how this affects revenue/retention)
  • Customer value (how customers benefit)
  • Prioritization context (why now vs. other initiatives)

Together: Compelling story for finance that covers technical necessity AND business value.

The Best Engineering Leaders Think Like Product Managers

Keisha and Luis, what strikes me about your posts is that you’ve learned to think like product managers.

You ask:

  • “Why does this matter to customers?” (before diving into “how do we build it?”)
  • “What’s the business impact?” (before “what’s the technical approach?”)
  • “How do we measure success?” (before “what tools should we use?”)

This is product thinking applied to engineering investments.

And honestly, it makes you better partners. When engineering leaders understand customer value and business impact, product-engineering collaboration gets so much easier.

My Question to Engineering Leaders:

Keisha, you mentioned doing quarterly tech investment reviews with your CFO.

How do you coach engineering leaders (Directors, Staff Engineers) who are brilliant technically but struggle with business framing?

In my org, I see this gap at the Director level:

  • They can explain technical decisions to other engineers perfectly
  • But when presenting to execs or finance, they lose the thread
  • They say “we need to do this because it’s the right technical approach” (which doesn’t land with business stakeholders)

How do you develop this skill in your team?

What I’m Committing To:

This thread has inspired me. Here’s what I’m going to do:

  1. Joint planning sessions with engineering: Instead of product creating the roadmap and throwing it over the wall, co-create with engineering so business value and technical feasibility are integrated from the start

  2. Shadow engineering team: Luis suggested shadowing finance. I’m going to shadow engineering for a week to better understand technical constraints and opportunities

  3. Shared business case template: Create template that engineering and product fill out together, covering both technical and business dimensions

This translation gap is real. But I think the solution is: Stop trying to translate, start speaking a shared language.

—David

This language gap exists between design and finance too. And I’ve learned the hard way that if you can’t articulate dollar impact, you won’t get funded.

The Design Version of This Problem

Design says: “We need to improve UX to reduce friction in the user journey.”

Finance says: “What’s the dollar impact?”

Design struggles to answer.

Sound familiar?

My Learning Journey

Early in my career, I’d pitch design improvements with:

  • Wireframes and mockups
  • User research quotes (“Users said they found this confusing”)
  • Heuristic evaluation (“This violates Nielsen’s usability heuristics”)

Finance response: “That’s nice, but we have more urgent priorities.”

I was frustrated: “Don’t they care about user experience?”

But the real issue: I wasn’t connecting design to business outcomes.

Learning to Quantify Design Impact

Here’s what changed:

Example 1: Checkout Flow Redesign

Old pitch: “Our checkout flow has 7 steps and is confusing users. We should simplify to 3 steps.”

Finance response: “Why? Is checkout broken?”

New pitch: "User research shows 28% of users abandon checkout due to friction. Our conversion rate is 3.2%. Industry benchmark for simplified checkout: 4.5%.

If we reduce checkout friction:

  • Conversion increase: 1.3 percentage points
  • Annual revenue impact: $850K (based on current traffic)
  • Design investment: $40K (3 weeks of design + research + testing)
  • Payback: 17 days"

Finance response: “Approved. Why didn’t we do this sooner?”

Example 2: Accessibility Improvements

This one was harder because accessibility has moral/legal dimensions beyond pure ROI.

Old pitch: “We should improve accessibility. It’s the right thing to do, and it improves usability for everyone.”

Finance response: Not convinced. (Viewed it as nice-to-have.)

New pitch: "15% of the population has disabilities. Our current product excludes them—that’s 15% of our TAM we’re not serving.

Market sizing:

  • Current TAM: $50M
  • Accessibility-blocked TAM: $7.5M
  • Realistic capture with accessible product: 20% = $1.5M revenue opportunity

Additionally:

  • Legal risk: Average ADA lawsuit settlement is $50K + legal fees ($100K+)
  • Enterprise deals: Many large orgs require WCAG 2.1 AA compliance
  • We’ve lost 2 enterprise deals this year due to accessibility gaps (combined value: $400K ARR)

Investment: $120K (design system accessibility overhaul)
Payback: First enterprise deal we close due to accessibility compliance"

Finance response: Approved.

The Pattern: Connect Design to Dollars

Here’s my translation cheat sheet:

Design Language → Finance Language

❌ “Reduce friction”
✅ “Increase conversion by X% = $Y revenue”

❌ “Improve usability”
✅ “Reduce support tickets by X% = $Y cost savings”

❌ “Better information architecture”
✅ “Reduce time-to-task completion = higher user engagement = better retention”

❌ “Consistent design system”
✅ “Reduce design/dev time by 30% = ship features faster = revenue acceleration”

❌ “Accessibility compliance”
✅ “Expand addressable market + reduce legal risk + unlock enterprise deals”

The Flip Side: Finance Needs to Understand Strategic Bets

But here’s where I push back:

Not everything is easily quantifiable. That doesn’t mean it’s not valuable.

Some design (and engineering) investments are strategic bets:

Brand perception: Hard to quantify, but affects customer acquisition and retention long-term
Design innovation: Differentiation that creates competitive moats
User delight: Creates word-of-mouth and customer advocacy
Ethical design: Accessibility, privacy, transparency—often the right thing to do even if ROI is unclear

I’ve learned to frame these differently:

For strategic bets without clear ROI:
“This is a strategic investment in [brand/innovation/trust]. We can’t precisely calculate ROI, but the risk of NOT doing it is competitive disadvantage and erosion of customer trust. We’re proposing [X budget] as a strategic bet.”

Finance appreciates honesty. They’d rather hear “this is a strategic bet” than a hand-wavy ROI calculation.

The Data-Driven Design Approach

Michelle, you mentioned using pilot programs to validate productivity gains. I do the same for design:

Pilot-based validation:

  1. Run design experiment with subset of users (A/B test)
  2. Measure impact (conversion, engagement, satisfaction, task completion time)
  3. Extrapolate to full user base
  4. Present data-driven business case

Example: We A/B tested a redesigned onboarding flow:

  • Control: 45% completion rate
  • Variant: 62% completion rate
  • Statistical significance: p < 0.01

Business case: “We have evidence that new onboarding increases activation by 38%. This translates to X more activated users per month = $Y revenue impact.”

Finance loves data. Pilots provide data.
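For an A/B result like that onboarding test, the significance check is a standard two-proportion z-test. The sketch below assumes 500 users per arm (the post reports the rates and p < 0.01 but not the sample sizes, so the n's here are illustrative):

```python
# Two-proportion z-test for an A/B completion-rate result. Sample sizes
# are assumed (500 per arm); the 45% vs 62% rates are from the post.

import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two proportions (pooled SE)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value_two_sided(z):
    """Two-sided p-value via the normal approximation: 2 * (1 - Phi(|z|))."""
    return math.erfc(abs(z) / math.sqrt(2))

# 225/500 = 45% control vs. 310/500 = 62% variant:
z = two_proportion_z(225, 500, 310, 500)
print(f"z = {z:.2f}, p = {p_value_two_sided(z):.2g}")
```

Showing the test alongside the business case preempts the "is this real or noise?" question before finance asks it.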

My Question to the Group:

Keisha asked: “How do you handle investments where ROI is real but hard to quantify?”

Here’s my approach:

Three-tier framework:

Tier 1: Data-driven ROI

  • Clear metrics, measurable impact, calculable payback
  • Threshold: >100% ROI within 12 months → Auto-approve

Tier 2: Strategic investment

  • Directional data, competitive necessity, risk mitigation
  • Threshold: <$100K, supports strategic goals → Approve with exec judgment

Tier 3: Innovation bet

  • Uncertain outcomes, exploratory, potential breakthrough
  • Threshold: <$50K, time-boxed (3 months), learning-focused → Small batch approvals

This acknowledges that not all valuable work has clear ROI, while maintaining financial discipline.
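The three tiers can even be written out as a triage function, which makes the decision rule explicit and auditable. The thresholds are the ones above; the `Investment` fields and their names are my own assumptions for illustration:

```python
# Three-tier investment triage, with the thresholds from the framework
# above. The Investment fields are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Investment:
    cost: float                # dollars
    roi_12mo: Optional[float]  # 1.0 = 100% ROI within 12 months; None if unquantified
    supports_strategy: bool = False
    time_boxed: bool = False   # scoped to ~3 months, learning-focused

def triage(inv):
    if inv.roi_12mo is not None and inv.roi_12mo > 1.0:
        return "Tier 1: data-driven ROI (auto-approve)"
    if inv.cost < 100_000 and inv.supports_strategy:
        return "Tier 2: strategic investment (approve with exec judgment)"
    if inv.cost < 50_000 and inv.time_boxed:
        return "Tier 3: innovation bet (small-batch approval)"
    return "Needs a stronger business case"

print(triage(Investment(40_000, 1.5, supports_strategy=True)))   # Tier 1
print(triage(Investment(80_000, None, supports_strategy=True)))  # Tier 2
print(triage(Investment(30_000, None, time_boxed=True)))         # Tier 3
```

The ordering matters: a quantified ROI case short-circuits the strategic-judgment path, which mirrors how the framework keeps financial discipline first.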

How do you balance data-driven ROI cases with strategic bets on things that are hard to quantify?

—Maya ✨