The CFO's Dilemma: When Your Board Wants AI Innovation But Also Wants to See the Receipts

I just walked out of our quarterly board meeting, and I’m sitting here with the most uncomfortable cognitive dissonance. On one hand, three different board members asked why we’re not moving faster on AI. On the other hand, our board-appointed finance committee chair spent 20 minutes grilling me about our current AI spend and what we have to show for it.

Welcome to the CFO’s life in 2026.

Here’s the situation: We’re a Series B fintech company, about 180 people, doing well but not crushing it. Our product team has been pitching AI initiatives for the past year. Some sound genuinely transformative. Some sound like buzzword bingo. The board sees competitors announcing AI features and gets nervous. Meanwhile, I’m getting monthly questions from our lead investor about our burn rate and path to profitability.

Last week, I read that 25% of planned AI investments are being deferred to 2027 as CFOs demand ROI first. That hit close to home because I just did exactly that - I pushed three AI projects to next year’s budget.

The Core Challenge

How do you evaluate ROI on transformative technology that hasn’t proven itself yet? Traditional finance metrics don’t quite work. If I apply our standard hurdle rate (300% ROI on discretionary tech investments), almost nothing AI-related makes the cut. But if I ignore financial discipline entirely, I’m not doing my job.

The projects I approved all had something in common: measurable outcomes within 90 days. The projects I deferred all amounted to "trust us, this will be game-changing in 18-24 months."

What’s Actually Working

I’ve landed on a phased approach that’s keeping both the board and our investors reasonably happy:

Phase 1 (60-90 days): Small investment (K-K), narrow scope, clear success metrics. We’re looking for 15-25% productivity improvement or cost reduction in a specific workflow. If we don’t see it, we kill the project. No sunk cost fallacy.

Phase 2 (90-180 days): If Phase 1 works, we scale it. Now we’re talking K-K. Success criteria: Can we get 3-5 teams using this? Are the productivity gains holding up?

Phase 3 (6-12 months): Full rollout and integration into product or operations. This is where we’d spend K+ and bake it into our systems.
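The Phase 1 kill-or-scale gate can be sketched as a simple check. This is a minimal illustration using the 15-25% improvement target from above; the function and example numbers are my own, not from the post.

```python
def phase1_decision(baseline: float, measured: float,
                    min_improvement: float = 0.15) -> str:
    """Kill/scale decision for a Phase 1 pilot.

    baseline and measured are the same workflow metric
    (e.g. hours spent, cost per transaction), where lower is better.
    """
    improvement = (baseline - measured) / baseline
    if improvement >= min_improvement:
        return f"scale to Phase 2 ({improvement:.0%} improvement)"
    return f"kill project ({improvement:.0%} < {min_improvement:.0%} target)"

# Example: a workflow that took 40 hours/week now takes 30
print(phase1_decision(40, 30))  # scale to Phase 2 (25% improvement)
```

The point of writing it down this starkly is the "no sunk cost fallacy" rule: the threshold is fixed before the pilot starts, so the kill decision is mechanical rather than negotiated.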

Real Example: ML-Powered Fraud Detection

Our payment operations team proposed an ML model to detect fraudulent transactions. Our traditional rules-based system was catching about 60% of fraud but generating tons of false positives.

  • Phase 1 investment: K (data science contractor + 3 weeks of eng time)
  • Timeline: 8 weeks
  • Results: Fraud detection rate improved to 78%, false positives dropped 40%
  • Measurable impact: Chargebacks reduced by K in first month
  • Decision: Approved Phase 2 immediately

That’s the kind of AI project that makes it through my filter. Clear problem, measurable baseline, demonstrable improvement, calculable ROI.
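To make the fraud numbers concrete, here is the same arithmetic as a quick sanity check. The monthly volumes are hypothetical (the post only gives the rates: 60% to 78% detection, 40% fewer false positives).

```python
# Hypothetical monthly volumes to make the reported rates concrete;
# the actual counts weren't in the post.
fraud_cases = 1000                      # actual fraudulent transactions
old_caught = 0.60 * fraud_cases         # rules-based system: 60% detection
new_caught = 0.78 * fraud_cases         # ML model: 78% detection

extra_caught = new_caught - old_caught
print(f"additional fraud caught per 1,000 cases: {extra_caught:.0f}")  # 180

old_false_positives = 500
new_false_positives = old_false_positives * (1 - 0.40)  # 40% reduction
print(f"false positives: {old_false_positives} -> {new_false_positives:.0f}")  # 500 -> 300
```

Each extra case caught maps to an avoided chargeback, which is what makes the ROI directly calculable from the company's own chargeback data.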

The Tension I’m Wrestling With

Here’s what keeps me up at night: Are we being penny-wise and pound-foolish?

The projects that passed my ROI test are all incremental improvements. Better fraud detection. Faster document processing. Improved customer support routing. These are good projects. They save money and time.

But none of them are transformational. None of them create entirely new capabilities or business models. I look at what companies like OpenAI or Anthropic are building, and I wonder if we’re optimizing our way into irrelevance.

My VP of Engineering keeps talking about “strategic optionality” - investments that don’t show immediate ROI but position us for future opportunities. She’s probably right. But I can’t take that to our board with a straight face. “We spent K to have optionality” doesn’t fly when we’re still burning cash.

The Market Context

The macro environment isn’t helping. Global AI spending is projected to hit .5 trillion this year, but only 14% of CFOs report measurable ROI from their AI initiatives. That’s terrifying. It means 86% of us are spending money and hoping it works out.

Our investors are asking harder questions. They want monthly ROI reports. They want to see “revenue impact per dollar spent” as a line item. The days of “we’re investing in AI for the future” without concrete returns are over.

What I Need From This Community

I know we have product leaders, CTOs, engineering directors, and other finance folks here. I’m curious:

  1. How are you balancing innovation budgets with accountability? What frameworks are you using?

  2. What AI investments have you made that didn’t show immediate ROI but proved valuable later? How did you justify them?

  3. For the engineering and product folks: What do you wish your CFO understood about AI investment that we’re probably missing?

  4. For other finance leaders: How are you measuring “strategic value” vs. direct ROI? Or are you just not funding the strategic stuff right now?

I want to be the CFO who enables innovation, not the one who kills it. But I also need to be the CFO who can explain our spend to the board and our investors. Right now, those two things feel in tension.

What am I missing?

Carlos, this resonates deeply. I’ve been in similar board meetings where the tension between innovation and accountability feels almost paralyzing.

I want to push back constructively on your ROI framework, though - not because it’s wrong, but because I worry it might be too narrow for platform technologies like AI.

The Infrastructure Problem

Your phased approach makes perfect sense for discrete AI features. Fraud detection, document processing, support routing - these have clear inputs and outputs. You can measure them.

But here’s what I’m wrestling with: Some AI investments are more like infrastructure than features. They don’t deliver ROI directly. They enable everything else to deliver ROI.

I’ll give you a real example from our current situation. Three years ago, we were debating a cloud migration. The initial business case looked terrible by traditional ROI metrics:

  • Cost: M over 18 months
  • Direct savings: Maybe K annually in data center costs
  • Payback period: 6+ years
  • My CFO’s verdict: “Doesn’t meet our hurdle rate”

We did it anyway (with a lot of board debate). Know what happened? That cloud infrastructure enabled us to:

  • Launch products 3x faster (measured by time-to-market)
  • Scale to handle 10x traffic without proportional cost increases
  • Build entire new product lines that weren’t possible on-premise
  • Attract engineering talent who wouldn’t have joined a company on legacy infrastructure

The ROI was real, but it was multiplicative across everything we built afterward. There was no way to put that in a spreadsheet upfront.

AI is Infrastructure

I’d argue that some AI investments - not all, but some - are the same category. ML platforms, model deployment infrastructure, AI tooling - these don’t generate revenue directly. But they determine whether your AI features succeed or fail.

Your fraud detection example is interesting. You spent K and got great results. But I bet underneath that success was:

  • Clean, accessible data pipelines
  • Engineers who knew how to train and deploy models
  • Infrastructure to serve predictions in real-time
  • Monitoring to catch model drift

If you didn’t already have those, that K project would have been K+ and taken 6 months instead of 8 weeks.

A Framework for Platform vs. Feature AI

I’ve started separating AI investments into two buckets:

Feature AI: Specific use cases with measurable outcomes

  • Your ROI framework works perfectly here
  • 90-day milestones, clear metrics, kill if it doesn’t work
  • Examples: Fraud detection, document extraction, recommendation engines

Platform AI: Infrastructure that enables multiple AI features

  • Longer evaluation window (12-24 months)
  • Success metrics: How many AI projects succeeded because of this? How much faster did they ship?
  • Examples: ML platform, feature stores, model monitoring, AI tooling

The platform investments are harder to justify with traditional ROI, but without them, even your successful feature AI projects become much riskier and more expensive.

The Question I Can’t Answer in Financial Terms

You asked about “strategic optionality” and said it doesn’t fly with boards. I hear you. But here’s my challenge back:

How do you measure the cost of NOT having optionality?

What's the competitive cost when a rival ships AI features in 3 months that would take you 12 because you lack the infrastructure? What's the talent cost when your best ML engineers leave because they spend 60% of their time on deployment plumbing instead of solving interesting problems?

I don’t have a good answer for how to put this in a CFO-friendly spreadsheet. But I know the cost is real.

What I’m Actually Doing

In practice, I’m using a hybrid approach:

  1. 80% of AI budget: Your phased ROI model. Clear outcomes, short timelines, measurable impact.
  2. 20% of AI budget: Platform investments with longer horizons and different success metrics.

I defend the 20% to the board by showing how it makes the 80% more successful. “We shipped 5 AI features this quarter instead of 2 because we invested in deployment infrastructure last year.”

It’s not perfect, but it lets us balance near-term accountability with medium-term platform building.

Your Question About What Finance Is Missing

You asked what CFOs are missing about AI investment. Here’s mine: The binary thinking of “ROI or not” might miss the portfolio effect.

Individual AI projects might show 150% ROI, which fails your 300% hurdle rate. But a portfolio of 10 AI projects might generate network effects, shared infrastructure benefits, and organizational learning that pushes the portfolio ROI above 300%.

Finance is great at analyzing individual investments. The challenge with AI is that value often emerges from the interactions between investments, not just the sum of individual returns.

I don’t know how to model that rigorously. But I know it’s real.
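One way to at least sketch the portfolio effect: model a shared-infrastructure cost that is paid once but benefits every project. All the numbers below are hypothetical, and the `roi` definition (net gain over cost, so a 300% hurdle means `roi >= 3.0`) is my assumption about how the hurdle is computed.

```python
def roi(gain: float, cost: float) -> float:
    """ROI as net gain over cost (a 300% hurdle means roi >= 3.0)."""
    return (gain - cost) / cost

# Hypothetical: 10 projects, each $100K cost and $250K gain in isolation,
# i.e. 150% ROI each, which fails a 300% hurdle.
n, cost, gain = 10, 100_000, 250_000
print(f"standalone ROI: {roi(gain, cost):.0%}")  # 150%

# If half of each project's cost is shared platform work paid only once,
# the portfolio's total cost shrinks while total gains don't.
shared_fraction = 0.50
portfolio_cost = cost * shared_fraction + n * cost * (1 - shared_fraction)
portfolio_gain = n * gain
print(f"portfolio ROI: {roi(portfolio_gain, portfolio_cost):.0%}")  # 355%
```

This doesn't capture network effects or organizational learning, only cost sharing, but it shows how a portfolio can clear a hurdle that every individual project fails.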

Keisha, this resonates deeply. From the CTO seat, I can tell you this shift isn’t just important - it’s existential for scaling leadership.

I see this pattern constantly. Engineering leaders who made it to director or VP level based on technical excellence, then struggle because the job fundamentally changed under them. The skills that got them promoted become the very things holding them back.

A Recent Example from Our Cloud Migration

Last quarter, we kicked off a major cloud migration initiative - moving our entire platform from on-prem to AWS. The temptation for me was enormous. I spent the first decade of my career in infrastructure. I know cloud architecture. I could absolutely design this migration strategy.

But if I did that, several things would happen:

  1. I’d become the bottleneck - every decision waiting for my input
  2. My team wouldn’t develop the strategic thinking muscles they need
  3. I’d have zero time for the other five critical initiatives only I can drive
  4. We’d get my solution, not necessarily the best solution

Instead, I spent three weeks making sure the team was crystal clear on the what/why/when:

  • What: Migrate to cloud with <2% downtime, maintaining compliance, within 18 months
  • Why: Our data center costs are unsustainable, we need global edge capabilities, and our current setup can’t support the geographic expansion strategy
  • When: Complete by Q3 2026 before European expansion launch

Then I stepped back. Completely.

The team came back with an approach I wouldn’t have thought of - a hybrid edge strategy using Cloudflare Workers for certain workloads that cut our projected costs by 30%. Because they owned the how, they felt empowered to innovate.

Trust Unlocks Innovation

Your point about trust is crucial. When leaders hold onto the how, we’re not just stealing implementation work - we’re signaling distrust. And distrust kills innovation.

I’ve noticed something interesting: the more clearly I articulate what success looks like and why it matters, the more comfortable I am letting go of how. When the constraints and outcomes are fuzzy, I want to jump in. When they’re sharp, I can trust.

The Most Overlooked: When

You mentioned when is most overlooked - absolutely correct. I see engineering leaders optimize for perfect technical solutions without considering timing. In business, a good solution now often beats a perfect solution later.

I’ve started asking my directs: “If we had to ship this in half the time, what would the solution look like?” Sometimes the answer is “impossible.” But often, it forces creative thinking about what’s truly essential vs nice-to-have.

The Question About Technical Credibility

I want to address the elephant in the room: how do you stay technically credible without being in the weeds?

My approach:

  • I review architecture decisions at the why level, not implementation details
  • I maintain technical credibility through asking good strategic questions, not proving I can still code
  • I stay current by reading, attending conferences, and occasional deep dives - but not on critical path projects
  • I hire people stronger than me technically and leverage their expertise

The credibility shift is hard. Early in my CTO journey, I felt I had to prove I could still out-code everyone. Now I prove value by asking questions the team hasn’t considered, connecting dots across initiatives, and clearing organizational obstacles.

Your student assessment example is perfect - you added value through strategic framing, not technical implementation. That’s leverage.

For Earlier Career Folks Reading This

If you’re an IC or early manager reading this and thinking “so leaders just stop being technical?” - that’s not it.

Great engineering leaders remain technically informed but focus their energy on strategically critical decisions. They understand technology deeply enough to:

  • Evaluate technical tradeoffs
  • Ask penetrating questions
  • Recognize when technical decisions have strategic implications
  • Hire and develop strong technical talent

But they’re not in the daily implementation because that’s not where they add unique value.

The transition Keisha describes? It’s necessary. And it’s what allows engineering organizations to scale beyond what one brilliant technical leader can personally architect.

This is music to my ears as a product leader. Keisha, you’re describing the kind of engineering leadership that makes product-engineering collaboration actually work.

The Frustration from the Product Side

I can’t tell you how many strategy meetings I’ve been in where we’re trying to discuss what to build and why, and the engineering leader derails into a 30-minute deep dive on how we’d implement it.

Not because it’s relevant to the decision at hand. Because that’s where they’re comfortable.

Meanwhile, critical questions go unanswered:

  • Does this align with our Q2 goals?
  • What’s the opportunity cost vs other initiatives?
  • Do we have the right team composition for this?
  • What risks are we taking on?

These are leadership questions. Technical depth doesn’t answer them.

When Engineering Leaders Speak Product Language

The best engineering partnerships I’ve had were with leaders who thought like you’re describing. They’d join product strategy sessions and ask:

  • “What problem are we solving?”
  • “What’s the user pain point?”
  • “Why is this more important than X?”
  • “What does success look like?”
  • “When do we need this live?”

Then, after we aligned on what/why/when, they’d go away and come back with options for how, including tradeoffs we needed to make.

Example: Last month we were planning a new enterprise feature. The engineering director asked, “Are we optimizing for getting first enterprise customer quickly, or building scalable enterprise foundation?”

That one question reframed our entire roadmap discussion. We realized we were trying to do both, which meant we’d deliver neither well. We chose speed to first customer - and that strategic input from engineering shaped our product strategy.

The Question I Have for You

Here’s what I’m curious about: how do you balance this strategic focus with staying technically credible with your team?

I’ve seen this go sideways when engineering leaders try to be strategic but lose connection with technical reality. They start making commitments that engineering teams can’t deliver, or pushing back on timeframes without understanding what’s truly hard vs what’s just unfamiliar.

How do you maintain enough technical depth to be credible without getting pulled back into the how?

The Product-Engineering Alignment Insight

I’m becoming convinced that product-engineering alignment dramatically improves when both sides focus on what/why/when.

When product brings “here’s what we need built” and engineering responds with “here’s how long it’ll take,” we’re in a transactional relationship.

When product brings “here’s the user problem and business goal” and engineering brings “here’s what’s possible and what the tradeoffs are,” we’re actually partnering.

Your shift toward what/why/when makes you a better partner for product - you’re speaking the same language of outcomes and strategy.

For the ICs Reading This

I want to echo what you asked earlier: what do ICs need from leaders?

From my product perspective, when engineering leaders focus on what/why/when, they:

  • Give teams context to make good decisions independently
  • Create space for engineers to solve problems creatively
  • Connect daily work to meaningful outcomes
  • Remove blockers that only leadership can remove

When they focus only on how, they become:

  • Bottlenecks for decisions
  • Creativity killers (“just do it this way”)
  • Disconnected from why the work matters
  • In competition with AI for implementation work

The AI point you made is crucial - if engineering leadership’s value is just “being really good at implementation,” that value is declining fast. Strategic thinking and outcome focus? Those are becoming more valuable.

As someone with 7 years in and starting to think about the leadership path, this thread is honestly both eye-opening and a little intimidating.

My Current Reality

Right now, my entire focus is on the how. Building clean, scalable systems. Writing good code. Understanding our technical stack deeply. Making smart architectural choices. That’s what I’m evaluated on. That’s what I’m good at.

The idea of shifting away from that feels like… giving up what makes me valuable?

The Question That’s Bugging Me

When in the career progression should this shift start happening?

Should I be thinking about what/why/when now as a senior engineer? Or is this purely a leadership skill that kicks in when you get the manager title?

I’m asking because I don’t want to get promoted to engineering manager and then realize I’ve spent my entire career optimizing for the wrong skills. But I also don’t want to neglect the technical depth that’s currently my job.

The Concern About Losing Touch

Keisha, you mentioned trusting your team with the how. But here’s my concern as someone who might take that leadership path:

If I’m not in the technical details anymore, won’t I lose touch with what’s actually hard vs what’s easy? Won’t I lose credibility with my team when I can’t understand their technical challenges?

I’ve had managers who got too far from the code. They’d make commitments we couldn’t hit because they didn’t understand the technical constraints. They’d push back on estimates without understanding the complexity. The team lost respect for them.

I don’t want to become that manager.

What I Appreciate from My Current Manager

Actually, thinking about it, my current manager does what you’re describing - and I really value it.

He doesn’t tell us how to build things. He gives us context:

  • What business problem we’re solving
  • Why it matters to our users
  • When we need it delivered
  • What constraints we’re working within (performance, security, cost)

Then he trusts us to figure out the how. And when we hit technical challenges, he helps us think through tradeoffs rather than dictating solutions.

The result: I feel ownership over my work. I’m developing problem-solving skills, not just implementation skills. And when I use AI tools to explore solutions faster, that accelerates everything.

So maybe I’m seeing the value of this approach as an IC, even if I’m not sure how to develop the skills myself.

The AI Angle

Your point about AI is making me think. If AI continues getting better at implementation, what’s my long-term value as an engineer?

I think it’s:

  • Understanding what problems are worth solving
  • Knowing why certain technical approaches matter in business context
  • Making good judgment calls on when to optimize vs ship
  • Connecting technical decisions to user and business outcomes

Which is… exactly what you’re describing for leadership. Maybe the shift isn’t as binary as I thought? Maybe great senior ICs need these skills too?

What I’d Want to Know

For those of you further along this path:

  • How do you know if leadership is right for you?
  • What’s the difference between senior IC strategic thinking and engineering leadership?
  • Can you develop what/why/when skills before taking a leadership role?
  • How do you maintain technical skills during the transition?

This conversation is making me rethink what career growth actually means. Maybe it’s not just “get better at implementation” → “manage people who implement.” Maybe it’s “understand implementation” → “understand systems” → “understand strategy” with different roles at each level.

Oh wow, Keisha - design leadership faces almost the exact same challenge, and your student assessment example gave me chills because it’s so parallel to how my startup failed.

The Design Version of This Problem

When I was running my startup, I was the design co-founder. And I made every mistake you’re describing, just from the design side instead of engineering.

I spent months obsessing over our design system. Perfect component library. Pixel-perfect implementations. Beautiful animations. I was so deep in the how of design - the visual details, the interaction patterns, the UI polish.

Meanwhile, we completely missed the what and why:

  • What problem were we actually solving for users?
  • Why would they switch from existing solutions?
  • When did they need specific capabilities vs nice-to-haves?

We built something gorgeous that solved the wrong problem. We launched 6 months late because I kept refining the design. And we failed because I was acting like a senior designer with a founder title, not a design leader thinking about outcomes.

The Parallel Realization

Reading your assessment engine example, here’s what hit me: you could’ve designed the perfect architecture. I could’ve designed the perfect interface. But both of us would’ve been optimizing the wrong thing.

Your team’s hybrid approach - extract the bottleneck, refactor incrementally - is exactly the kind of solution that emerges when people focus on outcomes instead of ideal implementations.

If I’d done that with my startup - focused on what gets us to product-market fit, why users would care, when we needed to ship - we might still exist.

The Question About Strategy Theater

But here’s my concern, and I’m genuinely curious how you handle this:

How do you prevent “strategy theater” where leaders avoid details not because they’re focused on strategy, but because they don’t want to do the hard work of understanding reality?

I’ve seen design leaders (and probably engineering leaders too) use “strategic focus” as an excuse to stay high-level and disconnected. They make sweeping strategic pronouncements without understanding the tactical constraints. They’re not delegating the how - they’re abdicating it.

That’s different from what you’re describing, where you deeply understood the technical context before trusting your team with implementation. But how do you know which side of the line you’re on?

Where Design and Engineering Align

The why is where design and engineering leadership should naturally align, and often don’t.

Engineering leaders focused on how ask: “What’s the technical architecture?”
Design leaders focused on how ask: “What’s the UI pattern?”

But if both ask why:

  • Why does this matter to users?
  • Why is this the right problem to solve?
  • Why now vs later?

Suddenly we’re in the same conversation, working toward the same outcomes.

In my current role leading a design system, I’ve tried to shift to this mindset. Instead of “which components should we build,” I ask “what product team capabilities are we enabling, and why do they need them?”

It’s hard though. The how of design - making things beautiful and functional - is genuinely satisfying. Stepping back to strategy feels less creative somehow, even though it’s probably more impactful.

For the Cross-Functional Folks

I think this conversation matters beyond just engineering. Product, design, and engineering leaders all need to make this shift from how to what/why/when.

The magic happens when all three are aligned on outcomes and trust each other with implementation in their domains. Product doesn’t tell design which UI to build. Design doesn’t tell engineering which code to write. Engineering doesn’t tell product which features to prioritize.

Instead, everyone contributes to what/why/when, then owns their part of the how.

That’s when you build great products. That’s what we failed to do at my startup - we optimized our individual hows without aligning on what/why/when.

Carlos, I feel this tension every single day. Product sits right in the middle of this - engineering wants to build AI because it’s exciting, finance wants ROI, and I’m trying to figure out what customers actually care about.

Your phased approach is smart, but I want to add a lens that’s been critical for us: customer-measured ROI beats internal efficiency metrics every single time when it comes to getting continued funding.

The Pattern We’ve Seen

Over the past year, we’ve proposed about 8 AI features. Three got built, five got deferred or killed. The difference wasn’t the technical sophistication or the internal ROI case. It was whether customers could measure the value themselves.

What Died:

  • AI writing assistant for our platform: We thought it would increase engagement. Built a prototype, some users liked it. But when we asked “how much time does this save you?” or “how much more would you pay for this?” - crickets. Engagement went up 12%, but no one could articulate why it mattered to their business.
  • AI-powered analytics insights: Same story. Cool feature, customers said “interesting” but when renewal time came, no one cited it as a reason to stay.

What Lived:

  • AI reconciliation matching: Customers told us they spend 4-6 hours weekly manually matching transactions. We built AI to do it in 15 minutes. Customers measure the time savings directly. They tell prospects about it. They’ve cited it in renewal conversations as a reason to expand seats.

The difference: Customers can put a dollar value on their own time savings. “This saves my team 20 hours monthly” × their internal hourly cost = clear ROI they can calculate.
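That "time saved × hourly cost" arithmetic is trivial on purpose, which is exactly why customers can do it themselves. A quick illustration, where the hourly cost is a hypothetical input each customer plugs in from their own numbers:

```python
# Customer-side ROI from time savings; the loaded hourly cost is an
# assumed figure and varies by customer.
hours_saved_per_month = 20
loaded_hourly_cost = 75                 # $/hour, assumed
monthly_value = hours_saved_per_month * loaded_hourly_cost
print(f"${monthly_value:,}/month, ${monthly_value * 12:,}/year")  # $1,500/month, $18,000/year
```

Because every input is visible to the customer, the resulting number is credible in their renewal conversation in a way a vendor-supplied ROI slide never is.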

The Framework That’s Working

Before we pitch any AI feature to engineering or finance, we now require:

  1. Jobs to be Done clarity: What painful job is the customer hiring AI to do? “Be more innovative” isn’t a job. “Reduce month-end close from 5 days to 2 days” is a job.

  2. Customer-measurable outcome: Can the customer measure the impact without our help? Time saved, errors reduced, revenue increased - something they can see in their own metrics.

  3. Willingness to pay signal: In customer interviews, when we describe the feature, do they ask about pricing? Do they say “we’d pay extra for that”? Or do they say “that’s nice to have”?

If we can’t get clear answers on all three, it doesn’t make it to the roadmap discussion with you and finance.

The AI Trap We Almost Fell Into

Early on, we were pitching AI features based on what was technically possible, not what solved expensive customer problems. Engineering would say “we could use LLMs to generate insights from usage data” and we’d get excited about the technology.

But when we talked to customers, they didn’t see that as a top-3 problem. They wanted faster transaction processing, better fraud detection, easier reconciliation. Boring problems with expensive price tags.

We killed three “innovative” AI features that would have been engineering wins but product failures. Meanwhile, the “boring” AI - automated data extraction, reconciliation matching, anomaly detection - these are the features customers mention in case studies.

The Question for Finance

Carlos, here’s my question back: If we can show customers are willing to pay specifically for an AI feature, does that change your ROI calculation?

For example, if 30% of our customers would pay an extra /month for AI reconciliation matching, that’s potentially K in new ARR (assuming 30 paying customers). Does that kind of customer-validated revenue potential meet your hurdle rate for a K development investment?

I’m asking because I think there’s a disconnect. Engineering often pitches AI based on internal efficiency (“this will make our support team 20% more productive”). But finance and product should be aligned on customer value (“this will generate in new revenue or prevent in churn”).

What I Wish CFOs Understood

You asked what product folks wish CFOs understood. Here’s mine: Not all AI features with “low” initial adoption are failures.

We built an AI-powered API integration assistant last year. Only 8% of customers use it. By adoption metrics, looks like a failure. But those 8% are all enterprise customers, and three of them cited it as THE reason they didn’t churn when we had a competitor trying to poach them.

The LTV impact of preventing one enterprise churn (K+ annually) justified the entire development cost. But if you only look at adoption rates, you’d kill the feature.

Sometimes the ROI is concentrated in a small segment but extremely high per customer.

Product and finance need to be aligned on: Are we measuring the right things? Or are we measuring what’s easy to measure?

Carlos, I appreciate you starting this conversation. As someone who manages engineering teams and has to translate between technical possibilities and business realities, I live in this tension daily.

What I want to add to this discussion - and I don’t think anyone’s talking about this enough - is the hidden cost of the ROI pressure on engineering teams and talent retention.

The Morale Impact

I’ve watched what happens when AI projects get cut mid-stream. It’s not just about the sunk cost or the lost opportunity. It’s about what it does to the team.

Three months ago, we had an ML engineer working on a recommendation engine. She spent 6 weeks on it, made real progress, early results were promising. Then finance pulled the budget because we couldn’t show clear ROI in the timeline you’re describing.

That engineer left the company two months later. In her exit interview, she said: “I want to work somewhere that values innovation, not just incremental improvements.”

The cost of replacing her: K+ in recruiting, 3-4 months of ramp time for the new hire, lost knowledge about our systems. That’s not in your ROI calculation for killing the project, but it’s real.

The Innovation Budget Framework

Here’s what I’ve implemented, and it’s helped balance finance pressure with team morale:

I split our engineering budget into two categories:

  • Production Budget (85%): These are your ROI-driven projects. Clear business case, measurable outcomes, finance-approved.
  • Innovation Budget (15%): Reserved for exploration, learning, building future capabilities.

The innovation budget has different success criteria. We’re not measuring immediate ROI. We’re measuring:

  • Did we learn something valuable?
  • Did we build a capability that positions us for future opportunities?
  • Did we keep our best engineers engaged and growing?

That recommendation engine? If it had come from the innovation budget, we could have said “interesting learning, not ready for production yet, let’s revisit in 6 months.” Instead, it was a “failed” production project that damaged team trust.

The Real AI ROI Challenge: Measuring What Matters

Your fraud detection example is great because the ROI is obvious. But here’s what I’m struggling with: How do you measure the ROI of team velocity and code quality improvements from AI?

We implemented an AI code review assistant six months ago. Cost: K in setup plus K monthly. Direct ROI calculation: Unclear.

But here’s what changed:

  • Code review time dropped from 8 hours per engineer per week to 5 hours
  • Bug detection in review improved (we’re catching ~15% more issues before production)
  • Junior engineers are learning faster because the AI explains code patterns

So we’ve saved each engineer 3 hours weekly. That’s 156 hours annually per engineer × 40 engineers = 6,240 hours. At /hour loaded cost, that’s K in reclaimed engineering time.
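The back-of-the-envelope math can be sketched in a few lines of Python. The loaded hourly cost below is a placeholder I’ve assumed purely for illustration, not the actual figure:

```python
# Reclaimed engineering time from the AI code review assistant.
# LOADED_COST_PER_HOUR is a hypothetical placeholder, not a real figure.
HOURS_SAVED_PER_WEEK = 3      # review time dropped from 8 to 5 hours
WEEKS_PER_YEAR = 52
ENGINEERS = 40
LOADED_COST_PER_HOUR = 100    # assumed for illustration

hours_per_engineer = HOURS_SAVED_PER_WEEK * WEEKS_PER_YEAR  # 156 hours/year
total_hours = hours_per_engineer * ENGINEERS                # 6,240 hours/year
reclaimed_value = total_hours * LOADED_COST_PER_HOUR

print(f"{total_hours} hours reclaimed ≈ ${reclaimed_value:,} per year")
```

Swap in your own loaded cost and the headline number falls out immediately, which is exactly what makes it dangerous: the spreadsheet looks precise while the underlying behavior (what those hours get spent on) stays unmeasured.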

But here’s the part that doesn’t fit in a spreadsheet: what are engineers doing with those 3 hours? Building features? Sitting in meetings? We don’t actually know. So can I tell you feature velocity increased by the equivalent of those 3 hours? No.

The finance team looks at this and says “unclear ROI.” The engineering team says “this is transformative for our workflow.” Who’s right?

What I Wish Finance Understood

You asked what engineering wishes CFOs understood. Here are mine:

  1. The cost of NOT investing in AI is competitive risk: When our competitors ship AI features and we don’t, we lose deals. That lost revenue is the cost of being too conservative. But it never shows up in your ROI models.

  2. Engineering talent is motivated by growth opportunities: The best engineers want to work on cutting-edge problems. If we only build “proven ROI” features, we lose them to companies doing more ambitious work. The talent retention ROI is massive but hard to measure.

  3. Platform investments enable multiple future projects: Michelle is absolutely right about this. When we killed an ML platform initiative last year for ROI reasons, every subsequent AI project took 2-3x longer and cost more because we were reinventing infrastructure each time.

  4. Innovation culture is fragile: Teams that feel like every project needs immediate ROI stop proposing creative solutions. They optimize for what’s measurable, not what’s possible. You get incremental improvements, never breakthroughs.

A Proposal: Shared Language

I think the core issue is finance and engineering don’t have shared language for “value.”

Finance values: Revenue, cost savings, measurable ROI
Engineering values: Capabilities, velocity, technical debt reduction, future optionality

Both are legitimate. But when we talk past each other, we end up in the situation you’re describing - board wants innovation, CFO wants ROI, team is frustrated.

What if we created hybrid metrics that both sides agree on?

Examples:

  • Feature velocity: How many customer-facing features do we ship per quarter? (Engineering cares about velocity, finance cares about customer-facing)
  • Technical debt ratio: What % of engineering time goes to new features vs. fixing/maintaining old code? (Engineering cares about debt, finance cares about efficiency)
  • Platform leverage: How many projects benefit from shared infrastructure? (Engineering cares about platform, finance cares about leverage)

These aren’t perfect, but they give us a language to discuss value that both sides can work with.

Bottom Line

I support your phased ROI approach for discrete AI features. But I’d push for a separate innovation budget (even if it’s just 10-15% of total AI spend) with different evaluation criteria.

That budget is an investment in competitive positioning, talent retention, and future capabilities. The ROI is real, just harder to measure with traditional finance tools.

Otherwise, we’ll be excellent at optimizing current business while competitors build the future.

Carlos, appreciate you bringing the finance perspective to this forum. I’m in a similar position at a Fortune 500 financial services company, so I understand the pressure you’re under.

I want to add a dimension that’s critical in regulated industries like ours: AI ROI needs to include risk mitigation and compliance value, not just revenue and cost savings.

The Risk Mitigation ROI

In financial services, the ROI calculation for AI often comes down to “what does this prevent?” rather than “what does this generate?”

Example: We implemented an AI system for transaction monitoring to detect potential money laundering patterns. The direct cost savings were minimal - we still have human reviewers. But the value proposition is:

  • Regulatory fine avoidance: A single AML violation can cost M-M in fines
  • Reputation protection: If we miss money laundering activity, the brand damage is massive
  • Audit efficiency: Regulators love seeing systematic, AI-augmented controls

How do you calculate the ROI of “we didn’t get fined”? It doesn’t show up as revenue or direct cost savings, but the expected value is enormous.

When I presented this to our CFO, I framed it as: “We spend K annually on this AI system. A single violation fine would be 20-200x that cost. The expected value calculation is heavily in favor of the investment.”

That got approved where pure efficiency plays might not have.
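The expected-value framing can be made concrete with a small sketch. Every number below is a hypothetical stand-in, since the post’s actual spend and fine figures aren’t reproduced here:

```python
# Expected-value framing for a compliance AI investment.
# All inputs are hypothetical, chosen only to illustrate the shape of
# the argument: reducing the probability of a rare, very large loss.
annual_cost = 500_000            # assumed annual system cost
fine_if_violation = 25_000_000   # assumed single-violation fine
p_violation_without_ai = 0.05    # assumed annual probability without the system
p_violation_with_ai = 0.01      # assumed residual probability with it

expected_loss_avoided = (
    (p_violation_without_ai - p_violation_with_ai) * fine_if_violation
)
net_expected_value = expected_loss_avoided - annual_cost

print(f"Expected loss avoided: ${expected_loss_avoided:,.0f}")
print(f"Net expected value:    ${net_expected_value:,.0f}")
```

Even with conservative probability assumptions, the asymmetry between the annual cost and the fine does most of the work, which is why this framing lands with CFOs when a pure efficiency case wouldn’t.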

The Competitive Necessity Argument

When your board asks “why aren’t we moving faster on AI?”, in financial services I flip that into an ROI argument.

The question isn’t “what’s the ROI of implementing AI?” It’s “what’s the cost of our competitors having AI and us not having it?”

We track competitor AI capabilities quarterly. When a major competitor launched AI-powered cash flow forecasting for their business banking clients, we started losing deals. The lost revenue was measurable.

I went to our CFO with: “Competitor X’s AI feature is cited in 30% of our lost deals in the past quarter. Average deal size is K annually. That’s K in lost ARR we can directly attribute to not having this capability.”

Suddenly the K investment to build our own version has clear ROI - we’re preventing further revenue loss and potentially winning back deals.
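The attribution logic is simple enough to write down. The deal counts and sizes below are hypothetical, since the post’s actual figures aren’t shown here:

```python
# Attributing lost ARR to a missing competitive capability.
# All inputs are hypothetical placeholders for illustration.
lost_deals_last_quarter = 20     # assumed total lost deals
cite_rate = 0.30                 # competitor AI feature cited in 30% of losses
avg_deal_size_annual = 150_000   # assumed ARR per deal

attributed_lost_arr = (
    lost_deals_last_quarter * cite_rate * avg_deal_size_annual
)
print(f"ARR loss attributable to the capability gap: ${attributed_lost_arr:,.0f}")
```

The honest caveat is that “cited in lost deals” is a noisy signal (buyers cite many reasons), so treat the output as an upper bound on attributable loss rather than a precise figure.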

Phased Approach in Financial Services Context

Your 90-day phased gates make sense, but in financial services, I’ve had to adapt them for regulatory requirements:

Phase 1 (90 days): Pilot with synthetic or anonymized data, prove the model works technically
Phase 2 (90-180 days): Limited production deployment with extensive human oversight
Phase 3 (6-12 months): Scale deployment, but maintain audit trails and explainability

The key difference: We can’t just “kill” a project at Phase 2 if it’s partially deployed in production. Regulatory compliance means we need clean exits and audit trails.

This adds cost and complexity to your phased approach, but it’s non-negotiable in our world.

The ROI Framework That Works for Us

We evaluate AI investments across four dimensions:

  1. Direct Financial ROI: Revenue increase or cost reduction (your 300% hurdle rate)
  2. Risk Reduction: Expected value of prevented losses (fines, fraud, compliance issues)
  3. Competitive Positioning: Revenue impact of having/not having capabilities competitors have
  4. Operational Resilience: Can we continue operating if key processes fail? (Think COVID lockdowns - AI that enables remote operations had massive ROI during the pandemic)

Most AI projects don’t clear the hurdle on dimension 1 alone. But when you add dimensions 2-4, the ROI case becomes much stronger.
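One way to operationalize the four dimensions is to roll them into a single value multiple against cost. The post doesn’t prescribe a formula, so this is just a sketch with assumed inputs:

```python
# A hypothetical way to blend the four dimensions into one comparable
# score: total expected annual value as a multiple of cost, so 3.0
# corresponds to the 300% hurdle rate mentioned in the thread.
def ai_investment_score(direct_roi, risk_reduction_ev,
                        competitive_ev, resilience_ev, cost):
    total_value = direct_roi + risk_reduction_ev + competitive_ev + resilience_ev
    return total_value / cost

# A project that fails on dimension 1 alone (1.5x direct value) but
# clears the hurdle once dimensions 2-4 are counted:
score = ai_investment_score(
    direct_roi=300_000,         # dimension 1: direct savings (assumed)
    risk_reduction_ev=250_000,  # dimension 2: expected prevented losses (assumed)
    competitive_ev=200_000,     # dimension 3: revenue at risk (assumed)
    resilience_ev=50_000,       # dimension 4: continuity value (assumed)
    cost=200_000,
)
print(f"Blended value multiple: {score:.1f}x")
```

The design choice worth debating is the implicit equal weighting: a dollar of speculative competitive value is treated the same as a dollar of direct savings, which finance may reasonably want to discount.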

Question for Other Finance Leaders

How are you handling the “cost of not doing it” in your ROI models?

Traditional finance teaches us to evaluate what we’re spending. But with AI, the competitive and risk dimensions mean the real question is often: What’s the cost of our competitors doing this and us not doing it?

I don’t have a perfect framework for this, but it’s increasingly the discussion I’m having with our board and CEO.

To Your Specific Questions

On strategic optionality: I argue that in financial services, “optionality” is actually risk management. The ability to quickly respond to regulatory changes or market shifts has real value. When GDPR hit, companies with flexible data infrastructure adapted faster. That flexibility is worth something.

On innovation vs. accountability: I use a barbell strategy - 80% of AI budget goes to proven ROI projects (your phased approach), 20% goes to strategic/platform investments with longer time horizons. The 20% earns its keep by enabling the 80% to succeed faster and cheaper.

On balancing board pressure: I bring customer and competitive data to board meetings. When board members ask “why aren’t we doing more AI?”, I show them: “Here’s what we’re doing, here’s the ROI, here’s how it compares to competitors.” Data-driven board management works.

The key insight for me has been: Finance isn’t the enemy of innovation. Finance is the discipline that helps innovation survive board scrutiny and market downturns. Our job is to make sure the AI investments we make are defensible and valuable, not just exciting.