CFOs Are Deferring 25% of AI Investments Due to ROI Scrutiny—Is This Prudent or Are We Missing the Next Wave?

I just read the 2026 State of FinOps report and one stat jumped out: CFOs are deferring 25% of planned AI investments to 2027 or beyond due to ROI uncertainty.

This is happening at my company too. We just delayed an AI agent project that would have automated parts of our customer success workflow. Finance couldn’t get comfortable with the unit economics. “Show us proven ROI,” they said.

The Tension I’m Feeling

As VP of Product, I’m caught between two valid perspectives:

Finance view: “We spent tens of millions on cloud transformation in 2015-2018. Some worked, some didn’t. We’re not repeating that mistake with AI. Show us the business case first.”

Product view: “AI is evolving so fast that waiting for ‘proven’ ROI means we’ll be 18 months behind competitors. Some bets require faith, not spreadsheets.”

Both sides have historical precedent.

Looking Back: Who Was Right?

Cloud in 2010-2012:

  • Early adopters (Netflix, Airbnb) paid “innovation tax” but gained massive advantages
  • Fast followers (most enterprises) got better economics but lost competitive positioning
  • Late adopters got crushed by digital-first competitors

Blockchain in 2017-2021:

  • Early adopters wasted millions on projects that went nowhere
  • Fast followers saved money by waiting for use cases to emerge
  • Late adopters avoided a costly mistake entirely

So which pattern is AI following? Cloud or blockchain?

What Makes This Hard

Unlike cloud, AI ROI is genuinely uncertain:

  • Value hard to quantify: “AI customer service agent” sounds great, but does it improve NPS? Reduce churn? Increase CSAT? We don’t know yet.
  • Costs unpredictable: Token costs, inference costs, model training—all over the map
  • Technology still evolving: GPT-5 might make our GPT-4 investment obsolete in 6 months
  • Organizational readiness: Do we even have the talent to implement this well?

When I can’t quantify value and can’t predict costs, how do I make a rational business case?

My Specific Dilemma

Our AI project:

  • Estimated cost: $150K-$300K annual run rate (wide range, high uncertainty)
  • Estimated value: “10-20% reduction in CS ticket volume” (also uncertain)
  • Estimated effort: 2 engineers for 6 months

CFO’s question: “What if it costs $500K and only reduces tickets 5%? Can we afford to learn that lesson?”

My question: “What if it works and our competitors ship it first? Can we afford that lesson?”
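
One way to make the CFO's question and mine comparable is a quick expected-value sketch. Every input below (ticket volume, cost per ticket) is a hypothetical placeholder layered on the ranges above, not real company data:

```python
# Back-of-envelope net value for the AI CS project.
# All inputs are hypothetical assumptions, not real company figures.

tickets_per_year = 100_000   # assumed annual CS ticket volume
cost_per_ticket = 15.0       # assumed fully loaded cost per ticket ($)

def annual_savings(reduction_pct: float) -> float:
    """Dollar savings from deflecting a fraction of tickets."""
    return tickets_per_year * cost_per_ticket * reduction_pct

scenarios = {
    "CFO downside (5% lift, $500K run rate)": annual_savings(0.05) - 500_000,
    "Low estimate (10% lift, $300K run rate)": annual_savings(0.10) - 300_000,
    "High estimate (20% lift, $150K run rate)": annual_savings(0.20) - 150_000,
}

for name, net in scenarios.items():
    print(f"{name}: net ${net:,.0f}/year")
```

With these placeholder numbers the spread runs from roughly -$425K/year in the CFO's downside case to +$150K/year in the optimistic case, which is exactly why neither side can win the argument on a spreadsheet alone.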

What I’m Wondering

For product and engineering leaders who’ve navigated AI investment decisions:

  • How do you build business cases when ROI is genuinely uncertain?
  • What’s your portfolio approach? (X% proven, Y% experimental?)
  • When do you push back on finance vs accept their constraints?
  • How do you know if you’re being prudent vs being left behind?

I want to be responsible with company resources. But I also don’t want to look back in 2027 and realize we missed a critical technology shift because we demanded too much certainty too soon.

Is CFO scrutiny saving us from waste? Or preventing us from competing?


For context: FinOps teams are shifting focus from reactive cost management to proactive investment decisions, but the AI uncertainty makes this hard.

David, I’ve lived through enough hype cycles to see both sides of this clearly. Let me share what I’ve learned.

Technology Evolution Follows Patterns

You’re asking: Cloud or blockchain? The answer is neither exactly—it’s its own thing.

Here’s what I’ve observed:

Early Adopter Phase (2023-2024): Pay innovation tax, gain learning
Fast Follower Phase (2025-2026): Better economics, proven patterns emerge
Mainstream Phase (2027+): Clear best practices, commoditized tooling

We’re transitioning from early adopter to fast follower right now. That’s why CFOs are pumping the brakes.

Portfolio Approach (What Actually Works)

At my company, we use a three-bucket model:

  • 70% Proven Technology: Cloud infrastructure, established SaaS, things with known ROI
  • 20% Emerging Technology: AI for well-defined use cases, newer platforms with some track record
  • 10% Experimental: Cutting-edge AI, new paradigms, high-risk/high-reward bets

This isn’t arbitrary—it’s based on our risk tolerance and our competitive position.

If you’re a market leader: Maybe 60/25/15 (more experimental)
If you’re catching up: Maybe 80/15/5 (more conservative)
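
The bucket splits above are easy to turn into a sanity check against an actual budget. The dollar figure here is an arbitrary example, not a recommendation:

```python
# Split an annual technology budget across the three buckets described above.
# The $5M budget and the splits are illustrative placeholders.

def allocate(budget: float, split: dict[str, float]) -> dict[str, float]:
    """Return dollar allocation per bucket; shares must sum to 100%."""
    assert abs(sum(split.values()) - 1.0) < 1e-9, "split must sum to 100%"
    return {bucket: budget * share for bucket, share in split.items()}

market_leader = {"proven": 0.60, "emerging": 0.25, "experimental": 0.15}
catching_up   = {"proven": 0.80, "emerging": 0.15, "experimental": 0.05}

# e.g. a market leader with a $5M budget: $3.0M proven,
# $1.25M emerging, $750K experimental
print(allocate(5_000_000, market_leader))
```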

The Key Question CFOs Should Ask

Not “what’s the ROI?” (because you don’t know)

Instead: “What’s the cost of learning?”

Your AI CS project:

  • Cost to learn: $150K-$300K + 2 engineers for 6 months
  • Cost of not learning: Competitors gain 6-12 month advantage

If $300K is your “tuition” to understand AI CS automation, is that worth it? At scale, probably yes. At a seed-stage startup, maybe not.

When to Push Back on Finance

I push back when finance is asking for certainty that doesn’t exist.

“Show us proven ROI for AI” is like asking in 2011: “Show us proven ROI for mobile apps.” You can’t. The category is too new.

Better question: “What would success look like? What would failure cost? Can we afford to learn?”

When to Accept Finance Constraints

I accept when:

  • Our team isn’t ready (lacking AI talent/capability)
  • We’re in cost-cutting mode (wrong time for experiments)
  • The “learning” isn’t strategic (cool tech, but not relevant to our business)
  • Alternative approaches exist with better ROI certainty

Your CS automation project: Do you have AI engineers? Have you shipped anything with LLMs? If no, maybe the CFO is right—you’d be paying to learn basics that others have already figured out.

My Advice

Don’t fight for “AI investment” broadly. Fight for specific, strategic bets where:

  1. The learning is relevant to your core business
  2. You have capacity to execute well
  3. The cost of learning is acceptable
  4. Failure wouldn’t be catastrophic

And accept that some quarters, finance says no. That’s okay. It’s their job to protect capital.

It’s your job to advocate for growth. The tension is healthy.


This mirrors broader patterns: FinOps in 2026 is about balancing innovation with fiscal discipline, not choosing one over the other.

David, before worrying about ROI, ask a different question: Can your team actually execute on this?

The Execution Risk Nobody Talks About

Everyone’s debating whether AI is the next cloud or the next blockchain. But here’s what I see in my role: Most AI project failures aren’t technology failures. They’re execution failures.

Teams that can’t ship basic features reliably suddenly think they can ship AI agents. It doesn’t work.

Questions to Ask Before Pushing Back on CFO

  1. Do you have AI/ML engineers? Or are you asking backend engineers to learn LLMs on the job?

  2. Have you shipped anything with AI? Or is this your first rodeo?

  3. Do you understand the operational complexity? Model versioning, A/B testing, monitoring hallucinations, managing prompts?

  4. Is your infrastructure ready? Observability, cost tracking, safety rails?

If you answered “no” to most of these, maybe the CFO scrutiny is a blessing.

A Different Perspective on the Deferral

When finance defers 25% of AI investments, I don’t see it as “missing the wave.”

I see it as: Finance buying you time to build capability through smaller projects.

Instead of a $300K experimental project with high failure risk:

  • Ship 3 smaller AI features first ($50K each)
  • Build muscle, learn patterns, develop expertise
  • Then tackle the big project when you’re ready

The Talent Reality Check

At my company, we had grand AI ambitions. Then we tried to hire AI engineers. Turns out:

  • They’re expensive ($250K+ for experienced ML engineers)
  • They’re scarce (everyone’s hiring them)
  • They’re picky (won’t join if your AI strategy is unclear)

So we pivoted: partner with an AI platform (OpenAI, Anthropic, Replicate) instead of building from scratch. Use their APIs. Focus on integration, not infrastructure.

Much lower cost. Much faster to market. We learned in production, not in research.

My Advice

Use the CFO deferral as a forcing function to:

  1. Start smaller: What’s the $20K version of your $300K idea?
  2. Build capability: Ship something with AI this quarter, even if small
  3. Prove execution: Show you can ship AI reliably before asking for big budgets
  4. Learn economics: Understand actual token costs, not theoretical projections

Then go back to finance in Q3 with: “We shipped X, Y, Z. We learned A, B, C. Now we’re ready for the big project. Here’s the updated business case.”

That’s a much easier sell than: “Give us $300K to experiment with technology we’ve never used before.”
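
On the "learn economics" point: even before shipping, you can bound token spend with a simple model. Every number here (tickets routed to the agent, tokens per ticket, blended price per million tokens) is a placeholder to swap for measured values:

```python
# Rough monthly LLM spend for a CS automation pilot.
# All inputs are placeholder assumptions; replace with measured numbers.

tickets_per_month = 4_000          # assumed tickets routed to the AI agent
tokens_per_ticket = 6_000          # assumed input + output tokens per ticket
price_per_million_tokens = 5.00    # assumed blended $/1M tokens

monthly_tokens = tickets_per_month * tokens_per_ticket
monthly_cost = monthly_tokens / 1_000_000 * price_per_million_tokens

print(f"~{monthly_tokens / 1e6:.0f}M tokens/month ≈ ${monthly_cost:,.0f}/month")
```

With these placeholder inputs, raw token spend comes out around $120/month, which is a rounding error next to a $150K-$300K run rate. That is precisely why measuring actual usage beats theoretical projections: the expensive part is usually the engineers and the operational tooling, not the tokens.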


Related: As teams scale, engineering effectiveness matters more than technical ambition. Capability comes before capital investment.

Coming from the financial services world, I want to add a cautionary tale.

The Blockchain Mistake (That We Made)

2018-2021: Our bank spent $20M on blockchain initiatives. “Strategic imperative” they called it. “Can’t afford to miss this wave.”

What did we get?

  • Consortium memberships that went nowhere
  • Pilots that never reached production
  • “Research” that produced white papers, not revenue
  • A team that eventually disbanded

2022: Entire initiative shut down. $20M written off.

Why It Failed

Looking back, the problem wasn’t blockchain technology. The problem was:

  1. We had no clear use case: “Blockchain for supply chain” sounded smart but didn’t solve a real problem we had

  2. Strategic imperative bypassed scrutiny: Magic words that let projects skip normal ROI requirements

  3. Sunk cost fallacy: “We’ve invested $5M already, can’t stop now” → invested $15M more

  4. No success metrics: “Explore blockchain” had no definition of success or failure

The Result Today

Our company is now EXTREMELY skeptical of AI investments. When product proposes anything with “AI” in the title, finance immediately compares it to blockchain.

That’s not fair to AI (which has clearer use cases than blockchain did). But it’s the reality we created.

What Would Have Prevented This

If finance had asked harder questions in 2018:

  • “What specific problem does this solve?”
  • “How will we measure success?”
  • “What’s our exit criteria if it doesn’t work?”
  • “Who are the customers and what will they pay?”

We might have done smaller experiments with clear success metrics instead of big bets on vague “strategic” value.

My Advice to David

Your CFO’s skepticism might actually be healthy. Instead of fighting it:

Define success criteria upfront:

  • “If CS ticket volume drops 15% within 3 months, we’ll scale it”
  • “If it drops <5%, we’ll kill it”
  • Clear metrics, clear decision points

Start smaller:

  • $50K pilot instead of $300K full build
  • One use case instead of five
  • Prove the concept, then scale

Show, don’t promise:

  • Build a prototype in 4 weeks
  • Show actual results, not projections
  • Let the work speak

If your CS automation actually works, you won’t need to convince finance. The numbers will do it for you.

And if it doesn’t work? You learned for $50K instead of $300K.

That’s prudent, not fearful.

This is hitting way too close to home. My startup failed chasing trendy tech instead of solving customer problems.

My Cautionary Tale

2021-2023: I co-founded a B2B SaaS startup. We built an “AI-powered design tool” because:

  • AI was hot
  • VCs loved it
  • It sounded impressive

What we didn’t do:

  • Talk to customers about whether they wanted AI features
  • Validate that AI solved a real problem
  • Consider whether simpler solutions would work

Result: Burned through $2M, shut down in 2023.

The Brutal Truth

We failed because we built what was EXCITING, not what was NEEDED.

Customers wanted: Better collaboration features, faster exports, more integrations

We built: AI-generated design suggestions (that were usually wrong)

They didn’t care about AI. We did. We failed.

What Your CFO Might Be Protecting You From

When finance says “show us ROI,” they’re really asking: “Do customers actually want this?”

And honestly? That’s a question you should answer before writing any code.

David, your CS automation project:

  • Have you talked to CS reps about their biggest pain points?
  • Is “too many tickets” actually the problem? Or is it “tickets take too long to resolve”?
  • Would $300K be better spent hiring 2 more CS reps?
  • Have you validated that customers prefer AI responses over human responses?

If you can’t answer these questions confidently, your CFO is doing you a favor by making you slow down.

Counter-Point to the Innovation Narrative

Everyone’s afraid of “missing the wave” and “being left behind.”

But you know what’s worse than being left behind? Spending a fortune on something nobody wants.

My startup died because we were so afraid of missing AI that we forgot to build something people would pay for.

My Rule Now

Before building ANYTHING (AI or otherwise):

If you can’t articulate customer value in one sentence, don’t build it.

“CS automation reduces ticket volume 20%” → Okay, but do customers care? Do CS reps want this?

“CS automation lets our team handle 2x more customers without hiring” → Now we’re talking business value

“Customers get answers in 30 seconds instead of 2 hours” → Even better—customer-facing value

If you can’t nail the one-sentence value prop, you’re building for yourself, not for customers.

Sometimes Constraints Save You

I know it feels frustrating when finance blocks things. But looking back, I wish my startup had someone asking these hard questions:

  • Who’s the customer?
  • What’s the problem?
  • Why is this solution better than alternatives?
  • What will they pay?

We had no one asking those questions. We built whatever seemed cool. We failed.

Your CFO is asking those questions. That might be annoying. But it might also save you from wasting months building something that doesn’t matter.