I’ve been watching CFO skepticism build for 18 months now—first through quarterly finance reviews, then through increasingly pointed questions about AI spend at board meetings. Last month, our CFO asked me point-blank: “Michelle, where’s the return on our $2M AI infrastructure investment?” I had velocity metrics, developer satisfaction scores, and pilot success stories. What I didn’t have was a clear line to revenue or margin improvement.
Turns out, we’re not alone. Forrester predicts enterprises will defer 25% of planned AI spend to 2027 as CFOs demand proof of tangible returns. Only 15% of AI decision-makers report actual earnings increases from their investments, and fewer than one-third can link AI value to financial growth at all. The era of “let’s experiment and see what happens” is ending: 61% of CEOs report more pressure to demonstrate AI ROI than they felt a year ago.
Is This the Burst or the Maturation?
Here’s the uncomfortable question we need to ask ourselves: Is the AI investment bubble deflating, or is this actually healthy maturation from “AI for AI’s sake” to strategic deployment?
I’ve been in tech long enough to recognize the pattern. We’ve seen it with mobile-first, cloud migration, and microservices—initial enthusiasm, over-investment in proofs of concept that never ship, then a correction phase where only genuine value survives. The difference with AI is the velocity of capital involved: Gartner expects AI software spending to triple to $270B in 2026, yet 95% of enterprise AI initiatives are reportedly failing.
The Pilot Purgatory Problem
From where I sit as CTO, the ROI challenge isn’t that AI doesn’t work—it’s that we’re terrible at moving from pilot to production. Too many organizations (including mine, historically) have collections of successful proofs of concept gathering dust because no one wants to do the hard work of:
- Integrating AI into actual business workflows (not lab environments)
- Retraining teams to trust and use AI outputs
- Rearchitecting processes to leverage AI capabilities
- Building governance frameworks for AI in production
- Measuring actual business impact, not just technical metrics
CFO-led financial rigor isn’t the enemy—it’s forcing engineering leaders to ask better questions. What changes when you must defend AI spend to the board instead of just engineering leadership? You stop measuring “hours saved” and start measuring workflow transformation. You stop tracking productivity gains and start tracking quality improvements. You tie AI investments to business KPIs, not engineering metrics.
Maybe 95% Should Fail
Here’s my contrarian take: Maybe 95% of enterprise AI initiatives should fail if they can’t articulate business value. The problem isn’t that CFOs are killing innovation—it’s that we greenlighted too many projects that had no path to defensible ROI.
The data supports this: Over 50% of companies report no measurable value yet from their AI investments. More than 80% of enterprises have implemented GenAI, yet fewer than 35% can show board-defensible ROI. That’s not a measurement problem—that’s a prioritization problem.
The Strategic Question
74% of CEOs say short-term ROI pressure undermines long-term innovation. I get the concern, but I think it conflates two very different categories of AI investment:
- Exploratory AI research (strategic bets, option value, learning investments) deserves longer runways and different success metrics
- Operational AI deployments (efficiency gains, cost reduction, revenue enablement) should absolutely face ROI scrutiny within 12-18 months
The mistake is treating everything as category one when defending budgets, then measuring everything as category two when reporting results.
What I’m Doing Differently in 2026
After that CFO conversation, here’s how we’re reframing our AI strategy:
Stop measuring: Hours saved, lines of code generated, developer satisfaction scores
Start measuring: Customer retention impact, support ticket resolution time, sales cycle compression, margin improvement
Stop building: AI features because they’re cool or because competitors have them
Start building: AI capabilities tied to specific business outcomes with defined success metrics upfront
Stop treating: All AI spend as “innovation budget” exempt from normal ROI expectations
Start separating: Exploratory AI (10-15% of budget, longer timeline) from operational AI (85-90% of budget, standard ROI gates)
The Real Question
So here’s what I’m wrestling with, and I’d love this community’s perspective:
Are we deferring AI spend because AI doesn’t work, or because we’re bad at choosing what to build and how to measure success?
Is this the deflation of a hype bubble, or the maturation from experimentation to execution? Because those require very different strategic responses, and I think 2026 is the year we have to pick a lane.
What are you seeing at your organizations? How are you navigating the CFO-CTO tension on AI investment? And honestly—how many of your AI “successes” could survive board-level ROI scrutiny?