CFOs Are Deferring 25% of AI Spend to 2027. As a CTO, I’m Both Relieved and Concerned.

I had a complicated reaction when I saw Forrester’s prediction that enterprises will defer a quarter of their planned AI spend into 2027. Relief, because it validates what many of us have been experiencing behind closed doors. Concern, because I worry about what this means for genuine innovation in our industry.

The CFO Pressure Is Real (And Justified)

Let me be direct: the pressure from finance teams is warranted. Only 14% of CFOs see clear, measurable ROI from AI investments to date. When 61% of CEOs feel increased pressure to prove returns compared to a year ago—and 53% of investors expect ROI in just six months—we’re not dealing with unreasonable expectations. We’re dealing with the natural correction after a period of exuberant spending.

At our last board meeting, our CFO presented data showing we’d allocated nearly 3% of our engineering budget to AI tooling over the past 18 months. When pressed on what we got for that investment, I struggled to provide concrete numbers. That conversation was humbling.

The Engineering Reality: Do More With Less (Again)

Here’s what makes this particularly challenging: headcount growth expectations have collapsed from 6% last year to just 2% for 2026. Meanwhile, 75% of CFOs are actually increasing tech spend, with nearly half planning double-digit hikes.

Read that again. We’re being asked to deliver AI-driven productivity gains while our ability to hire has nearly flatlined.

The measurement problem compounds this. According to recent surveys, 86% of engineering leaders can’t confidently identify which AI tools are providing the most benefit. How can we justify spend when we can’t measure impact? How can we defend AI budgets when we can’t point to clear wins?

The Innovation Paradox

Here’s what keeps me up at night: 74% of CEOs say short-term ROI pressure undermines long-term innovation. I see this tension play out every quarter. Our CFO (rightly) demands accountability for AI spending. Our team needs space to experiment with emerging capabilities. These aren’t easily reconciled.

The reality of AI infrastructure investment makes this worse. For every $1 we spend on AI tools, we need roughly $20 in data architecture and infrastructure. That’s capital that doesn’t show immediate productivity gains. That’s investment that pays off over years, not quarters.

When CFOs are modeling 6-month payback periods, how do we make the case for multi-year platform investments?
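
To make that tension concrete, here is a rough back-of-the-envelope payback sketch. The numbers are purely illustrative, not our actual figures; the 20x multiplier is just the rule of thumb cited above.

```python
# Illustrative payback sketch (hypothetical numbers, not our actuals).
# It compares a tooling purchase against a platform investment under the
# roughly $1-in-tools : $20-in-infrastructure rule of thumb, judged
# against a 6-month payback bar.

def payback_months(investment: float, monthly_benefit: float) -> float:
    """Months until cumulative benefit covers the up-front investment."""
    return investment / monthly_benefit

# Tooling: $100k in AI assistants returning ~$25k/month in productivity.
tooling_payback = payback_months(100_000, 25_000)                # 4 months

# Platform: the same $100k of tools plus ~20x in data and infrastructure,
# with benefits that ramp slowly as use cases come online.
platform_payback = payback_months(100_000 + 2_000_000, 60_000)   # ~35 months

print(f"Tooling payback:  {tooling_payback:.0f} months")
print(f"Platform payback: {platform_payback:.0f} months")
# A 6-month lens approves the first and kills the second, even if the
# platform enables far more value over a 3-5 year horizon.
```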

Where I Land

I’m actually hopeful that this deferral represents maturation, not retreat. Maybe we needed the hype cycle to fund exploration. Maybe we need the accountability cycle to drive genuine value creation.

But we need to get better at measurement. We need to get better at separating “AI infrastructure” (long-term ROI) from “AI tooling” (short-term productivity). We need to get better at helping our finance partners understand innovation economics.

The organizations that use 2026 to build foundations—data infrastructure, measurement culture, organizational readiness—will be positioned to accelerate when the market matures in 2027.

Question for fellow CTOs and engineering leaders: How are you handling CFO conversations about AI ROI? What frameworks are you using to balance experimentation with accountability? And how are you measuring impact in ways that finance teams actually trust?

I’m genuinely curious how others are navigating this tension.


Note: Forrester’s 2026 predictions on AI spending deferrals and the various ROI statistics cited reflect broader industry trends reported by Fortune, CIO, and multiple research firms tracking enterprise AI adoption.

Michelle, this resonates deeply. I’m navigating similar conversations in financial services, and I’ve found that the ROI framing needs to expand beyond pure productivity metrics.

In regulated industries, we’re measuring AI ROI on dimensions that CFOs initially don’t consider. When I pitched our fraud detection AI initiative, the CFO’s first reaction was skepticism about the $2.8M investment. What changed the conversation was reframing it: “What’s the cost of not having this capability?”

In our case:

  • Compliance cost avoidance: Manual review processes were scaling poorly and risked regulatory penalties
  • Risk reduction: False positives were degrading customer experience; false negatives exposed us to fraud losses
  • Competitive necessity: Our competitors were deploying similar capabilities—we risked adverse selection of customers

The CFO approved a multi-year business case, not a 6-month payback model. That’s critical. We’re 18 months into deployment, and we’re finally seeing the returns. Under a 6-month ROI lens, this would have been killed.

To your question about measurement frameworks: I’ve started separating AI investments into three buckets with different ROI expectations:

  1. Tactical AI (developer tools, productivity): 3-6 month ROI, measured in velocity/output
  2. Strategic AI (infrastructure, platforms): 18-24 month ROI, measured in capability enablement
  3. Compliance AI (risk, regulatory): Multi-year ROI, measured in cost avoidance and risk reduction

The key is educating CFOs that different investment types need different measurement timeframes. In financial services, “compliance AI” often has negative ROI in productivity terms but massive ROI in risk avoidance.
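
If it is useful, here is a minimal sketch of how that three-bucket classification could be encoded for a portfolio review. The entries and dollar figures are hypothetical, not our actual portfolio.

```python
# Minimal sketch of the three-bucket framework (hypothetical entries).
# Each bucket carries its own ROI horizon and primary metric, so no
# initiative gets judged against the wrong clock.

from dataclasses import dataclass

@dataclass
class AIInvestment:
    name: str
    bucket: str        # "tactical" | "strategic" | "compliance"
    monthly_cost: float

# Expected payback window (months) and the metric each bucket is judged on.
BUCKET_RULES = {
    "tactical":   {"horizon_months": (3, 6),   "metric": "velocity / output"},
    "strategic":  {"horizon_months": (18, 24), "metric": "capability enablement"},
    "compliance": {"horizon_months": (24, 60), "metric": "cost avoidance / risk reduction"},
}

portfolio = [
    AIInvestment("Coding assistants",        "tactical",    40_000),
    AIInvestment("Feature store and ML ops", "strategic",  120_000),
    AIInvestment("Fraud detection models",   "compliance", 150_000),
]

for item in portfolio:
    rule = BUCKET_RULES[item.bucket]
    low, high = rule["horizon_months"]
    print(f"{item.name}: judge on {rule['metric']} over {low}-{high} months")
```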

Counter-question: How do we measure ROI on foundational infrastructure that enables future innovation? Our data lake investment didn’t deliver productivity gains directly—it enabled AI use cases we couldn’t have built otherwise. How do you account for that in CFO conversations?

The $1:$20 infrastructure ratio you mentioned is particularly painful in our world. We’re having to rebuild legacy data systems before we can even think about modern AI applications.

Michelle, I appreciate your honesty here, but I want to challenge the framing slightly. Are we really deferring AI spend, or are we finally getting strategic about it?

From a product perspective, I think the issue is that “AI investment” has become this catch-all category that conflates three fundamentally different things:

  1. AI tooling (GitHub Copilot, coding assistants) - should show immediate productivity ROI
  2. AI infrastructure (data platforms, ML ops) - platform investment with multi-year payback
  3. AI product features (customer-facing capabilities) - should drive revenue or retention

When CFOs see “AI budget” as one line item, they’re comparing tools that should pay back in weeks with platforms that take years. No wonder only 14% can measure clear ROI—we’re measuring the wrong things.

Here’s what we did differently: Last quarter, we paused our AI experimentation budget entirely. Sounds extreme, but hear me out. We had 12 different AI initiatives running. We couldn’t measure impact on any of them. Classic “let a thousand flowers bloom” paralysis.

Instead, we doubled down on 3 high-ROI use cases:

  • AI-powered customer support routing (measurable: ticket resolution time)
  • Automated financial report generation (measurable: finance team hours saved)
  • Intelligent feature recommendations (measurable: activation rate lift)

Result: We can now show our CFO a spreadsheet with actual ROI numbers. Those three use cases returned 4.2x in the first 90 days. That earned us credibility to restart the experimentation budget, but with much tighter hypotheses.
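
For what it is worth, the "spreadsheet" is nothing fancy. It is shaped roughly like the sketch below, with made-up component numbers standing in for our real ones (chosen here so the blended figure lands on the 4.2x I mentioned).

```python
# Rough shape of the ROI spreadsheet (made-up numbers, not our actuals).
# Each use case tracks its 90-day cost and the dollar value of its
# measurable outcome, giving one multiple per line plus a blended total.

use_cases = [
    # (name, 90-day cost, 90-day measured value)
    ("Support routing (resolution time)",    60_000, 250_000),
    ("Automated financial reports (hours)",  40_000, 180_000),
    ("Feature recommendations (activation)", 50_000, 200_000),
]

total_cost = sum(cost for _, cost, _ in use_cases)
total_value = sum(value for _, _, value in use_cases)

for name, cost, value in use_cases:
    print(f"{name}: {value / cost:.1f}x")

print(f"Blended 90-day return: {total_value / total_cost:.1f}x")  # 4.2x
```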

To your question about measurement frameworks: I don’t think engineering should own AI ROI measurement alone. This is a cross-functional responsibility:

  • Engineering measures: build velocity, deployment frequency, infrastructure cost
  • Product measures: feature adoption, user satisfaction, business metrics
  • Finance measures: cost avoidance, revenue impact, efficiency gains

The 86% who can’t measure ROI aren’t just missing data—they’re missing shared definitions of what success looks like.

The tension you describe between CFO accountability and team experimentation is real, but I’d argue it’s revealing dysfunction in how we define value. When Product, Engineering, and Finance don’t agree on what we’re measuring, no amount of AI tooling will show clear ROI.

Question back to you and Luis: Should we create an “AI Portfolio Review” meeting that brings together Engineering, Product, and Finance quarterly to align on measurement and adjust investments? Or does adding more governance just slow us down further?

Michelle, thank you for opening up about this. The vulnerability in sharing your board meeting experience matters—these conversations are happening everywhere, but we’re not talking about them enough.

David’s right that we need better categorization, and Luis’s compliance angle is crucial. But I want to push on something deeper: the ROI pressure is revealing a dysfunctional measurement culture, not just an AI problem.

At my previous company, we had this exact conversation about cloud migration in 2019. CFOs wanted 6-month ROI proof. Engineering knew it was multi-year infrastructure investment. Sound familiar? The organizations that succeeded didn’t just measure better—they changed how leadership thought about technology investment.

Here’s what I’ve implemented at our EdTech startup, and it’s working:

Quarterly “AI ROI Review” with CFO in the room

Not a report-out. An actual working session where:

  • Engineers present adoption and impact data
  • Product shows business metric movement
  • CFO shares what evidence would change their investment decisions
  • We align on what “value” means for each initiative

The first session was uncomfortable. Our CFO didn’t understand why data infrastructure “costs money but doesn’t produce features.” Our engineers didn’t understand why the CFO cared about $/transaction when we were optimizing for developer velocity.

The breakthrough: We stopped talking past each other and started defining shared metrics.

The Framework That’s Working

I’ve separated AI investments into two categories, using language Finance actually understands:

1. AI Infrastructure (CapEx mindset)

  • Multi-year ROI horizon
  • Measured by: capability enablement, reduced future cost, platform leverage
  • Treated like buying a factory—you don’t expect immediate payback

2. AI Tooling (OpEx mindset)

  • Quarterly impact expectations
  • Measured by: productivity gains, cost savings, time-to-market improvement
  • Treated like hiring a contractor—you expect immediate value

This framing clicked for our CFO because it maps to how they already think about capital vs operational spending.
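
To show how the framing changes the numbers, here is a tiny illustrative sketch (hypothetical figures) of how the two categories get attributed in a quarterly review.

```python
# Illustrative sketch of the CapEx vs OpEx framing (hypothetical numbers).
# Infrastructure spend is spread over its expected useful life, the way
# finance already amortizes capital assets, so a quarterly review compares
# like with like instead of charging a multi-year platform to one quarter.

def quarterly_cost(spend: float, useful_life_quarters: int) -> float:
    """Cost attributed to a single quarter."""
    return spend / useful_life_quarters

# AI infrastructure: a $1.2M data platform, useful over ~12 quarters (3 years).
infra_per_quarter = quarterly_cost(1_200_000, 12)   # $100k per quarter

# AI tooling: $90k of seat licenses, expensed in the quarter it is used.
tooling_per_quarter = quarterly_cost(90_000, 1)     # $90k per quarter

print(f"Infrastructure, per quarter: ${infra_per_quarter:,.0f}")
print(f"Tooling, per quarter:        ${tooling_per_quarter:,.0f}")
# Viewed this way, the platform's quarterly cost is comparable to tooling,
# and its payback is evaluated over the full 12 quarters, not one.
```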

But here’s where I differ from David: I don’t think the problem is just that “86% can’t measure.” I think our measurement frameworks are optimizing for the wrong things.

AI productivity tools are typically measured on output quantity (tickets closed, code shipped, features delivered). But what about:

  • Code quality and maintainability?
  • Team sustainability and burnout prevention?
  • Innovation time freed up by automation?

When we only measure quantity, we miss the qualitative value that might matter more long-term.

The Inclusion Angle

Here’s something that doesn’t get discussed enough: how AI ROI pressure affects team diversity and growth.

When CFOs trade headcount budget for AI tooling budget (happening at a shocking rate), we stop hiring. Hiring freezes disproportionately impact underrepresented groups trying to break into tech. We’re optimizing for short-term efficiency at the expense of long-term inclusive excellence.

Michelle, you mentioned headcount growth dropping from 6% to 2%. That’s not just a number—that’s fewer opportunities for diverse talent. That’s less mentorship capacity. That’s reduced organizational learning.

My question for everyone: How do we create measurement frameworks that encourage learning and innovation, not just justification and defensiveness?

And how do we ensure that ROI conversations don’t inadvertently reinforce homogeneous, efficiency-obsessed cultures at the expense of inclusive, innovative ones?


P.S. David, I love the quarterly AI Portfolio Review idea. We should definitely do more governance—but governance that creates alignment, not bureaucracy.

This whole thread is refreshing. Like, genuinely refreshing. It’s rare to see such honest conversations about what’s actually working (and not working) with AI investments.

From my perspective as someone who builds things (rather than managing budgets), I have mixed feelings about the whole “25% deferral” narrative.

What I’m Actually Experiencing

I’ve been using AI coding tools heavily for the past year. GitHub Copilot, Cursor, some newer tools I won’t name. The productivity gains are real—but they’re also really hard to quantify in the way CFOs want.

Example: Last month I shipped a complex design system component that would’ve taken me 3-4 days. With AI assistance, it took about 1.5 days. That’s a clear win, right?

But here’s the thing: I also spent 45 minutes debugging hallucinated CSS that Copilot confidently suggested. I spent another hour learning how to prompt effectively for the specific framework we use. The “1.5 days” doesn’t capture the messy reality.

Half the “AI productivity” in my work is basically fancy autocomplete. The other half is genuinely transformative. How do you put that in a CFO spreadsheet?
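
If I tried to put that one component into a spreadsheet, it would look something like this. The numbers are my rough recollection of a single task, not tracked data.

```python
# Back-of-the-napkin accounting for one component (rough recollections).
# The headline saving shrinks once overhead is counted, and a one-off
# prompt-learning cost arguably shouldn't be charged to a single task.

HOURS_PER_DAY = 8

baseline_hours  = 3.5 * HOURS_PER_DAY  # "3-4 days" without AI, call it 3.5
assisted_hours  = 1.5 * HOURS_PER_DAY  # what it took with AI assistance
debugging_hours = 0.75                 # chasing hallucinated CSS
learning_hours  = 1.0                  # figuring out prompts for our framework

gross_saving = baseline_hours - assisted_hours
net_saving = gross_saving - debugging_hours - learning_hours

print(f"Gross saving: {gross_saving:.1f} hours")
print(f"Net saving:   {net_saving:.1f} hours")
# Still a clear win on this task, but the overhead is invisible in the
# "days saved" number, and it swings wildly from task to task.
```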

The Individual Contributor Concern

Michelle, you mentioned the measurement problem—86% of engineering leaders can’t identify which tools provide value. From down here in the trenches, I can tell you why: the value is highly individual and context-dependent.

Copilot is amazing for boilerplate React components. It’s terrible for our custom internal frameworks. It saves me hours on some tasks and costs me time on others. My teammate has the opposite experience because they work on different parts of the stack.

If leadership can’t measure aggregate impact, how would they even know which tools to keep and which to cut when deferrals happen?

Are We Deferring the Right Things?

Keisha’s point about headcount vs tooling trade-offs hits home. I used to spend ~20% of my time mentoring junior designers on our team. That was professionally fulfilling and important for team growth.

Now we don’t have junior designers (hiring freeze). I spend that time… wrangling AI tools and trying to document design systems well enough that AI can help future hires ramp up faster.

Is that better? Worse? Just different? I honestly don’t know.

The Side Project Perspective

Here’s what’s interesting: On my side projects, I use AI tools aggressively. No compliance concerns, no security review, no ROI justification needed. Just “does this help me ship?”

And it absolutely does. I’ve built and shipped three small web apps in the past 6 months that would’ve taken me a year without AI assistance.

But at my day job? We can barely use any of this stuff. Legal review, security assessment, data privacy concerns, procurement process—by the time we get approval, there’s a newer/better tool we should evaluate instead.

My question: Are we deferring investment in AI capabilities (learning, skilling up, building AI literacy) or just deferring the hype-driven spending (buying every new tool that promises 10x productivity)?

Because if it’s the former, that’s concerning. If it’s the latter, that’s probably healthy.

What I Hope 2027 Brings

Maybe the deferral gives vendors time to build more mature, enterprise-ready tools. Maybe it gives organizations time to figure out data governance and security. Maybe it gives us all time to separate what’s genuinely valuable from what’s just shiny.

I’m cautiously optimistic. The AI tools I use today are noticeably better than what was available a year ago. If that improvement curve continues while enterprises build better foundations, 2027 might actually be the year things click.

Just please don’t let the deferral become an excuse to stop learning and experimenting at the individual level. That’s where the real productivity gains are happening.