The AI ROI Reckoning: Are We in a Correction, or Just Getting Started?

Last week, my CFO walked into my office and asked the question I’d been dreading: “Michelle, when does our AI investment start making money?”

I didn’t have a good answer. And based on the data, I’m not alone.

The Numbers Don’t Lie

According to PwC’s 2026 survey, 56% of companies report zero financial return from AI investments. Even more sobering: MIT research found that 95% of enterprises see no measurable impact on profits despite collectively investing $35-40 billion in AI initiatives.

The AI productivity paradox is real: 80%+ of companies report no productivity gains despite billions invested. And here’s the kicker—when AI tools do “save time,” 37-40% of those savings get consumed by reviewing, correcting, and verifying AI-generated output.

So are we in an AI correction? I don’t think so. I think we’re in an AI maturation—and it’s separating the hype from the value.

The Problem: We’re Measuring the Wrong Things

Time saved ≠ value created. Just because an AI tool completes a task 50% faster doesn’t mean we’re delivering 50% more value to customers. In many cases, we’re just doing the same work faster and filling the saved time with lower-value activities.

I’ve seen this pattern across our engineering, product, and customer success teams. AI tools promise productivity gains, but we haven’t restructured work to actually capture that productivity as business value.

What’s Actually Working

The companies seeing real ROI aren’t using “AI everywhere” strategies. They’re focusing on narrow, specific use cases where:

  1. The input and output are well-defined
  2. The cost of errors is manageable
  3. The savings are measurable in dollars, not minutes
  4. Human expertise augments the AI instead of merely validating its output

For example, our AI-powered customer support ticket routing has a clear ROI: 23% faster resolution times → 15% improvement in CSAT → measurable reduction in churn. We can trace the value chain from AI to revenue.
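That value chain can be traced as a back-of-the-envelope calculation. A minimal sketch in Python, where only the 23% and 15% figures come from the paragraph above and every other number is a placeholder assumption:

```python
# Back-of-the-envelope trace of the ticket-routing value chain.
# Only the 23% and 15% figures come from the example above; every
# other number is a placeholder assumption for illustration.

resolution_hours = 10.0                    # assumed baseline resolution time
resolution_after = resolution_hours * (1 - 0.23)   # 23% faster

csat = 0.70                                # assumed baseline CSAT
csat_after = csat * 1.15                   # 15% improvement

annual_churn = 0.08                        # assumed baseline churn rate
churn_after = 0.075                        # assumed "measurable reduction"
arr = 20_000_000                           # assumed annual recurring revenue

revenue_retained = arr * (annual_churn - churn_after)
print(f"Resolution: {resolution_hours:.1f}h -> {resolution_after:.1f}h")
print(f"CSAT: {csat:.2f} -> {csat_after:.3f}")
print(f"Revenue retained: ${revenue_retained:,.0f}/year")
```

Even with placeholder numbers, the point stands: every link in the chain is a quantity you can measure and audit, which is what makes the ROI defensible.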

But our AI coding assistants? Developers love them. Adoption is 90%+. But I can’t draw a straight line from “developers write code faster” to “we ship more valuable features” to “customers pay us more.”

How I’m Defending the Budget

I’ve changed how we talk about AI investments with our CFO. Instead of “productivity gains” and “time savings,” we’re using three metrics:

  1. Revenue impact: Can we trace this AI investment to customer acquisition, retention, or expansion?
  2. Risk reduction: Does this AI prevent errors, compliance issues, or security vulnerabilities?
  3. Strategic enablement: Does this AI unlock capabilities we couldn’t offer before?

If an AI investment doesn’t clearly map to one of these three, we cut it. This framework helped us reduce AI spend by 30% while protecting the initiatives that matter.

The Real Question

61% of CEOs are facing pressure to show AI ROI. Half of organizations in financial services and healthcare are deferring planned AI outlays. The 2026-2030 period is the crucial test for AI commercialization.

So I’m curious: What metrics are you using to prove AI value to your CFO? Are you seeing this same productivity-without-profit paradox? Or have you cracked the code on AI ROI?

Because right now, I feel like we’re all trying to justify AI spend with better stories instead of better data. And I’m not sure that’s sustainable.


Looking forward to hearing how other tech leaders are navigating this.

I really appreciate this framing, Michelle—maturation vs. correction is exactly right. This isn’t a bubble popping; it’s the market figuring out what AI is actually good for.

Our AI Adoption Reality

My team has 90%+ adoption of AI coding assistants. Developers swear by them. But when I looked at the data last quarter, here’s what I found:

  • ✅ Velocity increased 15% (more PRs merged per sprint)
  • ❌ Bug density increased 20% (more defects per 1,000 lines of code)
  • ⚠️ Code review time increased 50% for senior engineers

That third point is the hidden cost nobody talks about. Our most experienced engineers—the ones we need designing systems and mentoring juniors—are now spending half their time reviewing AI-generated code that looks correct but has subtle logic errors or security issues.
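Those three deltas can be dropped into a toy throughput model to see whether the +15% velocity survives the added review and defect load. The three percentages are real; every absolute baseline below is an assumption:

```python
# Toy throughput model for the three deltas above. The percentages
# (+15% velocity, +20% bug density, +50% review time) are from the
# data; every absolute baseline below is an assumption.

prs_per_sprint = 100          # assumed baseline PRs merged per sprint
value_per_pr_hours = 4.0      # assumed engineer-hours of value per PR
senior_review_hours = 40.0    # assumed senior review hours per sprint
defects_per_sprint = 20       # assumed escaped defects per sprint
fix_hours_per_defect = 3.0    # assumed hours to fix one escaped defect

output_gained = prs_per_sprint * 0.15 * value_per_pr_hours             # +15% velocity
overhead_added = (senior_review_hours * 0.50                           # +50% review time
                  + defects_per_sprint * 0.20 * fix_hours_per_defect)  # +20% bug density

net_hours = output_gained - overhead_added
print(f"Net hours per sprint: {net_hours:+.1f}")
```

Under these assumptions the net is still positive, but the sign flips with a larger review baseline or costlier defects, which is exactly why the headline velocity number alone proves nothing.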

The Junior Developer Problem

Here’s what keeps me up at night: Our junior developers are getting really good at prompting AI and really bad at debugging.

When something breaks, they ask the AI to fix it. When the fix doesn’t work, they ask again. They’re productive on day one—which looks great to our CFO—but 18 months in, they haven’t built the foundational problem-solving skills that make senior engineers valuable.

The AI skills paradox is real: assistants speed up work but may prevent mastery. I’m genuinely worried we’re creating a generation of developers who can’t function without AI guardrails.

How We’re Adapting

I’ve started treating AI tools differently based on experience level:

For senior engineers (5+ years):

  • ✅ AI as an accelerator: use it to move faster on well-understood problems
  • ✅ AI for exploration: prototype new approaches quickly
  • ✅ AI for grunt work: boilerplate, test generation, documentation

For junior engineers (<2 years):

  • ⚠️ AI with training wheels: required to explain AI-generated code in code review
  • ⚠️ AI-free zones: core features must be built without AI assistance first
  • ⚠️ Debugging without AI: learn to read stack traces and use debuggers, not prompts

It’s slowing down our short-term velocity, but I think it’s the right long-term investment.

The ROI Question

To your question about metrics, Michelle—I’m using capability development as an AI ROI metric alongside revenue and efficiency. Specifically:

  • Time-to-productivity for new hires (AI should accelerate onboarding)
  • Promotion readiness (AI shouldn’t delay skill development)
  • Bus factor improvement (AI should spread knowledge across the team, not concentrate it)

If AI tools help junior devs ship code but prevent them from getting promoted to senior roles, that’s not ROI—that’s technical debt in human capital form.

Question for the group: How are others balancing short-term productivity gains with long-term capability building? Or am I overthinking this and the market will adapt?

I appreciate the optimism in this thread, but I’m going to be the skeptic here.

“AI Correction” Might Be Too Generous

Michelle, you’re calling this a maturation. Keisha, you’re seeing it as market adaptation. I think we might be witnessing a bubble pop, and we’re all trying to rebrand it as something more palatable.

Let me share some hard numbers from my world.

Our AI Investment Reality

Total AI spend in 2025: $180,000

  • GitHub Copilot for 40 developers
  • ChatGPT Enterprise
  • AI-powered code analysis tools
  • AI meeting summarization
  • Productivity tracking with AI insights

Revenue impact we can measure: $0
Cost reduction we can measure: $0
Developer satisfaction improvement: Subjective and probably inflated

I sat in a budget review two weeks ago, and my CFO asked me point-blank: “Luis, can you point to a single dollar of measurable return from our AI investments?”

I couldn’t.

The Vendor Problem

Every AI vendor promises 30-50% productivity gains. None of them deliver measurable value we can actually capture. It reminds me of the early cloud migration days when every vendor claimed “70% cost reduction” and “infinite scalability”—and then reality hit.

Here’s a specific example from our team:

  • AI code completion saves: ~10 minutes per developer per day
  • Debugging AI-generated code costs: ~2 hours per week per senior engineer
  • Net productivity impact: Negative
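For what it's worth, here's the rough weekly ledger behind that line. The 10 min/day saving, 2 h/week debugging cost, and 40-developer headcount are above; the senior headcount and cost weighting are assumptions, and the raw hours only go negative once senior time is cost-weighted:

```python
# Rough weekly ledger behind the "net negative" line. The 10 min/day
# saving, 2 h/week debugging cost, and 40-developer headcount are
# from above; the senior headcount and cost weighting are assumptions.

developers = 40
minutes_saved_per_day = 10
workdays_per_week = 5
hours_saved = developers * minutes_saved_per_day * workdays_per_week / 60

senior_engineers = 8          # assumed
debug_hours_each = 2.0
hours_lost = senior_engineers * debug_hours_each

# Raw hours alone come out positive; "negative" only holds once you
# weight senior time more heavily (the 3x multiplier is an assumption).
senior_cost_multiplier = 3.0
weighted_net = hours_saved - hours_lost * senior_cost_multiplier

print(f"Hours saved/week: {hours_saved:.1f}")
print(f"Senior hours lost/week: {hours_lost:.1f}")
print(f"Cost-weighted net (hour-equivalents): {weighted_net:.1f}")
```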

One of my senior architects told me last week: “I spend more time explaining to the AI what I want than I would spend just writing the code myself. And then I have to fix what it gives me anyway.”

The Q2 Budget Reckoning

My CFO is questioning all AI spend for Q2. Not just the marginal tools—everything. And honestly? I don’t have a strong defense.

We’re not seeing:
  • ❌ Faster time-to-market
  • ❌ Higher quality releases
  • ❌ Reduced support tickets
  • ❌ Better customer satisfaction
  • ❌ Lower operational costs

What we are seeing:

  • Developers who feel more productive (but aren’t shipping more value)
  • Meetings that get summarized (but not shorter or more effective)
  • Code that gets written faster (but takes longer to review and debug)

The Uncomfortable Question

Michelle asked what metrics we’re using to prove AI value. Here’s my counter-question:

Is anyone actually getting ROI, or are we all pretending?

I’m genuinely asking. Because from where I’m sitting, this feels less like “separating hype from value” and more like “collectively agreeing to ignore the data because we don’t want to admit we made a bad bet.”

Maybe I’m wrong. Maybe we’re just measuring the wrong things, like you said. But 95% of enterprises seeing no profit impact suggests this isn’t a measurement problem—it’s a value problem.

To the other leaders here: If you’ve actually achieved measurable AI ROI—not “developers feel faster” but “revenue increased” or “costs decreased”—I’d love to hear specifics. Because I need to either defend this budget or cut it.

Luis, I hear your frustration—and I think you’re asking the right questions. But I’m going to push back on the framing.

The ROI Problem Is a Definition Problem

The reason 95% of enterprises see no profit impact isn’t because AI doesn’t create value. It’s because we’re defining and measuring ROI incorrectly.

Let me offer a different framework.

Two Types of AI Investments

Type 1: Operational AI (Cost Reduction)
Goal: Do the same work faster/cheaper
Expected ROI timeline: 3-6 months
Examples: Code completion, meeting summaries, automated testing

Type 2: Strategic AI (Revenue Generation)
Goal: Do new things that create customer value
Expected ROI timeline: 12-24 months
Examples: AI-powered features, predictive analytics, personalization

Most companies—including Luis’s, I suspect—are investing in Type 1 but measuring it like Type 2, or vice versa. This creates a measurement mismatch that makes everything look like failure.

Our AI Success Story

Here’s a concrete example from our fintech product:

Investment: $120K in AI-powered customer insights platform
Timeline: 8 months from pilot to production
Result: $2M in new revenue from upsells identified by AI

But here’s the key: Our CFO almost killed this project at month 4 because we had spent $80K and had “no ROI.” The value didn’t show up until month 7 when sales started closing the AI-identified opportunities.

If we’d measured this with a 6-month ROI window, it would look like a failure. With an 18-month window, it’s our best-performing product investment of 2025.
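The window effect is easy to see in a sketch. The $80K-by-month-4 spend, $120K total, and $2M revenue are the figures above; the month-by-month revenue ramp is an assumed linearization:

```python
# Same project, two measurement windows. The spend and revenue totals
# are the figures above; the month-by-month revenue ramp is an
# assumed linearization, not actual booking data.

def cumulative_spend(month):
    # Two datapoints from above: $80K by month 4, $120K by month 8.
    return 80_000 if month < 8 else 120_000

def cumulative_revenue(month):
    if month < 7:
        return 0                                   # nothing closes before month 7
    return 2_000_000 * min(1.0, (month - 6) / 12)  # assumed linear ramp to month 18

def roi(month):
    spend = cumulative_spend(month)
    return (cumulative_revenue(month) - spend) / spend

print(f"ROI at month 6:  {roi(6):+.0%}")   # looks like a total failure
print(f"ROI at month 18: {roi(18):+.0%}")
```

The same project reads as a write-off or a winner depending on nothing but where you draw the measurement line.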

The Patience Gap

CFOs want ROI in 6 months. AI value often compounds over 18-24 months. This is the fundamental tension.

According to Fortune’s CFO survey, 53% of investors expect positive AI ROI in six months or less. But the reality is that translating AI capabilities into fully automated business processes is far more complex than most assume.

Companies are treating AI investments like SaaS tools (immediate productivity gains) when they should be treating them like R&D investments (long-term capability building).

How We Bifurcated Our AI Budget

I convinced our CFO to split our AI budget into two categories:

Operational AI Budget (~60% of total):

  • Short ROI horizon (6 months)
  • Measured by cost reduction and efficiency
  • Easy to cut if not delivering
  • Examples: Developer tools, automation, summarization

Strategic AI Budget (~40% of total):

  • Long ROI horizon (18 months)
  • Measured by revenue impact and new capabilities
  • Protected from quarterly budget cuts
  • Examples: AI features, customer intelligence, predictive analytics

This framework let us cut the tools that weren’t working (Luis, I’d kill your meeting summarization spend immediately) while protecting the investments that need time to mature.

Three Metrics That Save AI Budgets

To answer Michelle’s original question, here are the three metrics I use to justify AI spend to our CFO:

1. Customer-facing AI (Revenue):

  • ARR generated by AI-powered features
  • Upsell revenue from AI insights
  • Churn prevented by AI-driven interventions

2. Risk-reduction AI (Cost Avoidance):

  • Fraud prevented
  • Compliance violations avoided
  • Security incidents caught early

3. Strategic enablement AI (Optionality):

  • New markets we can enter with AI capabilities
  • Competitive moats created by AI
  • Talent we can attract with AI tooling

If an AI investment doesn’t clearly map to one of these three categories with a defined timeline, we don’t fund it.

The Hard Truth

Luis is right that most AI investments aren’t delivering ROI. But I think the solution isn’t to abandon AI—it’s to get much more selective about which AI bets we make and how we measure them.

The correction isn’t “AI doesn’t work.” It’s “AI works for specific things, and we need to stop pretending it works for everything.”

As someone who’s failed a startup, this whole conversation feels deeply familiar.

Hype → Reality → Correction → “We were measuring the wrong things!” → Pivot

I’ve lived this cycle. It’s not fun. But it’s also how innovation actually works.

The Designer’s Perspective on AI ROI

Here’s my experience with AI tools as a design systems lead:

AI makes me 3x faster at executing ideas.
AI makes me 0x better at having ideas.

Let me explain what I mean.

The Hidden Value Problem

Last month, I used AI to generate 50 design variations for a new component in under an hour. Showed them to my team. We ended up choosing the one I had sketched by hand in 10 minutes before the AI session even started.

The AI saved me time on a task that turned out to not be valuable. I optimized the wrong part of the process.

And I think that’s what’s happening with AI ROI across the industry.

We’re using AI to go faster on things that didn’t need to be faster.
We’re not using AI to do things we couldn’t do before.

The Creative Struggle Question

Here’s a philosophical question I keep coming back to: If AI removes all friction from work, do we lose the creative struggle that produces insight?

Some of my best design ideas came from being stuck—from trying 10 approaches that didn’t work and finally understanding the problem well enough to find one that did.

When AI gives me 50 variations instantly, I skip that learning process. I might get an okay solution faster, but I don’t develop the deeper understanding that leads to breakthrough work.

The AI skills paradox research that Keisha mentioned? I think it applies beyond coding. It might apply to any creative work.

Why “No ROI” Might Be the Market Speaking

Luis asked if we’re all pretending AI works when it doesn’t. I think the answer is more nuanced.

AI absolutely works—at making us faster at execution.
But execution was never the bottleneck.

The bottleneck is:

  • Understanding what problem to solve (product)
  • Designing the right solution (architecture)
  • Coordinating across teams (process)
  • Making tradeoffs between speed and quality (judgment)

AI doesn’t really help with any of those things. So when we invest in AI and don’t see ROI, maybe that’s the market telling us: You’re automating the wrong parts of the value chain.

What the Correction Might Mean

David’s framework about operational vs. strategic AI resonates with me. But I’d add a third category:

Augmentation AI: tools that make humans better at human things (creativity, judgment, insight)
Replacement AI: tools that do tasks humans used to do (coding, writing, designing)

I think the correction will show us that Augmentation AI has ROI (because it makes valuable humans more valuable) while Replacement AI often doesn’t (because it optimizes less valuable tasks).

My Hope for This Correction

I actually think this “no ROI” phase is healthy. It’s forcing us to ask:

  • What work is actually valuable?
  • What parts of that work should AI handle vs. augment vs. stay away from?
  • What skills do we need to build in humans that AI can’t replace?

The companies that figure this out will thrive. The ones that just keep throwing money at “AI everywhere” strategies will keep reporting zero ROI.

To Michelle’s original question: The metric I’d use to prove AI value is impact per hour of human attention—not just “time saved” but “value created per unit of focused human work.”
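One hypothetical way to operationalize that metric (every number in this sketch is invented for illustration):

```python
# Hypothetical sketch of "impact per hour of human attention".
# Every number here is invented for illustration.

def impact_per_hour(value_created_usd, focused_human_hours):
    """Value created per unit of focused human work, not time saved."""
    return value_created_usd / focused_human_hours

before = impact_per_hour(50_000, 100)        # baseline: $500 per focused hour
after_fast = impact_per_hour(50_000, 80)     # AI saves time, same output
after_better = impact_per_hour(90_000, 100)  # AI raises output, same hours

print(f"Baseline ${before:,.0f}/h, faster ${after_fast:,.0f}/h, better ${after_better:,.0f}/h")
```

The "faster" tool only pays off if the freed hours go somewhere valuable; the "better" tool raises the metric directly.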

Because at the end of the day, human attention and creativity are the scarce resources. If AI isn’t making those more valuable, what’s the point?