I need to share something that’s been keeping me up at night as VP Product.
Six months ago, I pitched our leadership team on AI coding assistants. The data was compelling: developers would save 3-6 hours per week. We’d ship faster, reduce cycle times, unlock velocity. I got budget approval. The team adopted the tools enthusiastically—we’re at 80%+ adoption now.
Here’s what I didn’t expect: our sprint velocity is exactly the same. Our deployment frequency hasn’t changed. We’re shipping the same number of features per quarter.
Where are those 180 hours per month going?
The Business Reality
Our CFO is asking hard questions about ROI. We invested in tools, training, and process changes. Developers genuinely feel more productive, and the surveys confirm it. But when we look at delivery metrics (cycle time, features shipped, time-to-market), nothing has moved.
We’re all coding faster but shipping the same.
The Bottlenecks I’m Seeing
From my seat, I see four major friction points:
1. Review queues exploding
Developers are writing more code, but our reviewers are overwhelmed. PRs are bigger, more frequent, and taking longer to review. The bottleneck shifted from writing to reviewing.
2. Quality gates catching more
Security scans, automated tests, manual QA—all catching more issues. We’re generating code faster, but also generating bugs faster. Our QA team feels like they’re drinking from a firehose.
3. Planning unchanged
We didn’t adjust our sprint planning, story sizing, or roadmap processes. We’re executing tasks faster but not capitalizing on that speed. The product planning cycle is still the same.
4. Coordination tax
More code means more merge conflicts, more integration issues, more time in sync meetings. The soft costs of increased output are real.
The Product Manager’s Dilemma
So what do we do?
- Hire more reviewers? Expensive, and it only scales linearly with headcount
- Lower our quality bar? Absolutely not—technical debt is already a concern
- Change our processes? Yes, but where do we start?
- Accept this is the new normal? Hard to justify to finance when the promise was productivity gains
What I’m Learning
Individual productivity is different from organizational productivity. We optimized for individual output without thinking about the system. It’s like making one assembly line station faster—the whole factory still moves at the same speed.
The research backs this up. A recent study showed that while developers save 3.6 hours/week individually, organizations see 0-10% delivery improvement at the system level. Teams with high AI adoption complete 21% more tasks and merge 98% more PRs, but PR review time increases 91%. (Source)
We shifted the bottleneck, we didn’t eliminate it.
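The assembly-line analogy can be made concrete with a quick back-of-envelope model. All the stage numbers below are illustrative assumptions, not our real data: the point is just that end-to-end throughput is capped by the slowest stage, so speeding up coding alone leaves delivery flat.

```python
# Back-of-envelope pipeline model: delivery throughput is limited by the
# slowest stage. Stage capacities below are hypothetical (features/week).

def throughput(stages):
    """Sustainable features per week = capacity of the slowest stage."""
    return min(stages.values())

before = {"coding": 10, "review": 6, "qa": 6}
after  = {"coding": 15, "review": 6, "qa": 6}  # coding sped up 50% by AI tools

print(throughput(before))  # -> 6: review/QA were already the constraint
print(throughput(after))   # -> 6: still 6; the bottleneck just got more visible
```

Under these assumed numbers, a 50% coding speedup yields a 0% delivery improvement, which is roughly the pattern our metrics are showing.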
Questions for This Community
I know there’s deep engineering, design, and leadership expertise here. I’m hoping to learn from your experiences:
- What organizational changes did you make to actually capture AI productivity gains at the team/company level?
- Is this a people problem, process problem, or tool problem? Or all three?
- Should we measure individual productivity differently now? Are our metrics lying to us?
- Anyone else facing CFO pressure on AI tool ROI? How are you demonstrating value when velocity metrics are flat?
I’m curious if this resonates with others, or if we’re doing something fundamentally wrong in how we adopted these tools.
Cross-posted from my reflections on Product-Market Fit vs Execution Speed. Would love to hear the tianpan community’s perspective.