When my team started using AI coding assistants six months ago, our individual developer velocity metrics looked incredible. Pull request frequency shot up. Lines of code per sprint doubled. Engineers were thrilled—they felt like superheroes writing code at lightning speed.
Then I looked at our actual delivery metrics. Customer-facing feature velocity? Up only 8%. Time from idea to production? Barely budged. We had optimized one part of the system beautifully, but the overall outcome barely moved.
Turns out, we were experiencing a textbook case of what Thoughtworks documented: coding became roughly 30% faster with AI assistants, but delivery improvement was only 8% when you factor in testing, code review, environment waits, and dependency management.
The Math Doesn’t Lie
Here’s the uncomfortable truth: coding represents only about 15-25% of the software development lifecycle. Even if AI made coding infinitely fast, Amdahl’s Law says total cycle time would shrink by at most 15-25%.
The other 75-85% is everything else:
- Code review and approval cycles
- Test suite execution time
- Security scanning and compliance checks
- Deployment pipeline delays
- Cross-team dependency coordination
- Environment provisioning and debugging
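Amdahl’s Law makes the ceiling easy to check. A minimal sketch, using only the figures from this post (the 15-25% coding share and the 30% coding speedup); everything else is arithmetic:

```python
def amdahl_speedup(fraction, stage_speedup):
    """Overall speedup when `fraction` of the cycle is
    accelerated by `stage_speedup`x (Amdahl's Law)."""
    return 1 / ((1 - fraction) + fraction / stage_speedup)

# Even an infinitely fast coder caps the whole-system gain:
for f in (0.15, 0.25):
    ceiling = 1 / (1 - f)  # limit as stage_speedup -> infinity
    print(f"coding = {f:.0%}: max system speedup = {ceiling:.2f}x")

# A realistic 30% coding speedup (the Thoughtworks figure):
for f in (0.15, 0.25):
    s = amdahl_speedup(f, 1.30)
    print(f"coding = {f:.0%}, 1.3x faster: system gain = {s - 1:.1%}")
```

Running the numbers, a 30% coding speedup on a 15-25% slice yields roughly a 4-6% system-wide gain, which lands strikingly close to the 8% delivery improvement we actually measured.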
We poured AI into the smallest slice of the pie and wondered why we weren’t full.
Bottleneck Migration Is Real
What actually happened on my team was bottleneck migration. We used to have 10-15 PRs per week hitting review. Now we have 30-40. Our code review capacity didn’t scale with our code generation capacity.
The numbers from Faros AI’s research are stark: teams with high AI adoption merge 98% more pull requests, but PR review time is up 91%. We created a massive pileup at the human approval stage.
And it’s not just volume—AI-generated code often requires more careful review. Veracode found that 45% of AI-generated code introduced OWASP Top 10 vulnerabilities. Another study by CodeRabbit showed AI code contains 2.74x more security vulnerabilities than human-written code.
So we’re reviewing more code, and each review requires more scrutiny. No wonder delivery didn’t speed up.
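The pileup is just pipeline math: a serial delivery pipeline moves only as fast as its slowest stage. A toy sketch with hypothetical per-stage capacities (illustrative numbers, not our real data):

```python
# Throughput of a serial pipeline is capped by its slowest stage.
# Hypothetical weekly capacities in PRs/week (illustrative only):
stages_before = {"coding": 15, "review": 25, "test+deploy": 30}
stages_after = {"coding": 35, "review": 25, "test+deploy": 30}  # AI boosts coding


def throughput(stages):
    """System throughput = capacity of the tightest stage."""
    return min(stages.values())


print(throughput(stages_before))  # 15 -- coding was the constraint
print(throughput(stages_after))   # 25 -- review is now the bottleneck
```

Doubling the coding stage moved the constraint to review; past that point, further coding speed buys nothing.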
What Actually Needs to Change
I’m now convinced that capturing the full value of AI coding assistants requires systemic changes, not just better prompts:
- Automated code review infrastructure: If AI can write code, we need AI-powered static analysis, security scanning, and compliance checking that runs instantly. Manual review should focus on architecture and business logic, not catching bugs AI could flag.
- Test infrastructure investment: Our test suites weren’t built for 98% more PRs. We need parallel test execution, better test isolation, and faster feedback loops. If tests take 2 hours to run, coding speed is irrelevant.
- Deployment automation: We still have manual deployment approval gates and environment coordination that takes days. The deployment pipeline needs to scale with code velocity.
- Dependency and integration process: Cross-team coordination, API contract negotiation, and integration testing are now the long poles. We need better async collaboration tools and integration test automation.
- Requirements and discovery process: Product and design processes haven’t accelerated. We’re building the wrong things faster, which doesn’t help customers.
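On the test-infrastructure point, the sharding arithmetic is worth spelling out. A back-of-envelope sketch using the 2-hour suite mentioned above, with a hypothetical fixed per-shard setup cost:

```python
# How test wall-clock time responds to sharding.
# 120-minute serial suite (the figure from the text), split across
# N shards, plus a hypothetical 3-minute per-shard setup cost:
serial_minutes = 120
setup_minutes = 3  # assumed overhead: checkout, container spin-up, etc.

for shards in (1, 4, 16):
    wall = serial_minutes / shards + setup_minutes
    print(f"{shards:2d} shards: ~{wall:.0f} min feedback loop")
```

Even with generous overhead assumptions, 16 shards turns a two-hour feedback loop into roughly ten minutes; the fixed setup cost, not the suite itself, becomes the floor.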
The Strategic Question
As a CTO, I’m asking myself: Did we invest in the right productivity improvement?
AI coding assistants are table stakes now—92.6% adoption, 27% of production code AI-generated across the industry. But multiple research studies converge on roughly 10% organizational productivity gains despite the hype.
That’s not nothing, but it’s not the revolution we hoped for. The revolution requires rethinking the entire SDLC, not just the coding phase.
Where are you seeing bottlenecks post-AI? Is it code review? Testing infrastructure? Deployment pipelines? Something else entirely?
And more importantly: What are you actually changing to capture the velocity gains, beyond just adopting better AI tools?