I’ve been leading engineering teams for 18 years, and I just watched something fascinating happen: our velocity metrics spiked 40% in Q1 while our feature delivery timeline… stayed exactly the same.
My team of 40+ engineers at a Fortune 500 financial services company started using AI coding assistants heavily this quarter. GitHub Copilot, Claude Code, Cursor—everyone has their preference. The dashboards looked amazing. Lines of code: up. Pull requests: up. Commits: up. But when product asked “where are my features?” we had no good answer.
The Feedback Loop We Thought We Had
For decades, developer experience boiled down to one core cycle: write code → run tests → deploy. We optimized the hell out of this loop. Faster builds, better CI/CD, hot reloading, instant test feedback. Teams with strong DevEx could iterate at the speed of thought.
The DevEx framework research (which breaks developer experience into feedback loops, cognitive load, and flow state) confirmed what we all felt: feedback loops matter more than any other factor. Short, tight cycles between action and result keep developers in flow. Remove friction from that loop, and productivity soars.
The AI Promise: Faster Everything
AI tools promised to accelerate this loop dramatically. And in some ways, they delivered:
- Autocomplete that reads your mind
- Function generation from comments
- Test case creation
- Refactoring suggestions
Early studies showed developers completing tasks 20-55% faster. Magic, right?
The Reality: We Introduced New Loops
But here’s what actually happened on my teams. The classic loop didn’t get faster—it got replaced with something else entirely:
Old loop:
Write code → Run tests → Fix bugs → Deploy
New loop:
AI generates code → Manual review → Integration testing → Security scan → Logic verification → Production validation
We didn’t eliminate steps. We added them. Because AI-generated code comes with real risks:
- 23.7% increase in security vulnerabilities
- 75% more logic errors
- Only 33% of developers actually trust the output
So we built new checkpoints. New review stages. New validation loops. Each one necessary, each one adding latency.
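To make that concrete, here's a back-of-the-envelope sketch of why a faster generation step doesn't shrink the loop. The stage durations below are made up for illustration, not measurements from my teams:

```python
# Illustrative only: stage durations (minutes) are invented to show the shape
# of the problem, not real data. The typing step collapses, but the new
# validation checkpoints more than eat the savings.
old_loop = {
    "write code": 120,
    "run tests": 10,
    "fix bugs": 45,
    "deploy": 15,
}
new_loop = {
    "AI generates code": 5,
    "manual review": 90,
    "integration testing": 30,
    "security scan": 20,
    "logic verification": 50,
    "production validation": 25,
}

print("old loop minutes:", sum(old_loop.values()))  # 190
print("new loop minutes:", sum(new_loop.values()))  # 220
```

The generation step got 24x faster, and the end-to-end loop still got slower. That's the pattern the dashboards hide.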
The Teams That Thrived vs. The Teams That Struggled
Here’s the pattern I observed across our 6 product teams:
Teams with comprehensive test suites: AI became a force multiplier. The tight write→test→fix loop that AI excels at actually worked. Developers trusted the tests to catch AI mistakes. They moved fast.
Teams with weak test coverage: AI magnified existing problems. Developers spent more time debugging AI-generated code than they saved in typing. The missing feedback loop (automated tests) made AI dangerous instead of helpful.
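A minimal sketch of what that safety net looks like in practice. The function and tests here are hypothetical, but they show the loop that worked: an edge-case test rejects a subtly wrong AI suggestion in seconds instead of letting it reach code review:

```python
# Hypothetical example: a pagination helper of the kind an AI assistant
# might generate, plus the edge-case tests that make accepting AI
# suggestions safe. A common subtly-wrong variant silently drops the
# final partial page; the first test rejects it immediately.

def paginate(items, page_size):
    """Split a list into pages of at most page_size items."""
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_partial_last_page():
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

def test_empty_input():
    assert paginate([], 3) == []

if __name__ == "__main__":
    test_partial_last_page()
    test_empty_input()
    print("all checks passed")
```

With tests like these in place, reviewing AI output stops being a careful line-by-line audit and becomes the tight write→test→fix loop again.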
As one CTO put it: “In well-structured orgs, AI acts as force multiplier; in struggling orgs, it highlights existing flaws.”
AI didn’t create our problems—it revealed which teams already had broken feedback loops.
The Mentoring Dilemma
As someone who mentors first-generation Latino engineers through SHPE, I worry about what this means for learning.
When I learned to code, the feedback loop taught me:
- Write buggy code → Tests fail → Understand why → Fix it → Learn
With AI, the loop becomes:
- Describe what I want → AI writes code → Tests pass → What did I learn?
The feedback loop that built expertise is… gone. We’re optimizing for speed at the expense of understanding.
Junior engineers on my team can ship features faster than ever. But when asked “how does this work?” they struggle to explain code they didn’t write.
The Wrong Optimization?
So here’s my question for this community: Are we optimizing the wrong loops?
We’re measuring:
- Code generation speed
- Autocomplete acceptance rate
- Lines of code per hour
Should we be measuring:
- Time from idea to validated feature in production
- Developer understanding and code ownership
- Quality of feedback at each stage
- Cognitive load across the entire development cycle
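The first of those metrics is also the easiest to start computing. A hedged sketch, assuming hypothetical workflow event timestamps (the field names and dates are invented for illustration):

```python
# Sketch: "idea to validated feature in production" as a lead-time metric,
# computed from hypothetical per-feature event timestamps. Real data would
# come from your ticketing and deployment systems.
from datetime import datetime
from statistics import median

features = [
    {"idea": "2026-01-05", "validated_in_prod": "2026-02-02"},
    {"idea": "2026-01-12", "validated_in_prod": "2026-01-30"},
    {"idea": "2026-01-20", "validated_in_prod": "2026-03-01"},
]

def lead_time_days(feature):
    start = datetime.fromisoformat(feature["idea"])
    end = datetime.fromisoformat(feature["validated_in_prod"])
    return (end - start).days

lead_times = sorted(lead_time_days(f) for f in features)
print("median lead time (days):", median(lead_times))  # 28
```

Track that number quarter over quarter and the "40% velocity spike with flat delivery" contradiction becomes visible instead of hidden.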
The 2026 reality is that traditional velocity metrics like story points have collapsed. If an AI agent can churn through 100 story points' worth of tickets in an hour, the metric is meaningless.
Maybe the real feedback loop we need to optimize is the one from “customer problem identified” to “customer problem solved and validated.” Everything else is just… typing.
What Are You Seeing?
I’d love to hear from other engineering leaders, product folks, and platform builders:
- What feedback loops are you actually optimizing for in 2026?
- How do you measure AI’s impact on the entire development cycle, not just code generation?
- For those building platforms: what infrastructure needs to exist for AI to accelerate feedback loops instead of degrading them?
We invested heavily in CI/CD, observability, and automated testing over the years. I’m starting to think the next investment needs to be in feedback loop infrastructure specifically designed for the AI era.
Because right now, we’re generating code faster but shipping features slower. And that tells me we’re optimizing the wrong thing.