I had a 1:1 with one of my senior engineers last week. Let’s call him Marcus. He’s been with the company for 6 years, knows our codebase inside and out, and is one of our best technical leads.
He told me he’s burned out. Not from writing code. Not from architecture work. From reviewing pull requests.
Marcus said he spent 4 out of 5 days last week doing nothing but code review. He opened his laptop in the morning to 23 PRs waiting for his approval. By end of day, he’d reviewed 15 of them, but 12 new ones had come in. He’s drowning, and he’s not alone.
The 91% Problem
According to CircleCI’s 2026 data, PR review time has increased 91% for teams with high AI adoption. Let me say that again: nearly double the time spent reviewing code.
Why? Because AI has democratized code writing, but review expertise is still scarce.
Anyone can generate a working authentication module with Claude or ChatGPT. But how many people can spot the subtle security vulnerability in an AI-generated OAuth flow? Or recognize that the code works but violates our architectural patterns?
Our senior engineers—the people with institutional knowledge, architectural vision, and security awareness—have become the bottleneck.
The Team Impact
Here’s what I’m seeing across the team:
Senior engineers feel like gatekeepers rather than builders. Marcus told me he hasn’t written meaningful code in two weeks. He’s drowning in review requests, and it’s killing his morale.
Junior developers are generating lots of code but learning less. When reviews are async and rushed, the teaching moments disappear. A quick “change this” comment doesn’t provide the context that a 10-minute pairing session would.
Mid-level engineers are stuck in the middle. They can review some PRs but not others. They’re trying to grow their skills while also clearing the backlog. It’s a no-win situation.
The Trade-offs We’re Facing
I’m seeing three competing priorities:
- Speed of coding (AI is making this faster)
- Quality of review (this is getting slower and more stressful)
- Knowledge transfer (this is disappearing in async, high-volume reviews)
We can’t optimize for all three simultaneously. Something has to give.
Possible Solutions (But I’m Not Sure Which One Is Right)
Option 1: Mandatory pair programming for AI-generated code. If AI writes it, a human pair reviews it in real-time. Pro: Better knowledge transfer. Con: Slower overall velocity.
Option 2: Automated review tools. Use AI to review AI (meta, I know). GitHub Copilot for code review? Pro: Scales review capacity. Con: Who reviews the reviewer?
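One lower-risk variant of this option is a deterministic pre-screen that triages PRs before any human (or AI) reviews them, so senior eyes go where they're needed. A minimal sketch, assuming illustrative risk patterns (a real pre-screen would pull these from a team-maintained config, not hardcode them):

```python
import re

# Illustrative risk patterns; these are assumptions for the sketch,
# not our actual review rules.
RISK_PATTERNS = {
    "hardcoded secret": re.compile(r"(api[_-]?key|password|secret)\s*=\s*['\"]"),
    "broad exception": re.compile(r"except\s*(Exception)?\s*:"),
    "auth change": re.compile(r"(oauth|token|session|login)", re.IGNORECASE),
}

def triage_diff(diff_text: str) -> list[str]:
    """Return the risk labels found in the added lines of a unified diff."""
    added = [
        line[1:] for line in diff_text.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]
    return [
        label for label, pattern in RISK_PATTERNS.items()
        if any(pattern.search(line) for line in added)
    ]

diff = """\
+++ b/auth.py
+password = "hunter2"
+def login(user):
+    try:
+        pass
+    except:
+        pass
"""
print(triage_diff(diff))
```

A flagged PR would route to a senior reviewer; a clean one could go to anyone on the rotation. It doesn't answer "who reviews the reviewer," but it never approves anything on its own, so the failure mode is a false flag, not a missed vulnerability.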
Option 3: Better AI prompt training. Teach developers to prompt AI for code that matches our patterns and doesn’t need heavy review. Pro: Reduces review burden. Con: Requires significant training investment.
Option 4: Rotate review responsibility. Spread the load across more engineers, even if they’re not perfect reviewers. Pro: Develops more reviewers. Con: Might miss critical issues.
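If we tried rotation, the mechanical part is easy to automate; the judgment call is deciding which PRs still must go to a senior. A minimal round-robin sketch (the reviewer names and the "seniors handle high-risk PRs" split are illustrative assumptions):

```python
from itertools import cycle

class ReviewRotation:
    """Round-robin PR assignment that reserves senior reviewers
    for PRs flagged as high-risk."""

    def __init__(self, seniors: list[str], others: list[str]):
        self._seniors = cycle(seniors)
        self._everyone = cycle(seniors + others)

    def assign(self, pr_id: int, high_risk: bool = False) -> str:
        # High-risk PRs (auth, payments, compliance) still go to seniors;
        # routine PRs rotate across the whole team to spread the load.
        pool = self._seniors if high_risk else self._everyone
        return next(pool)

rotation = ReviewRotation(
    seniors=["marcus", "priya"],      # hypothetical names
    others=["sam", "lee", "ana"],
)
print(rotation.assign(101))               # routine PR, whole-team pool
print(rotation.assign(102, high_risk=True))  # reserved for a senior
```

Even this crude split would have helped Marcus: of his 23 queued PRs, only the flagged subset would land on him, and the rest would develop the mid-level engineers as reviewers.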
Option 5: Reduce AI-generated code volume. Only use AI for specific types of work where review is straightforward. Pro: Sustainable review load. Con: Loses productivity gains.
None of these feel like perfect solutions.
The Question I’m Wrestling With
How do we scale code review capacity when AI has scaled code generation capacity by 2-3x?
In financial services, we can’t skip review. Compliance requires human sign-off. Security requires expert eyes. But we also can’t burn out our senior engineers.
I’m especially interested in hearing from:
- Engineering leaders who’ve successfully scaled review processes
- Senior engineers who’ve found sustainable review workflows
- Teams that have tried automated review tools—did they actually help?
What’s working in your organizations? Because right now, Marcus isn’t the only one feeling the weight of this bottleneck. And if we don’t solve it, we’re going to lose our best people.