I need to talk about something that’s quietly burning out my engineering team: code review has become unsustainable in the AI era.
The Data Is Alarming
According to Faros AI’s productivity research, teams are generating 98% more pull requests while review time has increased by 91%. That’s not just a number on a dashboard; it’s the lived reality for my senior engineers, who are drowning in review requests.
At our EdTech startup:
- 6 months ago: ~40 PRs per week for our team of 25 engineers
- Today: 110+ PRs per week with the same team size
- Average review time per PR: Up from 45 minutes to 1 hour 20 minutes
- Senior engineer time spent reviewing: 40-50% of their week
Our most experienced engineers are spending half their time reviewing code instead of building, mentoring, or thinking strategically. And the quality of reviews is suffering because everyone’s exhausted.
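The back-of-the-envelope arithmetic is easy to check. A minimal sketch using the numbers above (110 PRs/week, 1 hour 20 minutes per review, two approvals per PR); the 40-hour week and the even split of reviews across all 25 engineers are simplifying assumptions, not measured values:

```python
# Rough review-load arithmetic using the numbers from this post.
PRS_PER_WEEK = 110
MINUTES_PER_REVIEW = 80        # 1 hour 20 minutes
APPROVALS_PER_PR = 2           # our policy: at least 2 approvals
ENGINEERS = 25
WORK_HOURS_PER_WEEK = 40       # assumption

total_review_hours = PRS_PER_WEEK * APPROVALS_PER_PR * MINUTES_PER_REVIEW / 60
hours_per_engineer = total_review_hours / ENGINEERS
share_of_week = hours_per_engineer / WORK_HOURS_PER_WEEK

print(f"{total_review_hours:.0f} review-hours/week across the team")
print(f"~{hours_per_engineer:.1f} h per engineer ({share_of_week:.0%} of a 40h week)")
```

Spread evenly, that’s nearly a third of everyone’s week on reviews alone. In practice the load isn’t even: architectural and security reviews concentrate on senior engineers, which is how they end up at 40-50%.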
Why It’s Getting Worse
AI coding assistants help developers write code faster. That’s the promise, and it’s real. But they also:
- Generate more code per feature. AI tends to be verbose, creating more files and more lines to review.
- Require deeper scrutiny. We can’t trust AI-generated code the same way we trust code from a senior engineer we’ve worked with for years. Every assumption needs validation.
- Make subtle mistakes. AI doesn’t make obvious typos. It makes architectural mistakes that look plausible but have hidden risks.
- Create review fatigue. When you’re reviewing your 15th AI-generated API endpoint of the week, your attention starts to slip.
The Process That’s Breaking
Our traditional code review process:
- Every PR requires at least 2 approvals
- Senior engineers review architectural changes
- Security-sensitive code gets specialized review
- All comments must be addressed before merge
This worked when we had 40 PRs per week. At 110+ PRs per week, it’s a bottleneck that’s slowing everything down and creating friction between teams.
Product is frustrated that features take longer despite “faster coding.” Engineering is frustrated by the overwhelming review burden. And I’m concerned about what we’re missing because reviewers are overwhelmed.
What We’ve Tried (With Mixed Results)
AI-assisted code review: We’re testing tools that use AI to pre-review code and flag potential issues. It helps, but we still need human judgment for architectural decisions and context-specific concerns. And honestly? Trusting AI to review AI-generated code feels like a hall of mirrors.
Tiered review process: Critical paths get deep review, routine changes get lighter review. The challenge is deciding what’s “critical” and ensuring junior developers understand the distinction.
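One way to take the “what counts as critical” call out of individual judgment is to derive the tier mechanically from the paths a PR touches. A minimal sketch; the path patterns and tier names here are hypothetical illustrations, not our actual rules:

```python
from fnmatch import fnmatch

# Hypothetical path patterns -> review tier. First matching rule wins per file.
TIER_RULES = [
    ("auth/*", "security"),        # security-sensitive code: specialized review
    ("payments/*", "security"),
    ("migrations/*", "deep"),      # schema changes: senior review
    ("*/api/*", "deep"),           # public interfaces: senior review
    ("docs/*", "light"),
    ("tests/*", "light"),
]

STRICTNESS = {"light": 0, "deep": 1, "security": 2}

def review_tier(changed_files: list[str]) -> str:
    """Return the strictest tier matched by any changed file."""
    def file_tier(path: str) -> str:
        for pattern, tier in TIER_RULES:
            if fnmatch(path, pattern):
                return tier
        return "deep"  # unmatched paths get full review by default
    return max((file_tier(p) for p in changed_files),
               key=STRICTNESS.__getitem__, default="light")
```

A docs-only PR classifies as "light"; touch `auth/` anywhere in the same PR and the whole thing escalates to "security". Making the rules a reviewable file in the repo also gives junior developers something concrete to learn from, rather than a distinction they have to intuit.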
Protected review time: Blocked calendar time for reviews so they’re not squeezed between meetings. Works on paper, doesn’t always work in practice when urgent PRs pile up.
Smaller PRs: We’re pushing for smaller, more focused PRs. But AI makes it so easy to generate a “complete” feature that developers resist breaking it up.
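On the smaller-PRs front, a nudge that costs almost nothing is a CI check that flags oversized diffs, so the pushback comes from a bot rather than a tired reviewer. A minimal sketch; the thresholds are made-up assumptions to tune against your own history:

```python
# Flag oversized PRs in CI. Thresholds are illustrative, not recommendations.
MAX_CHANGED_LINES = 400
MAX_CHANGED_FILES = 15

def pr_size_verdict(changed_lines: int, changed_files: int) -> str:
    """Return 'ok', 'warn', or 'split' based on diff size."""
    if changed_lines > 2 * MAX_CHANGED_LINES or changed_files > 2 * MAX_CHANGED_FILES:
        return "split"   # block, or require an explicit override label
    if changed_lines > MAX_CHANGED_LINES or changed_files > MAX_CHANGED_FILES:
        return "warn"    # leave a comment on the PR, don't block
    return "ok"
```

In CI you would feed it the counts from `git diff --shortstat` against the target branch. The warn/split distinction matters: a hard block on every large PR just teaches people to fight the tooling, while a visible warning shifts the default.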
The Question That Keeps Me Up
How do you maintain quality code review at AI-accelerated PR volume without burning out your team?
Are you:
- Using AI review tools effectively? Which ones actually work?
- Changing your review standards or approval requirements?
- Investing in automation to reduce what humans need to review?
- Accepting that some things will slip through and investing in observability instead?
- Finding ways to make reviewing more sustainable for senior engineers?
Because right now, we’re heading toward a crisis where either our best engineers spend all their time reviewing, or we compromise on quality. Neither option is acceptable.
I’d love to hear what’s actually working for other teams facing this challenge.