I need to share something that’s been bothering me for months. When my company rolled out GitHub Copilot last summer, I was genuinely excited. I was one of the early adopters on the team, and the initial productivity boost felt amazing. I could implement features in hours that would have taken days before.
Fast forward to today: I estimate I spend about 60% of my time reviewing pull requests, 25% writing my own code, and 15% in meetings. I’ve essentially become a full-time code reviewer who occasionally writes code on the side.
This isn’t what I signed up for when I became a senior engineer.
How We Got Here
The pattern started subtly. Our junior developers adopted AI tools enthusiastically. Suddenly, they were writing complex features that would have been beyond their skill level six months ago. Database migrations with intricate rollback logic. Sophisticated caching strategies. Complex async workflows.
The code often looks impressive: proper error handling, good variable names, correct use of established patterns. But here’s the problem: the authors frequently don’t deeply understand what the AI generated.
I started noticing this in code reviews when I’d ask “why did you choose this approach?” and the answer was some variation of “Copilot suggested it and it seemed to work.” Not “I evaluated three approaches and chose this one because…” Just “the AI suggested it.”
The Review Burden
Reviewing AI-generated code is fundamentally different from reviewing human-written code. When a developer writes code from scratch, there’s usually a thought process you can follow. You can see the logic progression. The commit history tells a story.
AI-generated code appears in larger chunks. The progression isn’t always logical - it’s pattern-matched from training data. Sometimes the approach is clever in ways that are actually counterproductive for our codebase. Other times it’s unnecessarily complex because the AI optimized for generality rather than our specific use case.
This means each review takes longer, and I can’t rely on my usual heuristics. I have to:
- Carefully verify the logic is actually correct (not just looks correct)
- Check if this approach fits our architecture
- Consider if the author can maintain this code
- Evaluate if a simpler approach would be better
- Often explain why the AI’s approach, while functional, isn’t ideal for us
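To make that last point concrete, here’s a hypothetical contrast I invented for illustration (not taken from my actual codebase): an AI assistant will often propose a hand-rolled, general-purpose solution — say, a memoization decorator with TTL and eviction — when the standard library already covers the team’s actual need. Both work; only one is worth maintaining.

```python
import time
from functools import lru_cache, wraps

# "AI-suggested" style: a general-purpose TTL cache with manual eviction.
# Functional, but it's extra surface area to review, test, and maintain.
def ttl_cache(ttl_seconds=300, maxsize=128):
    def decorator(func):
        cache = {}  # args -> (value, timestamp)

        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            if args in cache:
                value, stored_at = cache[args]
                if now - stored_at < ttl_seconds:
                    return value
            value = func(*args)
            if len(cache) >= maxsize:
                cache.pop(next(iter(cache)))  # evict oldest insertion
            cache[args] = (value, now)
            return value

        return wrapper
    return decorator

# Simpler approach that fits when staleness isn't actually a concern:
@lru_cache(maxsize=128)
def lookup(key: str) -> str:
    return key.upper()  # stand-in for an expensive computation
```

In a review, the question isn’t “does `ttl_cache` work?” — it usually does — but “did we need a TTL at all, and will the author be able to debug the eviction logic in six months?”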
The Mentorship Crisis
Here’s what really concerns me: I used to spend significant time mentoring junior developers. Pair programming sessions, architecture discussions, explaining trade-offs. That’s how I learned as a junior - from seniors investing time in my growth.
Now there’s no time. The review queue is constantly 30-40 PRs deep. When I do review, it’s often via async comments rather than synchronous mentorship. Junior developers are getting feedback like “this logic is incorrect” without the deeper context of “here’s how to think about this type of problem.”
I worry we’re creating a generation of developers who can prompt AI to generate code but can’t architect systems, make trade-off decisions, or deeply understand what they’re building.
The Personal Cost
I’m tired. Code review is cognitively draining, especially when you’re trying to deeply understand AI-generated logic. By the end of the day, I’m exhausted from reviewing rather than energized from building.
Worse, I feel like my skills are atrophying. I’m not designing systems anymore - I’m auditing AI output. I’m not solving interesting problems - I’m verifying AI solutions. My GitHub contribution graph is mostly review comments, not commits.
Several other senior engineers on my team have expressed similar frustration. One told me bluntly: “I’m being paid a senior engineer salary to be a code inspector. That’s not what I want to do with my career.”
Is This Sustainable?
I keep asking myself: is this sustainable? Can senior engineers continue to be the bottleneck that validates all AI-generated code? What happens when we burn out or leave for roles where we actually get to engineer rather than just review?
And from a company perspective: are we actually better off? We’re generating code faster, but review is slower, bug fixes take longer, and senior engineers are less satisfied. The total cycle time from feature idea to production hasn’t meaningfully improved.
Looking for Perspectives
For other senior engineers experiencing this:
- How are you managing the review load without burning out?
- Have you found ways to make reviewing AI code less draining?
- How do you balance review responsibilities with your own technical growth?
- What are you doing to ensure junior developers are actually learning, not just prompting?
I love the craft of software engineering. I love solving hard problems and building systems. I don’t love spending 60% of my day verifying AI output. Something needs to change, but I’m not sure what.