I just finished reviewing metrics from our Q1 onboarding cohort, and I need to share something that has been keeping me up at night.
The Good News (That Turned Into Bad News)
Our three junior engineers—fresh bootcamp grads who started in January—are completing their assigned tasks 45% faster than our 2024 cohort. Sounds amazing, right? They are all using Cursor, GitHub Copilot, and ChatGPT heavily, and on paper, their velocity is incredible.
Here is what initially impressed me:
- Feature delivery time: down from 2 weeks to approximately 1 week per small feature
- Bug fix turnaround: improved from 3 days to 1.5 days
- PR creation rate: up 60% compared to last year's juniors
I was ready to write a case study about how AI was democratizing coding and accelerating junior developer productivity. Then I looked at the other side of the equation.
The Problem Nobody Talks About
Code review time has absolutely exploded.
Our senior engineers are now spending an average of 91% more time reviewing junior PRs compared to last year. What used to be a 15-minute review is now regularly taking 45-60 minutes. Some reviews that should be straightforward are going three rounds instead of one.
Here is what I am seeing in these reviews:
1. Volume without understanding
Juniors are shipping working code fast, but when you ask "why did you structure it this way?" in the PR comments, the answers are vague. "The AI suggested it" or "it passed the tests" are not acceptable explanations, but that is what we are getting.
2. Copy-paste architecture
I reviewed a PR last week where a junior implemented a complex state management pattern that was absolutely unnecessary for the feature scope. When I asked about it, they admitted they did not fully understand it—Copilot suggested it, and it worked, so they went with it. The code worked, but it introduced three new dependencies and made the codebase more complex for zero benefit.
3. The missing why
PRs are coming in with working implementations but zero context about trade-offs considered or alternative approaches evaluated. It feels like they are typing requirements into a prompt and submitting whatever comes out without critical thinking about whether it is the right solution.
The Senior Engineer Bottleneck
Here is the brutal reality: our three most experienced engineers (who should be architecting our new platform features) are now spending 40% of their time doing deep code review remediation instead of 20% doing normal code review.
They are essentially re-teaching computer science fundamentals in PR comments:
- Why does this need a hash map instead of an array?
- What is the time complexity of this nested loop?
- Why are you making 47 API calls in a loop instead of batching?
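To make those review comments concrete, here is an illustrative sketch (not from our actual codebase; the function names are hypothetical) of the two patterns seniors keep flagging: a nested loop where a set gives a single pass, and per-item calls in a loop where one batched call would do.

```python
def find_dupes_nested(items):
    """O(n^2): the nested-loop version a reviewer would flag."""
    dupes = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b and a not in dupes:
                dupes.append(a)
    return dupes

def find_dupes_set(items):
    """O(n): a single pass, using a set for constant-time membership checks."""
    seen, dupes = set(), set()
    for x in items:
        if x in seen:
            dupes.add(x)
        seen.add(x)
    return sorted(dupes)

# Batching: one request for n ids instead of n requests in a loop.
# fetch_users_batch is a stand-in for a real API client.
call_count = 0

def fetch_users_batch(ids):
    global call_count
    call_count += 1  # counts round trips, not users
    return {i: f"user-{i}" for i in ids}

users = fetch_users_batch(range(47))  # one call, not 47 calls in a loop
```

Both versions return the same answer; the difference only shows up in review when someone asks about complexity and round trips.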
The AI helps juniors write syntactically correct code, but it is not teaching them how to think about problems, evaluate trade-offs, or understand system constraints.
The Uncomfortable Question
If we are spending 91% more senior engineering time on reviews, have we actually gained any efficiency?
Quick math:
- We save approximately 20 hours per junior per month on task completion
- We spend approximately 35 additional hours per senior per month on extended reviews
- Net result: we are down approximately 15 hours per junior-senior pair, not up
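The back-of-envelope math above, written out (variable names are mine; the three pairs come from the cohort described in this post):

```python
# Rough monthly accounting per junior-senior pair.
hours_saved_per_junior = 20    # faster task completion
extra_review_per_senior = 35   # additional time in extended reviews
pairs = 3                      # three juniors, three seniors

net_per_pair = hours_saved_per_junior - extra_review_per_senior
net_team = net_per_pair * pairs

print(net_per_pair)  # -15: each pair is net negative
print(net_team)      # -45 hours per month across the team
```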
Plus, the knowledge transfer is not happening organically. Last year's juniors would struggle more initially but would come back with questions, learn from mistakes, and gradually need less oversight. This year's juniors are completing tasks faster but learning slower.
What I Am Trying Next
I do not have answers yet, but here is what we are experimenting with:
1. AI-free Fridays
Every Friday, juniors tackle one feature without AI assistance. The velocity drops, but I want to see if it improves their fundamental problem-solving skills.
2. Explain-first PR requirement
Before code review, juniors must write a 200-word explanation of their approach and why they chose it. If they cannot explain it without falling back on "the AI suggested it," the PR gets sent back before code review even starts.
3. Paired AI sessions
Instead of juniors using AI solo, we are testing pairing sessions where they work with AI alongside a senior for the first 30 minutes of a task. The senior can catch misconceptions early and guide the AI interaction.
Am I Wrong?
Maybe I am being too critical. Maybe this is just the new learning curve, and in six months these juniors will have internalized the patterns and will outperform previous cohorts.
But right now, it feels like we have optimized for shipping speed at the expense of learning depth. The juniors are productive but brittle—they can execute well-defined tasks quickly, but they struggle when faced with ambiguous requirements or system-level thinking.
Has anyone else seen this pattern? How are you balancing AI-assisted velocity with actual skill development?
I am genuinely curious if this is a short-term adjustment period or if we are training a generation of engineers who can operate AI tools but cannot architect systems without them.
Would love to hear from other engineering leaders dealing with this—especially if you have found ways to get the productivity benefits without sacrificing the learning fundamentals.