I’ve been thinking a lot about how AI coding assistants are reshaping the way we bring new engineers onto our teams. Over the past year, as we’ve scaled from 25 to 60+ engineers, I’ve watched AI tools transform our onboarding process—and I’m not entirely comfortable with what I’m seeing.
The Speed Is Real
Let me start with the obvious win: new developers are getting productive fast. They use AI assistants to understand our codebase, generate boilerplate code, and get unstuck on syntax issues without waiting for a senior engineer to free up. One of our recent hires shipped their first meaningful feature in week two. A year ago, that would’ve been week four or five.
The efficiency gains are measurable. Our senior engineers spend 40% less time answering “how do I…” questions. GitHub’s research points the same way: in their controlled study, developers using an AI assistant completed a task 55% faster, with the largest gains among less experienced developers. That’s not a marginal improvement; that’s a fundamental shift in how quickly people can contribute.
But Here’s What Keeps Me Up at Night
Last month, I sat in on a code review with one of our junior engineers who’d been with us for three months. The code worked. It was well-structured. Tests passed. But when I asked why they chose a particular approach, they hesitated. “The AI suggested it,” they said. “It seemed like it would work, so I used it.”
That moment crystallized my concern: we’re optimizing for speed while potentially degrading depth.
Recent research from Anthropic found a 17-point comprehension gap when junior developers learn with AI assistance: 50% code understanding versus 67% without AI. That’s a medium-to-large effect (Cohen’s d = 0.738). We’re not just seeing a small trade-off; we’re potentially creating engineers who can produce code they don’t fully understand.
Mentorship Is About Judgment, Not Just Answers
Here’s the thing about traditional mentorship: it’s inefficient by design. When a junior engineer asks a senior engineer a question, the best mentors don’t just answer—they ask questions back. “What have you tried?” “What do you think the tradeoff is?” “How would this scale?”
That back-and-forth is where judgment develops. That’s where engineers learn to think, not just to do.
AI assistants are brilliant at providing answers. They can explain patterns, suggest approaches, generate implementations. But they don’t teach you why one approach might be better than another in your specific context. They can’t help you develop the instinct that comes from making mistakes and understanding their consequences.
A Real Scenario
We had a junior engineer use an AI assistant to implement a caching layer. The code was textbook perfect—for a high-traffic consumer app. But we’re building an enterprise SaaS product where data freshness matters more than response time. The AI didn’t know our business constraints. The junior engineer didn’t yet have the judgment to question the suggestion.
A senior engineer caught it in review, but that’s precisely my concern: we’re creating engineers who can generate code but need constant oversight because they’re not developing the underlying understanding that allows them to work autonomously.
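To make the scenario concrete, here’s a minimal sketch of the kind of decision that was at stake. The function names and TTL values are invented for illustration; the point is that the caching code itself is identical either way, and the judgment lives entirely in one parameter the AI had no way to choose for us.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Memoize a function's results for ttl_seconds (illustrative sketch)."""
    def decorator(fn):
        cache = {}  # args -> (value, stored_at)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()  # monotonic clock: immune to wall-clock jumps
            if args in cache:
                value, stored_at = cache[args]
                if now - stored_at < ttl_seconds:
                    return value  # still fresh under our TTL policy
            value = fn(*args)
            cache[args] = (value, now)
            return value
        return wrapper
    return decorator

# The “textbook” suggestion: a long TTL maximizes hit rate, which is the
# right trade-off for a read-heavy consumer app.
@ttl_cache(ttl_seconds=3600)
def get_account_settings(account_id):
    ...  # expensive lookup elided

# Our actual constraint: enterprise customers expect their edits to be
# visible almost immediately, so freshness wins over hit rate.
@ttl_cache(ttl_seconds=5)
def get_account_settings_fresh(account_id):
    ...  # same lookup, much shorter cache window
```

Both versions are “correct” code; only one is correct for the business. That distinction is exactly what review caught and what the junior engineer hadn’t yet learned to ask about.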
So What Do We Do?
I don’t think the answer is to ban AI tools. That ship has sailed, and honestly, I don’t want to ban them. The productivity gains are real, and in a competitive hiring market, candidates expect modern tooling.
But I think we need to be much more intentional about how we integrate AI into onboarding:
- Distinguish between syntax help and judgment development: AI for “how do I format this date” is fine. AI for “how should I architect this feature” needs human oversight.
- Preserve the struggle: some problems should be hard. Some mistakes need to be made. Not everything should be solved in 30 seconds.
- Make mentorship explicit: regular sessions where we discuss why decisions were made, not just what decisions were made.
- Measure depth, not just speed: time-to-first-commit is a vanity metric if those engineers hit a ceiling in year two.
The Question I’m Wrestling With
Are we building engineers who can use AI effectively, or are we building engineers who depend on AI to function?
There’s a version of the future where AI makes engineers better—freeing them from boilerplate so they can focus on judgment, architecture, and deep problem-solving. But there’s also a version where we create a generation of engineers who can ship features quickly but can’t think through complex trade-offs independently.
I don’t have the answer yet. But I know we need to be asking the question.
How are you all thinking about this? What’s working? What are you worried about?