AI Made My Junior Devs Faster—But Are They Actually Learning?

Last month, one of my junior engineers shipped a complex feature integration 45% faster than I expected. When I asked her to walk me through the implementation during code review, she hesitated. “Honestly, Luis, Copilot wrote most of it. I understand what it does, but I’m not sure I could’ve written it from scratch.”

That moment crystallized something I’ve been worried about for months.

The Productivity Paradox

Our team’s output metrics look incredible. Junior developers are closing tickets faster than ever. GitHub’s own research found developers completing tasks about 55% faster with Copilot, and I believe it—I see it daily. But I’m starting to wonder if we’re measuring the wrong thing.

Here’s what I’m seeing that the productivity dashboards don’t capture:

The “offline test”: Last week, our office WiFi went down for 3 hours. My senior engineers barely noticed. My juniors? Their velocity dropped to near zero. They’d built workflows entirely dependent on AI autocomplete.

The debugging gap: When a junior’s AI-generated code breaks in production, the troubleshooting often gets escalated. They can read the code, but they struggle to trace the logic because they didn’t write it iteratively, with mistakes and corrections along the way.

Lost learning moments: I learned to code by writing terrible code, getting feedback, and rewriting it. That cycle built intuition. Now juniors get “perfect” code on the first try—but they miss the learning embedded in iteration.

What We’re Trying

I’m not anti-AI. I use Copilot myself. But I’m realizing we need to adapt our mentorship practices:

  1. Mandatory pair programming hours: Juniors pair with seniors for at least 8 hours/week, with AI turned off. Controversial, but necessary.

  2. “Explain the AI” reviews: In code reviews, I ask juniors to explain not just what the AI-generated code does, but why it works and what alternatives exist.

  3. Deliberate practice sessions: Weekly exercises where juniors solve problems without AI, then compare with AI solutions. It’s like a musician practicing scales.

  4. AI literacy training: Teaching juniors to critically evaluate AI suggestions, not just accept them. What are the edge cases? Security implications? Performance tradeoffs?
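Here’s the flavor of exercise we use in those training sessions. Everything below is a hypothetical illustration (the price-parsing scenario and function names are mine, not from a real ticket): a plausible AI-style suggestion, the evaluation questions applied to it, and the hardened version a junior might write afterward.

```python
from decimal import Decimal, InvalidOperation

def parse_price(value):
    # Plausible AI-style suggestion: works on the happy path,
    # misbehaves everywhere else.
    return float(value.replace("$", "").replace(",", ""))

# The questions from point 4, applied:
#   Edge cases?  Crashes on None, raises ValueError on "" or "USD 5",
#                with no explicit contract for any of those inputs.
#   Security?    Fine here, but the same scrutiny matters for anything
#                touching eval(), SQL strings, or shell commands.
#   Tradeoffs?   float rounds currency values; Decimal is safer.

def parse_price_reviewed(value):
    """What a junior might write after working through those questions."""
    if not value:
        raise ValueError("empty price")
    cleaned = value.strip().lstrip("$").replace(",", "")
    try:
        return Decimal(cleaned)  # exact decimal arithmetic for money
    except InvalidOperation as exc:
        raise ValueError(f"unparseable price: {value!r}") from exc
```

The point of the exercise isn’t the parsing itself—it’s that the “obvious” suggestion and the reviewed version differ in exactly the ways autocomplete never surfaces.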

The Question That Keeps Me Up

Are we trading long-term capability for short-term velocity?

I read that employment for developers aged 22-25 fell nearly 20% from 2022 to 2025, right as AI coding tools exploded. Are we creating a generation that can ship code but can’t think through problems? What happens when these juniors need to become seniors?

How are other engineering leaders handling this? What’s working? What have you tried that failed?

I don’t have answers, just concerns and experiments. Would love to hear if others are wrestling with this too.

Luis, this resonates deeply. I’m seeing the exact same pattern across our 80-engineer organization, and I’ve come to believe this is fundamentally a leadership challenge, not a tools problem.

We’re Solving for the Wrong Variable

The issue isn’t AI coding assistants—they’re here to stay and genuinely boost productivity. The real issue is that we’re trying to apply 2015 mentorship practices to a 2026 reality. It’s like teaching someone to drive while they’re already in a self-driving car.

Last quarter, I pulled our retention and skill development data, and the numbers were alarming:

  • Juniors hired in 2024-2025 (AI era) have 34% lower code review approval rates on first submission compared to 2021-2023 cohorts
  • Time-to-senior-promotion has increased from 3.2 years to 4.8 years
  • Most concerning: when we ran internal “offline” coding assessments, recent hires scored 40% lower than pre-AI cohorts at the same tenure

But here’s what changed when we adapted:

What We’ve Implemented

1. Mandatory pair programming with explicit learning objectives
Not just “pair with a senior” but structured sessions with clear goals: “Today you’ll learn error handling patterns” or “Today we debug without AI.” We track these like any other OKR.

2. AI-assisted code review training
We teach seniors how to review AI-generated code specifically. Questions like: “Did you consider alternatives?” “What happens if this API is down?” “Why did you accept this suggestion?” These become standard review prompts.

3. “Show your work” documentation
In PRs, juniors must include a brief note: What problem were you solving? What did AI suggest? What did you modify and why? This forces reflection.
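For anyone who wants to steal this: a minimal version of the note we ask for in each PR (the exact wording below is my own sketch, not a mandated template) looks like:

```markdown
## Show your work

- **Problem:** What were you actually trying to solve?
- **AI suggestion:** What did the assistant propose? (link or paste the relevant part)
- **Changes:** What did you modify, and why?
- **Alternatives:** What else would have worked, and why didn’t you choose it?
```

Four bullets is deliberate—anything longer and juniors treat it as paperwork instead of reflection.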

4. Reverse mentoring on AI literacy
Juniors teach seniors the latest AI techniques, but seniors teach juniors when NOT to use AI. It’s bidirectional.

The Results (So Far)

After 6 months:

  • Code review cycles decreased by 18% (better first submissions)
  • Junior satisfaction scores up 23% (they feel less like imposters)
  • Senior engineers report this is actually less time-intensive than firefighting production issues from poorly understood code

The Hard Truth

Here’s what I told our board when they questioned the investment: We can have fast juniors now or capable seniors later. Pick one.

The choice isn’t really binary, but the framing works. Every hour we don’t invest in adapted mentorship is technical debt—except instead of brittle code, we’re accumulating brittle engineers.

Luis, your “explain the AI” reviews and deliberate practice sessions are exactly right. The key is making this systematic, not optional. It has to be part of how we define “done” for onboarding and development.

Are other leaders tracking these metrics? I’d love to compare notes on what actually moves the needle.

This thread hit me hard because we’re seeing the EXACT same thing in design, and I think it points to a broader issue about how we work with AI tools across disciplines.

The Figma AI Plugin Problem

Last month, I had a junior designer present a component system for review. The designs were gorgeous—perfect spacing, consistent tokens, thoughtful variants. When I asked about the design decisions, she said: “The Figma AI plugin suggested this structure, and it looked good, so I went with it.”

When I pushed—“But why 8px spacing instead of 4px or 12px? Why these specific breakpoints?”—she couldn’t answer. She’d never had to think through the underlying system because AI gave her something that “worked.”

Sound familiar?

We’re Teaching Tool Use, Not Thinking

Here’s what I’m realizing: AI is incredible at generating outputs, but terrible at teaching the “why” behind them.

A junior engineer using Copilot gets working code but misses the iterative problem-solving that builds intuition. A junior designer using AI gets beautiful components but misses the systems thinking that makes them scalable.

My failed startup taught me more about product-market fit than any successful project ever could. The struggle IS the lesson. When AI removes the struggle, where does learning happen?

What I’m Trying (Inspired by Luis’s Approach)

  1. “Justify the AI” design reviews: Before accepting any AI-generated design, juniors must document: What problem does this solve? What alternatives exist? What are the tradeoffs?

  2. Manual Mondays: One day a week, no AI plugins. Build a component from scratch. It’s slower, but the learning compounds.

  3. Teach AI as a junior collaborator: I tell the team: “AI is like a talented junior designer. Great at execution, needs direction and review. Your job is to be the senior who guides it.”

  4. Focus on decision frameworks: Instead of teaching specific tools, teach how to evaluate ANY solution—AI-generated or human-made.

The Question I’m Wrestling With

How do we teach critical thinking about AI suggestions when juniors don’t yet have the expertise to judge what’s good vs. what’s “good enough”?

In design, I can spot when an AI-generated solution looks polished but won’t scale. But juniors can’t—they don’t have the pattern recognition yet. How do you build pattern recognition when AI prevents you from seeing enough patterns?

Keisha’s “show your work” documentation is brilliant. Makes the thinking visible, not just the output. I’m stealing that for design PRs.

Are folks in product, data, or other functions seeing this too? Feels like this isn’t just a code/design problem—it’s an “AI-assisted anything” problem.

Luis, Keisha, Maya—everything you’re describing aligns with what I’m seeing at the executive level, but I want to add a strategic lens that might help frame this for leadership teams.

This isn’t just a mentorship challenge. It’s a systemic risk that impacts hiring strategy, technical debt, and the entire senior developer pipeline.

The Three Hidden Costs

1. Skill Gaps Emerge in Crisis

We had a production outage last month—critical API failure. Our senior engineers were in meetings. The juniors on-call couldn’t diagnose it. Not because they weren’t smart, but because they’d never had to debug complex distributed systems failures without AI assistance.

The outage lasted 3 hours longer than it should have. The post-mortem revealed a troubling pattern: juniors could write features with AI, but they couldn’t troubleshoot systems they didn’t deeply understand.

2. AI-Induced Technical Debt

GitClear research shows AI coding assistants can create “AI-induced tech debt”: code that works but doesn’t align with system architecture, introduces subtle bugs, or creates maintenance nightmares. The broader industry survey data tells the same story:

  • 46% of developers say they don’t fully trust AI-generated results
  • Only 3% “highly trust” AI-generated code
  • Yet 84% use these tools regularly

We’re shipping code we don’t trust, written by engineers who don’t fully understand it. That’s a recipe for compounding technical debt.

3. The Senior Pipeline Breaks

Here’s the math that keeps me up at night: If juniors take 4.8 years to reach senior (up from 3.2 years pre-AI, per Keisha’s data), and we’re hiring fewer entry-level engineers (20% drop in 22-25 year-old employment), where do our future technical leaders come from?

We can’t hire our way out of a broken development pipeline.

What Systemic Change Looks Like

I agree with everything Luis and Keisha are implementing tactically. But this requires organizational commitment:

1. Revised Job Architecture
We’ve updated our engineering levels to include “AI literacy” as a core competency—not just using AI, but knowing when NOT to use it, how to validate outputs, and how to teach others.

2. Explicit Learning Objectives in Onboarding
New hires have both delivery goals AND learning goals. “Ship feature X” is paired with “Demonstrate you can debug feature X without AI assistance.”

3. Senior Engineer Development Time
We budget 20% of senior engineer time for mentorship, treated as billable work. This isn’t “extra”—it’s core to their role. Managers who don’t protect this time get flagged in performance reviews.

4. Engineering Review of AI Tools
We audit AI-generated code quarterly. What patterns emerge? What mistakes repeat? This feeds back into training and tool configuration.

The Question That Should Terrify Leadership

Are we creating a generation of developers who can ship code but can’t build systems?

When I present this to our board, I frame it simply: Every junior who becomes dependent on AI without developing fundamentals is future technical debt we’re incurring today.

They understand technical debt. This reframes mentorship investment as risk mitigation, not “nice to have.”

Maya’s question about building pattern recognition is exactly right. My answer: you have to deliberately create struggle. Keisha’s “offline assessments” and Luis’s “AI turned off” pair programming are how we force that struggle in controlled environments.

The alternative is learning through production outages, which is far more expensive.

Coming at this from the product side, and I think there’s a critical business dimension missing from this discussion: faster shipping doesn’t automatically mean better outcomes.

The Feature Factory Trap

Last quarter, one of our engineering teams shipped a new onboarding flow 40% faster than estimated, heavily assisted by AI. The metrics looked great internally—velocity up, sprint completion up, deploy frequency up.

Then we launched it.

The customer feedback was brutal. The flow was technically functional but poorly designed for actual user needs. It required three iterations to get right, ultimately taking longer than if we’d built it thoughtfully the first time.

Post-mortem insight: The engineer focused on implementing what AI suggested, not on deeply understanding the user problem. Speed masked lack of understanding.

Speed vs. Sustainability

Michelle’s framing about technical debt is exactly right, but I’d extend it: we’re also creating product debt.

When engineers don’t deeply understand the systems they’re building, they can’t effectively:

  • Push back on bad product requirements
  • Suggest simpler technical solutions to product problems
  • Identify edge cases that PMs miss
  • Contribute to product strategy discussions

The best product teams I’ve worked with have engineers who challenge assumptions and improve the product roadmap. AI-dependent engineers who lack deep understanding become order-takers, not collaborators.

The Business Case for Learning Velocity

Here’s how I’ve been framing this for our leadership team:

We shouldn’t just measure code velocity. We should measure learning velocity:

  • How quickly can engineers debug novel problems?
  • How effectively do they contribute to architectural discussions?
  • How well do they mentor others?
  • How often do their solutions require rework?

These are leading indicators of long-term team capability. Code output is a lagging indicator that can mask declining fundamentals.

Answering Luis’s Question: The ROI of Mentorship

Luis asked how to balance speed with learning. Here’s the product argument I’ve used:

  • Short-term (0-6 months): AI-heavy teams ship 30-45% faster
  • Medium-term (6-18 months): Teams with AI literacy and fundamentals ship 15-20% faster with 40% less rework
  • Long-term (18+ months): Teams with strong fundamentals innovate 2-3x more effectively because they understand systems deeply enough to reimagine them

The key insight: initial velocity gains from AI erode as complexity increases and foundational understanding matters more.

I’ve pitched this to our CFO as: “We can optimize for quarterly shipping velocity or annual innovation capacity. Which creates more shareholder value?”

When framed as strategic investment vs. tactical gains, the answer is obvious.

The Cross-Functional Pattern

Maya’s observation about this being an “AI-assisted anything” problem is spot-on. I’m seeing the same pattern in product management:

Junior PMs using AI to write PRDs without understanding customer problems. Beautiful documents, poor product decisions. Same root cause—tools that accelerate output without building understanding.

The solution in every discipline seems to be: make the thinking visible, not just the output.

Keisha’s “show your work” documentation, Luis’s “explain the AI” reviews, Maya’s “justify the AI” design reviews—these all force reflection and build pattern recognition.

Question for the engineering leaders here: How do you measure whether juniors are actually developing fundamentals vs. just maintaining AI-assisted productivity?

What metrics or signals tell you a junior is ready to become a senior, in the AI era?