We're Losing Junior Engineers to AI Tools - But Not How You Think

I’ve been thinking a lot about something that happened last month that’s kept me up at night.

One of our junior engineers - bright, enthusiastic, shipped features fast - got promoted to mid-level. Two weeks later, during an architecture review, I asked them to explain the design decisions behind a service they’d built. Silence. They could walk through the code line by line, but couldn’t articulate why they’d chosen that approach over alternatives.

Turns out, they’d been using AI code generation tools for nearly everything. The code worked. It passed reviews. But they’d never really understood the underlying principles.

The Pattern I’m Seeing

As we’ve scaled from 25 to 80+ engineers over the past year, I’m noticing three distinct types of engineers emerging:

The AI-Dependent: Struggle without the tools. Can implement solutions but can’t explain trade-offs or debug novel problems. Fast initially, but plateau quickly.

The AI-Resistant: Refuse to use the tools on principle. Often produce better-understood code, but slower velocity. Sometimes reinventing wheels that AI could handle.

The AI-Augmented: Use AI as a force multiplier while maintaining deep understanding. Can explain every line, including AI-generated code. These are the ones thriving.

The Paradox

Here’s what’s keeping me up: Tools designed to accelerate learning might actually be creating skill gaps. We hired these juniors expecting them to grow into strong mids within 18-24 months. But if they’re using AI as a crutch instead of a tool, are we building a team that can’t function without it?

What We’re Trying

I don’t have all the answers (honestly, I’m figuring this out as we go), but here’s what we’ve implemented:

  1. Mandatory “Explain Your Thinking” sessions: Every time a junior submits AI-generated code for review, they have to walk through their decision process. Not “what does this code do” but “why this approach?”

  2. Paired learning tracks: Junior + mentor review AI-generated code together. The mentor’s job isn’t to validate the code - it’s to ensure the junior can defend it.

  3. AI-free weeks: Once a quarter, juniors build a feature without AI tools. Forces them to struggle, search docs, really learn.

Research backs this up - organizations that integrate learning into the flow of work see 35% better outcomes than those treating it as separate training. But I’m still worried we’re not moving fast enough.

The Question I’m Wrestling With

Am I being too harsh? Maybe this is just the new normal - like how we all learned to Google instead of memorizing everything. Maybe “understanding” will mean something different for this generation.

But I keep thinking about what happens when these engineers hit a truly novel problem. When the AI doesn’t have a pattern to match. When they need to innovate, not just implement.

How are other engineering leaders thinking about this? Are you seeing similar patterns? What’s working to balance AI acceleration with genuine skill development?

I’d especially love to hear from folks who’ve cracked this code (pun intended). And from engineers at all levels - juniors, how does this land with you? Seniors, are we overreacting?

Being vulnerable here: This is one of those problems where I feel like I should have better answers as a VP, but honestly, we’re all figuring out this AI-augmented world together.

Keisha, this resonates deeply. We’re seeing the exact same pattern across our 40+ person engineering team at a financial services company.

What really caught my attention was your three categories - I’ve mentally been organizing our engineers the same way but didn’t have the language for it. The “AI-Augmented” group you describe? Those are the engineers we’re now explicitly looking for in interviews and trying to develop internally.

The Compliance Dimension

Here’s where it gets even more complex for us: In financial services, we can’t just rely on “the code works.” When regulators ask why we implemented a feature a certain way, or when we need to audit a decision that affected customer accounts, we need engineers who can articulate their reasoning.

AI-generated explanations don’t cut it. We’ve literally had situations where an engineer said “the AI suggested this approach” in a design review, and our compliance team shut it down immediately. Regulators need to see the human reasoning behind the decision.

What We’re Implementing

We’ve adopted something similar to your paired review approach, but with a twist:

Paired Junior-Senior Code Reviews Specifically for AI-Generated Code: The senior doesn’t review the code for correctness - our automated tests do that. They review for understanding. Can the junior explain trade-offs? Can they defend it under regulatory scrutiny?

Time Investment Analysis: Here’s the hard part - these reviews take roughly twice as long as traditional code reviews. We’re measuring whether that upfront time investment reduces downstream incidents and knowledge gaps. Early data (6 months in) suggests yes, but leadership keeps pushing back on the “velocity hit.”

The SHPE Connection

Through my mentorship work with SHPE (Society of Hispanic Professional Engineers), I’ve been talking with other Latino engineering leaders facing similar challenges. One insight that’s emerged: Engineers from non-traditional backgrounds sometimes over-rely on AI because they feel they’re “catching up” to CS degree holders.

We’re being intentional about how we frame AI tools - not as a replacement for learning, but as a tool that requires deep understanding to use effectively. The mentorship relationship becomes critical here.

My Open Question

How do you balance the short-term velocity gains from AI tools against the long-term capability building? Our leadership loves the faster feature delivery, but I’m worried we’re building technical debt in the form of engineers who can’t operate without their AI crutch.

The challenge feels especially acute in high-compliance environments. We can’t afford engineers who can’t explain their work.

Anyone else navigating this in regulated industries? How are you measuring whether your training interventions are actually working?

This discussion is fascinating, but I need to push back on some assumptions here. As someone who looks at data all day, I’m seeing a lot of pattern matching but not enough rigorous measurement.

The Measurement Problem

Keisha, you’ve identified three categories of engineers: AI-Dependent, AI-Resistant, and AI-Augmented. But how are we actually measuring “dependency” versus “skill development”?

Are we tracking:

  • Code quality metrics before and after AI adoption?
  • Bug rates by engineer cohort?
  • Time-to-resolution for novel problems?
  • Knowledge retention assessments?
  • Performance in scenarios with vs. without AI tools?

Without baseline measurements, we’re potentially conflating correlation with causation. Maybe the engineers who struggle to explain their code would have struggled regardless of AI tools - they just would have copied from Stack Overflow instead.

A/B Testing Framework for Training

At Anthropic, we’ve been running experiments on different approaches to AI tool adoption and training. Here’s what we’re measuring:

Control Variables:

  • Prior experience level (years, previous roles)
  • Mentorship hours allocated
  • Project complexity scores
  • Code review thoroughness

Outcome Metrics:

  • Code review cycles per feature
  • Production incident attribution
  • Promotion readiness assessments
  • Peer knowledge-sharing participation

Early Findings (6 months, N=43 engineers):

  • Engineers with structured AI training (how/when to use tools) showed 28% fewer review cycles than those without
  • Bug attribution rates were NOT significantly different between AI-using and non-AI-using cohorts
  • Time-to-understanding (measured via design doc quality) actually improved for AI-augmented engineers when they had paired mentorship
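
For anyone who wants to sanity-check cohort differences like these on their own data, a permutation test is a simple, assumption-light way to do it - no distributional assumptions, just shuffling labels. Here’s a minimal Python sketch; the review-cycle counts below are made up for illustration, not our actual numbers:

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sample permutation test on the difference in means.

    Returns the fraction of shuffled label assignments whose mean
    difference is at least as extreme as the observed one (a p-value
    estimate).
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_iter

# Hypothetical review cycles per feature for two cohorts
trained = [2, 3, 2, 2, 3, 2, 4, 2]     # engineers with structured AI training
untrained = [4, 3, 5, 4, 3, 4, 5, 3]   # engineers without
p = permutation_test(trained, untrained)
```

A small p value here means the gap between cohorts is unlikely to be label-shuffling noise - which is exactly the check you want before claiming something like “28% fewer review cycles” is a real effect.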

The Confounding Variable

Luis, your compliance example is interesting, but it raises a question: Is the problem AI tools specifically, or is it that we’ve always had engineers who can’t explain their reasoning?

Before AI, we had:

  • Engineers who copied Stack Overflow without understanding
  • Engineers who cargo-culted patterns from senior engineers
  • Engineers who memorized solutions without grasping principles

AI might be making an existing problem more visible, not creating a new one.

What I’d Propose

If you want to really understand what’s happening:

  1. Baseline Assessment: Test your engineers’ ability to explain design decisions on code they wrote BEFORE your AI training interventions
  2. Cohort Design: Split new hires into experimental groups with different training approaches
  3. Longitudinal Tracking: Measure the same engineers over 12-18 months as they progress
  4. Control for Confounds: Account for mentorship quality, project types, team dynamics

I’m happy to share our experiment design template - we’ve been iterating on it for the past year. The key is measuring outcomes, not just observing patterns.
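
As a concrete starting point for the cohort design in step 2, stratified random assignment keeps experience levels balanced across training approaches, so seniority doesn’t confound the comparison. A rough sketch (the names, strata, and arm labels are hypothetical):

```python
import random
from collections import defaultdict

def assign_cohorts(engineers, key, arms, seed=0):
    """Stratified random assignment.

    Within each stratum (e.g. experience level, per the `key` function),
    engineers are shuffled and dealt round-robin into the arms, so every
    stratum is spread as evenly as possible across training approaches.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for e in engineers:
        strata[key(e)].append(e)
    assignment = {arm: [] for arm in arms}
    for members in strata.values():
        rng.shuffle(members)
        for i, e in enumerate(members):
            assignment[arms[i % len(arms)]].append(e)
    return assignment

# Hypothetical roster: (name, experience level)
engineers = [("ana", "jr"), ("ben", "jr"), ("chen", "jr"), ("dia", "jr"),
             ("eli", "sr"), ("fay", "sr")]
cohorts = assign_cohorts(engineers, key=lambda e: e[1],
                         arms=["structured_training", "control"])
```

The round-robin deal within each stratum is what keeps the arms comparable: if juniors all landed in one arm, you couldn’t tell whether the training or the seniority drove the outcome.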

Question for the group: What outcome metrics would actually convince you that an approach is working?

Not trying to be overly academic here - I genuinely think we need better data to make these decisions. The stakes are too high to rely on intuition alone.

Okay, this hits close to home as someone who’s on the “junior” side of this equation (though 7 years in, I’m somewhere in the middle now).

I want to share something I haven’t really talked about publicly, but this feels like a safe space for it.

My Honest Experience

I started my career before AI coding tools were really a thing. Learned React by reading docs, building side projects, breaking things, and debugging for hours. When GitHub Copilot and similar tools came out, I was already comfortable with the fundamentals.

But here’s what I’ve noticed in myself: When I’m tired, stressed, or working on something I’m less confident about, I lean on AI tools way more than I probably should. And yeah, sometimes I accept suggestions without fully thinking through the implications.

A few months ago, a senior engineer on my team asked me during code review: “Why did you structure the state management this way?” And I froze. Because honestly? The AI suggested it, it looked reasonable, tests passed, and I moved on.

That moment was humbling. I realized I’d been using the tool as a crutch in ways I didn’t even fully recognize.

What Actually Helps Me Learn

The best learning experiences I’ve had recently aren’t in formal training sessions. They’re when someone asks “why did you choose this approach?” and then actually listens to my reasoning.

Sometimes I can defend my decision and realize I understand it better than I thought. Sometimes I can’t, and that’s when the real learning happens - working through the reasoning with someone more experienced.

What I wish existed:

  • More clarity on WHEN to use AI tools vs when to struggle through manually
  • Explicit guidance: “Use AI for boilerplate, but design the architecture yourself first”
  • Regular “explain your work” sessions that feel like learning, not like being tested
  • Pairing sessions specifically focused on understanding AI-generated code

The Generational Divide?

I wonder if there’s a split between engineers who learned fundamentals before AI tools and those who started with them. I had the foundation first, so AI augments my work. But newer engineers might be building on AI-generated code from day one.

Not sure which approach will prove better in the long run. Maybe “understanding” will mean something different - less about memorizing syntax, more about architectural thinking and problem decomposition?

To the leaders in this thread: Please keep asking us “why” questions. It’s uncomfortable sometimes, but that discomfort is where the learning happens. And please be patient when we can’t answer immediately - we might need time to actually think through what we built.

Thanks for starting this conversation, Keisha. It’s making me more intentional about how I use these tools.

Coming at this from the design side, and wow - we’re seeing the exact same pattern with AI design tools.

The Design Parallel

I manage our design system, and we recently hired a designer who’s incredible at generating beautiful components in Figma using AI plugins. They can produce high-quality UI fast. But when I asked them to explain the design system architecture decisions - why we use these tokens, how the component hierarchy supports our product strategy, when to break the rules - they couldn’t articulate it.

They could use the design system. They couldn’t think in design systems.

Sound familiar?

My Failed Startup Story

Here’s why this resonates so deeply: I co-founded a startup that failed spectacularly. One of the reasons? We moved too fast without understanding the fundamentals of our problem space. We shipped features quickly, but they didn’t solve the right problems because we hadn’t done the deep thinking.

AI tools are creating the same trap - you can ship fast without understanding deeply. And in the short term, it looks like success. In the long term, you’ve built on quicksand.

Cross-Functional Insight

Alex’s comment about the generational divide is spot-on. But I think it’s not just engineering vs. design. It’s about whether you learned to think in your discipline before you had tools that could do some of that thinking for you.

I learned design by hand-coding HTML/CSS, by studying why certain interfaces worked, by failing and iterating. When Figma AI tools came out, they augmented that foundation. But designers who start with these tools? They might be learning patterns without understanding principles.

What We’re Trying

“Explain Your Design” Sessions: Just like Keisha’s code reviews, we do design critiques where the designer has to defend not just what they designed, but why. Why this interaction pattern? What problem does this solve? What did you consider and reject?

Cross-Functional Learning: Engineering and design facing the same challenge is actually an opportunity. Maybe we need shared “explain your work” sessions where engineers and designers both practice articulating their reasoning?

Show Your Broken Things: We do monthly showcases where people share what didn’t work and what they learned. It normalizes struggle and learning, counteracting the “AI made this perfect in 5 minutes” culture.

The Question I’m Wrestling With

Is this a tools problem or an onboarding problem?

Like Rachel pointed out, we had pattern-matchers before AI. But AI makes it SO much easier to pattern-match your way through without learning. The volume of “working without understanding” has grown dramatically.

Maybe the solution isn’t about the tools at all. Maybe it’s about making learning visible, valued, and required - regardless of what tools you use.

Curious what others think: Could this work across functions? Engineering + Design + Product doing shared learning sessions about AI tool usage and deep thinking?