Talent Shortages Doubling Hiring Timelines in 2026: When Should You Invest in Upskilling vs Keep Recruiting?

I need to share something that’s been keeping me up at night. We’ve been trying to fill three senior engineering roles at our EdTech startup for the past 90 days. Ninety days. In a previous life at Google, we could close these positions in 3-4 weeks. The market has fundamentally changed.

Here’s what I’m seeing across the industry, and I’d love your perspective on whether we’re thinking about this wrong.

The Talent Shortage Numbers Are Brutal

The data is clear: there are roughly 3 engineering jobs for every 1 qualified candidate right now. Hiring cycles for mid-to-senior roles are stretching to 40-50 days on average, and that’s if you’re moving fast. One in three engineering roles goes unfilled each year, and projections show this continuing through at least 2030.

For context, we started our search in December. It’s now mid-March. We’ve interviewed 12 candidates. Made 2 offers. Both accepted other offers before we could close. The candidates we want have 4-5 competing offers, and they’re gone within days of starting interviews.

The Question That’s Challenging My Assumptions

I’ve always believed in hiring the best external talent. Build a world-class team by recruiting A-players. But I’m starting to wonder if we’re fighting the wrong battle.

What if the better investment is upskilling our current team instead of continuing to compete in this brutal external market?

Here’s the preliminary ROI analysis that’s making me reconsider our strategy:

  • Upskilling costs roughly one-third as much as external hiring once you factor in recruiter fees, interview time, onboarding, and ramp-up
  • 70% of organizations are already planning to rely primarily on upskilling rather than external hiring (Gartner)
  • Companies prioritizing learning & development are 92% more likely to innovate and 52% more productive
  • Real-world data: Organizations report 7% reduction in attrition and €300K saved in recruiting costs when they shift to upskilling

But here’s what makes me hesitate: we need specialized ML expertise for our adaptive learning platform. Can you really upskill someone into ML engineering in a timeframe that matters? Or are there certain roles where external hiring is still the only viable path?

The Competing Pressures

On one hand, our product roadmap is blocked by these open positions. Our Q2 launch is at risk. The board is asking why we can’t move faster.

On the other hand, we have incredibly talented mid-level engineers who are hungry to grow. Last week, one of our senior engineers told me that two mid-level folks on her team have been learning ML on nights and weekends because they want to work on our recommendation engine. They’re already here. They know our codebase. They understand our users.

What if the timeline to upskill them is actually shorter than the timeline to find, recruit, close, and ramp external candidates?

What I’m Struggling With

The questions keeping me up:

  1. How do you calculate the real ROI of upskilling vs. external hiring when the opportunity cost of delayed features is in the equation?

  2. Which roles are upskillable, and which require deep domain expertise you can only get externally?

  3. How do you get executive buy-in for a 3-6 month upskilling investment when the board wants the position filled “yesterday”?

  4. What’s the right balance? Is it 80/20 upskill/external? 50/50? Does it vary by company stage?

  5. For those who’ve made this shift: What infrastructure did you need? Dedicated learning time? External training? Mentorship programs? How did you measure success?

Why This Matters Beyond Just Filling Roles

I keep coming back to this: every engineer we upskill becomes a culture carrier who builds institutional knowledge. External hires are amazing, but they don’t have the context of why we made certain architectural decisions or the relationships with cross-functional partners that took years to build.

And let’s be honest about the human element: when we invest deeply in someone’s growth, when we give them opportunities they couldn’t get elsewhere, that creates loyalty in a way that compensation alone never can.

But I also don’t want to be naive. There are roles where we genuinely need someone who’s been there and done that. Where the learning curve is too steep or the timeline too compressed.

So here’s what I’m asking this community: How are you navigating this? What’s working? What have you tried that didn’t work? And most importantly—am I overthinking this, or is this actually the strategic shift we need to make?

Looking forward to learning from your experiences.

Keisha, this hits home hard. We’re seeing the exact same dynamics at our SaaS company, and I want to push back gently on the framing of this as “either/or.”

You Need Both, But the Balance Shifts with Company Stage

Here’s what we learned the hard way: not all skills can be upskilled internally, especially domain expertise. We tried to upskill a great engineer into our fintech compliance architecture role. Six months later, we had to admit it wasn’t working—the regulatory knowledge was too deep, too specialized, and too high-stakes to learn on the job.

But for core engineering capabilities? Absolutely worth the investment.

What’s Working at Scale

We’ve implemented a hybrid approach that’s showing real results:

Strategic External Hiring (20-30% of growth):

  • Domain experts we can’t build internally
  • Senior+ roles requiring 10+ years specific industry experience
  • New technical domains where we need a “been there, done that” leader

Systematic Upskilling (70-80% of growth):

  • Clear career ladder frameworks (we can share ours if helpful)
  • Dedicated learning time—20% of sprint capacity set aside for learning and senior-engineer mentoring
  • Partnership with technical training providers
  • Quarterly skills inventory audits to identify internal candidates

The key insight: you can’t upskill what you can’t see. We started tracking engineers’ adjacent skills and career interests in structured 1:1s. Turns out, we had three engineers with ML backgrounds from previous roles who weren’t using those skills. We didn’t know because we never asked systematically.

The Data That Convinced Our Board

What got our CFO on board:

  • Time-to-productivity: Upskilled engineers hit full productivity 40% faster than external hires (they already know our systems, customers, culture)
  • Retention impact: 92% of engineers who complete our internal advancement program are still here 2 years later vs. 68% of external senior hires
  • Cost per filled role: $180K average for external (recruiter, interview time, failed searches, signing bonuses) vs. $60K for upskilling (training, reduced productivity during learning, mentor time)
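If it helps to sanity-check those per-role totals against your own numbers, here's a minimal sketch. The $180K and $60K totals come from the post above; the component breakdowns are hypothetical assumptions for illustration, not anyone's real budget lines.

```python
# Illustrative cost-per-filled-role comparison. Totals match the figures
# quoted above; the individual line items are assumed for illustration.

def total_cost(components: dict[str, float]) -> float:
    """Sum the cost components for one filled role."""
    return sum(components.values())

external = {
    "recruiter_fee": 45_000,        # assumed ~25% of base salary
    "interview_time": 25_000,       # engineer hours spent interviewing
    "failed_searches": 60_000,      # amortized cost of searches that didn't close
    "signing_bonus": 50_000,
}

upskilling = {
    "training": 15_000,             # courses / external programs
    "reduced_productivity": 30_000, # learner output dip during ramp
    "mentor_time": 15_000,          # senior engineer mentoring hours
}

print(f"External:   ${total_cost(external):,.0f}")    # $180,000
print(f"Upskilling: ${total_cost(upskilling):,.0f}")  # $60,000
```

Swapping in your own line items is the point: the comparison only holds if both sides are fully loaded.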

But here’s the reality check: you still need external hiring for specialized gaps. We’ve found ~70/30 upskill/external is the right balance for our stage (mid-stage, scaling from 50 to 120 engineers).

Warning: The False Choice Trap

Be careful not to let the pendulum swing too far. I’ve seen companies go all-in on “we’ll just train everyone” and it backfires when:

  1. The learning curve is too steep for business-critical timelines
  2. You don’t have senior expertise to teach the thing you’re trying to upskill into
  3. The person doesn’t actually want the new role (we learned this painfully—just because someone is good at X doesn’t mean they want to do Y)

Your ML Question Specifically

Can you upskill into ML engineering? Yes, IF:

  • They have strong fundamentals (math, stats, programming)
  • You have a senior ML engineer to mentor them (sounds like you do)
  • You can give them 3-6 months of dedicated learning + real project work
  • The role isn’t mission-critical from day one

Your two mid-level engineers who are already learning ML on nights/weekends? That’s gold. Intrinsic motivation + existing codebase knowledge + user context = massive advantage over external hire.

My recommendation: Run a 90-day pilot. Give those two engineers dedicated ML project time with clear milestones. Bet you’ll know by month 2 if it’s working. In parallel, keep your external search warm but not urgent.

If the pilot works, you’ve solved the problem at 1/3 the cost. If it doesn’t, you’ve lost 90 days but gained valuable data about your team’s upskilling capacity.

What specific gaps are you trying to fill with those three senior roles? Happy to share more detailed frameworks if helpful.

Both great perspectives here. Let me add the lens that often gets missed in this discussion: this is actually a technical architecture decision, not just an HR problem.

Bad Hires Create Technical Debt

Here’s what I wish more leaders understood: when you optimize hiring for speed, you’re taking on technical debt—just like when you ship fast without tests.

I’ve seen this play out painfully. We hired a “senior” engineer under time pressure for a critical cloud migration project. Great resume, good interview performance. Six months later, we had to refactor 40% of what he built because the architectural decisions were fundamentally wrong for our scale requirements.

The cost wasn’t just the hiring mistake—it was the 6-month setback on a strategic initiative.

Why Upskilling Has a Hidden Advantage

When you upskill someone internally, you have data external candidates can’t provide:

  • You know their learning velocity from past ramp-ups
  • You understand their communication style and how they handle feedback
  • You’ve seen them under pressure and know their decision-making patterns
  • You know which knowledge gaps they have vs. which are learnable

External candidates are fundamentally unknown variables dressed up in resumes and 5-hour interview loops.

The Infrastructure Tax Nobody Talks About

Michelle’s point about global sourcing requiring infrastructure is critical. You can’t just hire distributed and hope for the best. You need:

  • Async-first communication patterns (not just “we use Slack”)
  • Documentation culture that’s actually enforced
  • Timezone-aware process design (when can we make decisions? who’s the decider when timezones don’t overlap?)
  • Deliberate inclusion practices for remote voices

Building this infrastructure takes 6-12 months. If you don’t have it, global sourcing will create more problems than it solves.

The Contrarian Take: “Waiting It Out” Isn’t Always Wrong

I want to challenge the assumption that unfilled roles automatically equal missed opportunities.

Sometimes the best decision is building with a smaller, excellent team rather than scaling with mediocre hires. I’ve seen this pattern repeatedly:

  • Team of 5 exceptional engineers ships more than team of 12 average ones
  • Coordination overhead grows quadratically with team size
  • Communication complexity is n(n-1)/2—every new person creates new failure modes
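That n(n-1)/2 figure is just the number of pairwise communication channels; a two-line sketch makes the jump from the 5-person to the 12-person team concrete:

```python
# Pairwise communication channels in a team of n people: n * (n - 1) / 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

print(channels(5))   # 10 channels for a team of 5
print(channels(12))  # 66 channels for a team of 12
```

More than six times the coordination surface for a bit over twice the headcount.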

Before you fill that role, ask: Do we actually need this headcount, or have we just normalized the idea that growth = more people?

The Specialization Trend Creates an Opportunity

Here’s something I’m noticing: the industry’s obsession with specialization means generalists are being undervalued right now.

Everyone wants the person who’s done exactly this job at exactly this scale with exactly this tech stack. But the engineer who can think across systems, learn quickly, and adapt? That’s actually rarer and more valuable long-term.

This is your upskilling arbitrage opportunity. Hire/promote the excellent generalist, give them focused training in the specialized domain, and you’ve built something stronger than the specialist who can’t see beyond their silo.

My Recommendation for Your ML Roles

Given what you’ve shared:

  1. Run the 90-day pilot with those two hungry mid-levels (as Michelle suggested—that’s the right call)
  2. Keep one external search active for a senior ML leader who can mentor them and make architectural decisions
  3. Don’t fill the third role yet—see if you actually need it after the upskilling pilot

You might discover that one experienced ML architect + two upskilled engineers beats three external senior hires. The experienced person provides the “been there” judgment, the upskilled folks provide the codebase knowledge and motivation.

Hybrid approach, but weighted toward building internal capability with strategic external expertise.

The wrong answer is filling all three roles externally just to hit a headcount target.

Coming at this from the design/product side, and I have to say—I’ve lived both sides of this equation.

I Was the Upskilling Success Story

Full transparency: I was upskilled from designer to design systems engineer about 3 years ago. It changed my career trajectory completely. So I’m biased, but I also know what makes it work vs. what makes it fail spectacularly.

What made my upskilling work:

  1. Clear expectations upfront - My manager didn’t sugarcoat it. “This will be hard. You’ll feel like you don’t know what you’re doing for 6 months. That’s normal.”

  2. Protected learning time - 20% of my sprint capacity was explicitly for learning. Not “find time when you can,” but actual protected slots on my calendar.

  3. Mentor who wasn’t my manager - My manager couldn’t teach me React and system architecture. They paired me with a senior engineer who had patience and genuinely wanted to teach.

  4. Real project, not toy work - I learned on production code that mattered. Toy projects don’t create the necessary pressure to actually learn.

  5. Permission to fail - My first few PRs were… rough. But nobody made me feel stupid. The code review feedback was constructive, not demoralizing.

The Part Nobody Talks About: Motivation Matters More Than Current Skills

Here’s what I wish more leaders understood: you cannot upskill someone who doesn’t actually want the new role.

I’ve seen this fail so many times. Manager sees someone is “good at X” and assumes they want to do “Y” (the next logical step on some career ladder the manager invented).

Your two engineers learning ML on nights and weekends? That’s the signal. They’re already investing their own time. That intrinsic motivation is worth 10x more than someone who has the background but doesn’t care.

But Also: I’ve Seen External Hiring Win

Plot twist: at my startup, we also hired external specialists for highly technical roles, and honestly? It was faster for niche skills.

We needed someone who deeply understood accessibility compliance (WCAG 2.2, ARIA, screen readers, the whole stack). I could learn the basics, but we were building an EdTech product that needed to be accessible Day 1 for legal/ethical reasons.

Hired an external specialist. She was productive within 2 weeks because she’d done this exact thing at 3 other companies. Sometimes you genuinely need the “been there, solved that” expertise.

The Honest Trade-offs

Upskilling works when:

  • The person genuinely wants to grow in this direction (not just “well, I guess”)
  • You have 3-6 months of runway (not “we need this feature next sprint”)
  • You have internal expertise to teach/mentor (you can’t upskill into a gap you don’t have filled)
  • The role isn’t mission-critical immediately (learning means mistakes, mistakes need to be acceptable)

External hiring works when:

  • Timeline is compressed and the cost of delay is real
  • The skill is niche/specialized and you don’t have internal expertise to teach it
  • You need someone who can make high-stakes decisions confidently from day one
  • You’re entering a new domain and need someone to build the practice from scratch

The Cultural Benefit You Mentioned Is Real

One thing Keisha said really resonated: upskilled employees become culture carriers.

When my startup scaled from 8 to 35 people, the folks who’d been upskilled internally were the ones who helped new hires understand “why we do things this way.” They had context external hires took months to build.

That institutional knowledge is genuinely valuable. Not just for code—for product decisions, customer understanding, team dynamics.

My Contrarian Take: Sometimes You Don’t Need the Role At All

Here’s a question I don’t think gets asked enough: Do you actually need to fill all three roles, or did you just plan for three because that felt right at the time?

One of the hardest lessons from my failed startup: we kept hiring because our roadmap said we needed these roles. Turned out the roadmap was wrong. The features we thought were critical weren’t what customers wanted.

Before you invest in upskilling OR external hiring, validate that the work actually needs to be done.

Practical Advice for Your Situation

Based on what you’ve shared, here’s what I’d recommend:

  1. Give those two ML-curious engineers a real project with a 90-day checkpoint - Not a side project, a real feature that matters but isn’t mission-critical
  2. Bring in ONE external ML expert as both doer and teacher - Not three roles, one person who can make architectural decisions AND mentor the upskilled folks
  3. Build the program infrastructure while you run the pilot - Career frameworks, learning budgets, mentor matching, success metrics

If it works, you’ve solved the problem at lower cost with higher retention. If it doesn’t, you’ve learned what types of roles ARE upskillable in your context.

The real wisdom here isn’t choosing upskilling over external hiring. It’s building the capability to do both strategically, depending on what the specific role demands.

This is such a rich discussion—I want to add the product/business lens because I think we’re missing a critical angle here.

Hiring Delays = Roadmap Delays = Revenue Delays

Let me be blunt about something nobody wants to say out loud: every week you don’t fill these roles, you’re making product decisions by default.

We faced this exact situation 6 months ago. Critical ML role for our fintech platform. Couldn’t fill it for 3 months. That delay cascaded:

  • Feature pushed from Q1 to Q2
  • Customer pilot agreements delayed (they needed that feature)
  • Revenue recognition pushed by a quarter
  • Board deck got awkward fast

The opportunity cost of unfilled roles isn’t just salary savings—it’s delayed revenue, competitive positioning, and strategic momentum.

The ROI Calculation Has to Include Opportunity Cost

When you’re comparing upskilling vs. external hiring, the math everyone does:

  • External hire: $180K (recruiter, signing bonus, etc.)
  • Upskilling: $60K (training, reduced productivity, mentorship)
  • “Obviously upskilling is better!”

But that’s incomplete. The real comparison:

External hire:

  • $180K hiring cost
  • 2-4 weeks to close offer
  • 2-4 weeks notice period
  • 4-8 weeks ramp time
  • Total time to productivity: 8-16 weeks

Upskilling:

  • $60K training cost
  • 12-24 weeks to proficiency (realistic, not optimistic)
  • Reduced output from mentor + learner during that period
  • Total time to productivity: 12-24 weeks

The question becomes: What’s the value of those extra 4-8 weeks of productivity? If you’re racing to launch before a competitor or hitting a regulatory deadline, the “cheaper” option might cost you millions in delayed revenue.
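To make that tradeoff concrete, here's a hedged sketch that folds the delay cost into both paths. Every figure is an assumption for illustration: the direct costs reuse the $180K/$60K from earlier in the thread, the weeks are midpoints of the ranges above, and the $25K/week value-of-delay is invented; plug in your own estimate.

```python
# Sketch: total cost of filling a role = direct spend + cost of every week
# the role isn't productive. All figures are illustrative assumptions.

def total_cost(direct_cost: float, weeks_to_productivity: float,
               weekly_delay_cost: float) -> float:
    """Direct hiring/training spend plus the opportunity cost of the ramp."""
    return direct_cost + weeks_to_productivity * weekly_delay_cost

weekly_delay_cost = 25_000  # assumed revenue/momentum lost per week of delay

external = total_cost(180_000, weeks_to_productivity=12,   # midpoint of 8-16
                      weekly_delay_cost=weekly_delay_cost)
upskilled = total_cost(60_000, weeks_to_productivity=18,   # midpoint of 12-24
                       weekly_delay_cost=weekly_delay_cost)

print(f"External:  ${external:,.0f}")   # $480,000
print(f"Upskilled: ${upskilled:,.0f}")  # $510,000
# With these assumptions the "cheaper" option is actually more expensive.
# Below roughly $20K/week of delay cost, the ranking flips back.
```

The break-even point is the useful output: it turns "the board wants it yesterday" into a number you can argue about.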

When Upskilling Makes Strategic Sense

I’m not anti-upskilling—we’ve done it successfully. But here’s when it worked for us:

The feature wasn’t on the critical path. We could afford to have someone learning while building. The timeline had buffer built in.

We needed the capability long-term, not just for one project. Building internal ML expertise made sense because we’re going deeper into AI-driven features over the next 2 years.

The upskilled person would own the area. Not just contribute—actually become the domain expert. That ownership motivation is huge.

The Cross-Functional Impact Nobody’s Talking About

Here’s what kills me about this discussion: engineering hiring strategy IS product strategy.

When you can’t hire ML engineers, you’re actually making product decisions:

  • We can’t build intelligent recommendations → simpler rule-based system
  • We can’t do real-time personalization → batch processing with delays
  • We can’t compete on AI features → compete on UX, data, or integrations instead

Sometimes that’s fine! Sometimes simpler is better. But let’s be honest that it’s a strategic tradeoff, not just a hiring problem.

My Contrarian Product Take

What if the right answer isn’t “upskill vs. hire” but “descope the feature”?

I’ve seen this pattern too many times: we staff for the roadmap we planned 6 months ago, not the roadmap we should have based on what we learned from customers.

Before you invest 3-6 months upskilling OR 3 months recruiting, validate the feature is still the right bet.

Maybe what your customers actually need isn’t sophisticated ML recommendations—maybe they need faster load times, better data export, or simpler onboarding. The constraint of not having ML talent might be forcing you toward a better product.

Specific Questions for Your Situation

You mentioned Q2 launch at risk. Let me ask:

  1. What’s the customer/revenue impact of delaying to Q3? Real numbers, not just “the board won’t like it.”

  2. Can you descope to an MVP that doesn’t require ML? Sometimes “dumb” recommendations based on simple rules get you 70% of the value.

  3. What happens if the upskilling pilot fails at month 2? You’ve lost 8 weeks—can your Q2 timeline absorb that?

  4. Is there a contract or consulting option? Bring in ML expertise for 3 months to ship V1, while training internal team for V2+?

My Actual Recommendation

Parallel path strategy:

  1. Start the 90-day upskilling pilot with those motivated engineers (everyone agrees this is smart)

  2. Simultaneously descope the Q2 feature to an MVP that doesn’t require ML (get something shipped, learn from customers)

  3. Use the upskilled engineers for V2 once they’re ready, informed by V1 customer feedback

  4. Only hire externally if: (a) upskilling fails AND (b) customer feedback says ML is critical AND (c) revenue impact justifies the cost

This gives you:

  • Momentum on the roadmap (V1 ships)
  • Internal capability building (upskilling continues)
  • Customer validation (before you bet big on ML)
  • Fallback option (external hiring if needed)

The Real Question

The question isn’t “upskilling vs. external hiring.”

The question is: “What’s the smallest bet we can make to learn whether this feature drives the outcomes we need, and how do we build capability in parallel?”

Product thinking + talent strategy = better decisions than optimizing either in isolation.

What’s the core customer problem you’re trying to solve with those three ML roles? Let’s work backward from there.