81,000+ Package Versions with CVEs Are EOL and Unpatchable - Now What?

I just read the Sonatype 2026 Software Supply Chain Report, and one number jumped out at me: 81,000+ package versions with known CVEs are end-of-life and unpatchable. HeroDevs estimates the real number across all registries might be 400,000.

Let me repeat that: Nearly half a million vulnerable package versions that will never be patched.

And the kicker? 5-15% of enterprise dependency graphs contain these EOL packages. That means YOUR production systems probably depend on software with known, exploitable vulnerabilities that will never be fixed.

Welcome to the future of open source security. It’s worse than you think.

Why This Is Different From Normal Vulnerabilities

Normal vulnerability: CVE published, maintainer patches it, you upgrade, problem solved.

EOL vulnerability: CVE published, no maintainer to patch it, your options are:

  1. Live with the vulnerability (hope nobody exploits it)
  2. Fork and patch yourself (expensive, ongoing burden)
  3. Pay for extended support (if available, often expensive)
  4. Rewrite your application (months of work, business might say no)

Every option is costly. Every option could have been avoided with better OSS sustainability.

A Real Example From My Work

I consult with fintech startups in Africa. One client uses Node 12 in production. Node 12 went EOL in April 2022. That’s almost 4 years running unsupported, vulnerable software.

Why haven’t they upgraded?

  • Their app depends on 47 packages
  • 12 of those packages don’t work on Node 16+
  • Updating those 12 packages requires rewriting core business logic
  • Cost estimate: 6 months, 3 engineers, $300K
  • Business priority: “We’ll get to it next quarter” (for 4 years running)

Meanwhile, they’re sitting on Node 12 with 23 known CVEs, 8 rated critical. They know this. Their security team knows this. But the cost to fix it keeps getting deferred.

The Math Doesn’t Work

Here’s the impossible equation enterprises face:

Option A - Accept Risk

  • Cost: $0 immediate
  • Risk: Data breach ($5M+ average cost)
  • Probability: Unknown but increasing
  • Regulatory: Non-compliant (fail audits)

Option B - Upgrade/Rewrite

  • Cost: $500K - $5M depending on scope
  • Timeline: 6-18 months
  • Business disruption: Significant
  • Opportunity cost: Can’t ship new features

Option C - Extended Support

  • Cost: $100-300K/year
  • Availability: Limited to certain projects
  • Coverage: Not all dependencies
  • Long-term: Vendor dependency

None of these options are good. All are expensive. All result from the same root cause: We built on OSS that we didn’t support, and now it’s our problem.

The Scale of the Problem

Sonatype’s numbers are staggering:

  • 81,000+ unpatchable package versions (known floor)
  • 5-15% of enterprise dependencies are EOL
  • Growing faster than we can remediate

Let’s do the math:

  • If you have 500 dependencies (typical medium enterprise)
  • 5% are EOL = 25 vulnerable packages you can’t patch
  • Each requires individual assessment and remediation strategy
  • At 40 hours per package (conservative) = 1,000 engineering hours
  • At $150/hour loaded cost = $150K just to understand your exposure

And that’s before you fix anything.
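The back-of-envelope math above is easy to run for your own numbers. Here's a minimal sketch; the inputs (dependency count, EOL rate, hours, loaded rate) are the illustrative assumptions from this post, not measured data:

```python
# Back-of-envelope exposure estimate. All inputs are illustrative
# assumptions from the discussion above, not measured data.

def exposure_cost(total_deps, eol_rate, hours_per_pkg, loaded_rate):
    """Estimate the engineering cost just to ASSESS EOL exposure."""
    eol_packages = round(total_deps * eol_rate)
    hours = eol_packages * hours_per_pkg
    return eol_packages, hours, hours * loaded_rate

pkgs, hours, cost = exposure_cost(
    total_deps=500,      # typical medium enterprise
    eol_rate=0.05,       # low end of the 5-15% range
    hours_per_pkg=40,    # conservative assessment effort per package
    loaded_rate=150,     # $/hour loaded engineering cost
)
print(f"{pkgs} EOL packages, {hours} hours, ${cost:,.0f}")
# → 25 EOL packages, 1000 hours, $150,000
```

Swap in the 15% upper bound and the assessment cost alone triples, before any remediation work.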

Why This Will Get Worse

Remember the earlier discussions about maintainer burnout? 60% considering quitting?

Every maintainer who quits is another project going EOL. Every EOL project is more unpatchable vulnerabilities.

We’re in a death spiral:

  1. Maintainers burn out because companies don’t fund OSS
  2. Projects go EOL
  3. Companies face massive remediation costs
  4. Companies still don’t fund OSS (they pay remediation instead)
  5. More maintainers burn out (seeing their work become liability)

The remediation industry (HeroDevs, Tidelift, etc.) is growing because we’re failing at prevention. We’re creating a problem, then paying more to solve it than prevention would have cost.

What Companies Are Actually Doing

I see three patterns in the wild:

Pattern 1 - Denial
“It hasn’t been exploited yet, so we’re fine.”
This is gambling, not risk management. Eventually, your number comes up.

Pattern 2 - Whack-a-Mole
Upgrade when specific CVEs get too scary, ignore the rest.
This is reactive, expensive, and never gets ahead of the problem.

Pattern 3 - Extended Support Vendors
Pay HeroDevs, Tidelift, or similar for EOL support.
This works but is expensive and shifts risk rather than solving it.

What I almost never see: Proactive OSS funding to prevent projects from going EOL in the first place.

The Economics Are Insane

A typical company might:

  • Use 500 OSS dependencies saving $50M+ in licensing costs
  • Spend $0 supporting those dependencies
  • Face $5M+ in remediation when things go EOL
  • Pay $200K/year for extended support vendors
  • Still have unfixable vulnerabilities in production

Meanwhile, if they’d spent $500K/year supporting critical dependencies (the model Keisha shared earlier), most of those projects wouldn’t have gone EOL.

We’re paying 10x more for remediation than prevention would have cost. It’s economically irrational.

The Security Implications Are Dire

From a security perspective, EOL dependencies are:

Known Attack Surface

  • CVEs are public knowledge
  • Exploit code often publicly available
  • Scanners actively look for vulnerable versions
  • Attackers know you probably can’t patch

Compliance Nightmare

  • SOX, PCI-DSS, and HIPAA all effectively require that known vulnerabilities be remediable
  • “We know it’s vulnerable but can’t fix it” fails audits
  • Insurers increasingly exclude known, unpatched vulnerabilities from coverage

Supply Chain Poison

  • One EOL package can poison your entire supply chain
  • Can’t ship your software to customers who run vulnerability scanners
  • Can’t pass vendor security reviews

This isn’t theoretical. I’ve seen companies lose major contracts because they couldn’t remediate EOL dependencies.

What Needs to Happen

From where I sit, here’s what would actually fix this:

1. Dependency Health Monitoring

  • Every company should know which dependencies are at risk
  • Monitor maintainer activity, funding, bus factor
  • Red flag projects showing EOL warning signs
  • Budget for remediation BEFORE things go EOL
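The red-flagging above can be automated against your dependency inventory. A minimal sketch, with an entirely hypothetical inventory: in practice the data would come from your SBOM plus a source like endoflife.date or a vendor feed, and the fields (`eol`, `bus_factor`) are assumptions for illustration:

```python
from datetime import date

# Hypothetical inventory for illustration; a real one would be generated
# from an SBOM and enriched with EOL dates and maintainer data.
INVENTORY = {
    "nodejs@12":      {"eol": date(2022, 4, 30), "bus_factor": None},
    "left-pad@1.3":   {"eol": None,              "bus_factor": 1},
    "requests@2.31":  {"eol": None,              "bus_factor": 5},
}

def red_flags(inventory, today, warn_months=12):
    """Return packages that are EOL, near EOL, or single-maintainer."""
    flags = []
    for name, meta in inventory.items():
        eol = meta.get("eol")
        if eol is not None and (eol - today).days < warn_months * 30:
            flags.append(name)   # already EOL or inside the warning window
        elif meta.get("bus_factor") == 1:
            flags.append(name)   # bus factor of one is an EOL warning sign
    return flags

print(red_flags(INVENTORY, today=date(2026, 1, 1)))
# → ['nodejs@12', 'left-pad@1.3']
```

Run this in CI on a schedule and the "red flag" step becomes a report instead of a surprise.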

2. Proactive OSS Funding

  • Fund critical dependencies while they’re still maintained
  • Prevent EOL instead of paying for extended support after
  • The math: $500K/year prevention vs $2M+/year remediation

3. Sunset Processes for OSS

  • Projects should have formal EOL procedures
  • Advance warning (12+ months)
  • Migration guides and alternatives
  • Security support during transition period

4. Industry Standards

  • Software Bill of Materials (SBOM) should be mandatory
  • Dependency health should be part of security audits
  • “What’s your plan if this maintainer quits?” should be a standard question

5. Insurance and Regulation

  • Cyber insurance should cover (or exclude) OSS dependency risk
  • Regulators should require dependency management plans
  • Make it a cost of doing business, not optional

The Question Nobody Wants to Answer

Here’s what I ask clients: “If your three most critical OSS dependencies went EOL tomorrow, how long until your business is at serious risk?”

Most can’t answer. They don’t know which dependencies are most critical. They don’t have contingency plans. They’re just hoping it doesn’t happen.

Hope is not a security strategy.

We Created This Problem

Every time a company:

  • Uses OSS without funding it
  • Ignores maintainer burnout warnings
  • Waits until projects are EOL to care
  • Pays for remediation but not prevention

We’re creating more unpatchable vulnerabilities.

The 81,000 (or 400,000) unpatchable packages exist because we built an entire industry on volunteer labor, then acted surprised when the volunteers couldn’t keep up.

The Bottom Line

Living with EOL dependencies is living with known vulnerabilities. It’s accepting that your security posture includes unfixable holes.

Some companies can afford to rewrite. Most can’t.
Some companies can pay for extended support. Most won’t until forced.
Some companies will get breached through EOL dependencies. All are at risk.

The sustainable solution is boring: Fund OSS maintenance BEFORE projects go EOL. It’s cheaper, more secure, and actually solves the problem instead of managing the symptoms.

But until companies treat OSS dependencies like the critical infrastructure they are, we’ll keep generating more unpatchable vulnerabilities.

And security people like me will keep having uncomfortable conversations with executives about known vulnerabilities we can’t fix.

How do we fix this? Same answer as every other OSS sustainability question: Money. Specifically, money going to maintainers before they burn out and walk away.

Everything else is just expensive band-aids on a preventable problem.

Sam, this hits way too close to home. I’m living this nightmare right now at my financial services company.

We’re in Exactly This Situation

Remember my earlier post about Ingress NGINX going EOL? That’s just the tip of the iceberg. We just completed a dependency audit and found:

  • 847 total OSS dependencies (including transitive)
  • 73 are EOL or will be within 12 months
  • 31 have known CVEs with no patches available
  • Cost to remediate: $8-12M over 18 months

And my CFO’s response? “Can we just accept the risk?”

The Compliance Trap

Sam mentioned regulatory requirements, and this is where it gets really painful. We have:

SOX Compliance: Our IT controls require remediation of known vulnerabilities within 30 days for critical, 90 days for high.
PCI-DSS: No unpatched vulnerabilities in systems handling payment data.
Federal Banking Regs: Examiners are specifically asking about open source dependency management now.

When a vulnerability has no patch available, we’re in violation. Our options:

  1. Remediate (expensive)
  2. File exceptions (mounting pile, auditors getting skeptical)
  3. Fail audits (not an option)

We’re literally paying $200K/year to HeroDevs for Node 12 extended support because migrating 40+ applications would cost $5M+ and the business won’t approve it.

The Death Spiral Is Real

Sam described the death spiral perfectly:

  1. Don’t fund OSS
  2. Projects go EOL
  3. Pay for remediation
  4. Still don’t fund OSS

I’m trying to break this cycle. After the Ingress NGINX announcement, I finally got approval for a $500K/year OSS sustainability budget (thanks to Keisha’s earlier post for the template!).

But here’s the thing: $500K/year would have prevented most of our EOL dependencies. Now we’re spending $2M+/year on:

  • Extended support vendors ($400K)
  • Emergency migrations ($800K)
  • Security exception processes ($200K)
  • Additional audit and compliance work ($600K)

We’re paying 4x more in remediation than prevention would have cost. And we STILL have unfixable vulnerabilities.

What Actually Works (And What Doesn’t)

From 18 months of fighting this:

What Works:

  • Dependency health monitoring (we use Tidelift’s catalog)
  • Quarterly security reviews of critical dependencies
  • Budget line for “OSS sustainability” separate from general engineering
  • Executive sponsorship (CTO finally gets it after Ingress NGINX)

What Doesn’t Work:

  • Reactive patching (too late, too expensive)
  • Hoping maintainers will keep going (they won’t)
  • Waiting for business to approve migrations (they delay forever)
  • Extended support as primary strategy (expensive, doesn’t scale)

Sam’s Recommendations Are Spot-On

Especially this: “Fund critical dependencies while they’re still maintained.”

If we’d spent $500K/year over the past 5 years ($2.5M total), we’d have:

  • Prevented most EOL situations
  • Maintained relationships with maintainers
  • Had advance warning and migration time
  • Better security posture

Instead, we’re spending $10M+ cleaning up the mess.

The math is obvious. The hard part is getting executives to see prevention as valuable when there’s no immediate crisis.

My Advice to Other Engineering Leaders

If you’re in a similar boat:

  1. Do the audit NOW: Know your exposure before it’s a crisis
  2. Calculate the real costs: Include compliance, audit, migration, and risk
  3. Present prevention vs. remediation: Show executives the 4-10x cost multiplier
  4. Start small: Even $100K/year prevents some EOL situations
  5. Make it systematic: OSS sustainability as a budget line, not an afterthought

And please, learn from my mistakes. We knew this was coming. We had warnings. We didn’t act until crisis forced us.

Don’t be us.

The data scientist in me wants to break down Sam’s numbers more, because the scale of this problem is being understated if anything.

Let’s Do the Real Math

Sam mentioned 81,000+ package versions with CVEs that are EOL. Let’s contextualize that:

Total npm packages: ~2.5 million
Total PyPI packages: ~500K
Total packages with ANY version EOL: Probably 30-40% (educated guess)
EOL packages that are still being downloaded: 15-20%

So while 81K sounds like a lot, the actual exposure is:

  • Millions of daily downloads of EOL packages
  • Tens of thousands of companies affected
  • Hundreds of thousands of applications at risk

The Compounding Problem

Here’s what makes this exponentially worse: Dependency depth.

Average enterprise application:

  • 50-100 direct dependencies
  • 400-800 total dependencies (including transitive)
  • 5-7 levels deep in dependency tree

If ONE dependency goes EOL at level 3, it affects everything downstream. You might not even know it’s there until you run a dependency audit.
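Finding those buried EOL packages means walking the whole graph, not just direct dependencies. A minimal sketch with a made-up dependency graph (all package names here are hypothetical):

```python
# Minimal sketch: walk a (hypothetical) dependency graph and report every
# path that reaches an EOL package, so deep transitive exposure is visible.
GRAPH = {
    "app": ["web-framework", "orm"],
    "web-framework": ["http-core"],
    "orm": ["db-driver"],
    "http-core": ["legacy-parser"],   # EOL package buried at level 3
    "db-driver": [],
    "legacy-parser": [],
}
EOL = {"legacy-parser"}

def eol_paths(graph, root, eol):
    """Return every dependency path from root that ends at an EOL package."""
    paths, stack = [], [(root, [root])]
    while stack:
        node, path = stack.pop()
        if node in eol:
            paths.append(path)
        for dep in graph.get(node, []):
            stack.append((dep, path + [dep]))
    return paths

print(eol_paths(GRAPH, "app", EOL))
# → [['app', 'web-framework', 'http-core', 'legacy-parser']]
```

Nothing in `app`'s direct dependency list hints at the problem; only the full walk surfaces it.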

The Timeline Problem

Luis mentioned migrations taking 6-18 months. Let me break down why:

Discovery Phase: 1-2 months

  • Identify all EOL dependencies
  • Assess which are critical
  • Determine remediation approach

Planning Phase: 1-3 months

  • Technical design for replacements
  • Risk assessment
  • Resource allocation and prioritization

Implementation Phase: 3-12 months

  • Rewrite/refactor affected code
  • Test thoroughly (can’t skip this for security issues)
  • Deploy and monitor

Total: 5-17 months, assuming nothing slips

But here’s the kicker: While you’re remediating, more packages go EOL. You’re running to stand still.

The Economic Model Is Broken

Sam’s right that we’re paying 10x more for remediation than prevention. Let me show the numbers:

Prevention Model (what we should do):

  • $500K/year OSS sponsorship
  • Funds 10-15 critical dependencies
  • Prevents 80-90% of EOL situations
  • 10-year cost: $5M

Remediation Model (what we actually do):

  • $0 prevention
  • $2M per major EOL crisis (happens every 2-3 years)
  • Extended support: $200-400K/year ongoing
  • Emergency migrations: $500K-5M per incident
  • 10-year cost: $15-30M

The ROI on prevention is 3-6x. But companies don’t see it because prevention is invisible (nothing bad happens) while remediation is a visible “project.”
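Those two cost models are simple enough to sanity-check in code. A sketch using midpoints of the ranges quoted above; every input is this post's own rough estimate, not measured data:

```python
# Illustrative 10-year cost comparison using the figures quoted above.
# All inputs are the post's own rough estimates, not measured data.

def prevention_cost(years=10, annual=500_000):
    """Steady annual sponsorship of critical dependencies."""
    return annual * years

def remediation_cost(years=10, crisis_cost=2_000_000, crisis_every=2.5,
                     support=300_000, migration=1_500_000):
    """Crisis-driven spending: EOL events plus ongoing extended support."""
    crises = years / crisis_every          # a major EOL crisis every 2-3 years
    return crises * (crisis_cost + migration) + support * years

prev = prevention_cost()     # $5M over 10 years
rem = remediation_cost()     # midpoint of the $15-30M range
print(f"prevention ${prev/1e6:.0f}M vs remediation ${rem/1e6:.0f}M, "
      f"ROI {rem/prev:.1f}x")
```

Push the migration and support inputs toward their upper bounds and the ratio climbs toward the 6x end of the range.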

What Machine Learning Says

I actually built a predictive model for OSS project health. Using:

  • Commit frequency
  • Maintainer count
  • Issue response time
  • Funding sources
  • Community engagement

My model can predict with 78% accuracy which projects will go EOL within 12 months.

The top risk factors:

  1. Single maintainer (bus factor = 1)
  2. No funding sources
  3. Declining commit frequency
  4. Increasing issue backlog
  5. Maintainer expressing burnout

Every company should be running this analysis quarterly on their critical dependencies. But almost none do.
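Even without a fitted model, the risk factors above can be turned into a crude quarterly score. This is not Rachel's actual model; it's a hypothetical weighted checklist, with weights invented purely for illustration (a real model would be fit to labeled project-health data):

```python
# Hypothetical risk score mirroring the five factors listed above.
# The weights are invented for illustration, not fit to data.

def eol_risk_score(bus_factor, funded, commits_trend, backlog_trend,
                   burnout_signals):
    """Score in [0, 1]: rough likelihood of EOL within 12 months."""
    score = 0.0
    if bus_factor <= 1:
        score += 0.35        # single maintainer: the strongest signal
    if not funded:
        score += 0.20        # no funding sources
    if commits_trend < 0:
        score += 0.15        # declining commit frequency
    if backlog_trend > 0:
        score += 0.15        # growing issue backlog
    if burnout_signals:
        score += 0.15        # maintainer expressing burnout
    return round(score, 2)

# A single-maintainer, unfunded project with declining activity:
print(eol_risk_score(1, False, -0.4, 0.3, True))   # → 1.0
```

Scoring your top 50 dependencies this way each quarter, then sorting, is a first approximation of the analysis Rachel describes.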

Sam’s Five-Point Plan Is Right

I want to emphasize his point about SBOM (Software Bill of Materials) being mandatory:

Right now, most companies don’t even KNOW what OSS they depend on. They couldn’t tell you:

  • Which packages are EOL
  • Who maintains them
  • What the funding situation is
  • What the bus factor is

SBOM should be required by:

  • Cyber insurance (know your risk)
  • Security audits (standard practice)
  • Vendor due diligence (don’t onboard risk)
  • Regulatory compliance (especially financial services)

Make it mandatory, and suddenly companies have to care about OSS sustainability.

The Prediction Nobody Wants to Hear

Based on the trends:

  • 60% of maintainers considering quitting
  • 44% citing burnout
  • Accelerating with AI-generated noise

My model predicts we’ll see 2-3x more EOL events in 2026-2027 than we saw in 2024-2025.

The 81,000 unpatchable packages? That number is going to get a lot bigger before it gets better.

Unless companies start funding OSS proactively (which Sam, Luis, and Keisha have all advocated for), we’re heading into a crisis that will make the Log4j incident look small.

The data is screaming at us. Are we listening?

Sam’s post is a wake-up call, Luis’s experience is sobering, and Rachel’s data confirms what many of us have been warning about for years.

From my position as VP Engineering, I want to talk about what leadership needs to do RIGHT NOW.

This Is a Leadership Failure

Let’s be honest: The 81,000+ unpatchable packages exist because engineering leaders (myself included, in the past) failed to prioritize OSS sustainability.

We treated OSS as “free” instead of as “unfunded infrastructure we depend on.”

The result: We have critical business dependencies with no maintenance plan, no funding commitment, and no contingency if maintainers walk away.

That’s not technical debt. That’s executive negligence.

What I’m Doing (And What You Should Too)

After Ingress NGINX and reading these discussions, here’s what I’ve implemented:

Immediate Actions (Done):

  1. Dependency audit completed ($50K consulting, worth every penny)
  2. Risk assessment of top 50 dependencies (bus factor, funding, activity)
  3. $500K/year OSS sustainability budget approved
  4. Quarterly dependency health reviews (standing meeting on my calendar)

Short-term (Next 3 months):

  1. Direct sponsorships for 12 critical dependencies
  2. Engineering time allocation (20% time) for OSS maintenance
  3. SBOM generation automated in CI/CD pipeline
  4. EOL monitoring and alerting system

Long-term (Next 12 months):

  1. Industry partnerships for shared OSS funding
  2. Executive education on OSS sustainability
  3. Recruitment focused on maintainer relationships
  4. Integration of OSS health into our risk management framework

The Business Case I Made to My CEO

“We save $50M/year using OSS instead of commercial alternatives. I’m asking for $500K/year (1% of savings) to protect that $50M investment from sudden EOL events that would cost $5-10M+ to remediate. The ROI is 10-20x.”

She approved it in one meeting. The key was framing it as risk management for existing investments, not charity for OSS maintainers.

What Board and C-Suite Need to Hear

If you’re presenting this to executives who don’t get it:

Risk Framework:

  • Current state: Critical dependencies with no contingency planning
  • Likelihood: High (60% of maintainers considering quitting)
  • Impact: $5-10M+ per major EOL event
  • Remediation timeline: 6-18 months (business disruption)
  • Prevention cost: $500K/year (1% of OSS value to our business)

The Question: “Are we comfortable with this risk profile, or should we invest in prevention?”

No rational executive says “yes, we’re comfortable with this” once they see the numbers.

Calling Out the Excuses I Hear

“We can’t afford OSS sponsorship”
You can’t afford NOT to. You’re already spending 5-10x on remediation.

“It’s not our responsibility”
It is when your business depends on it. Would you run production systems on unsupported commercial software?

“The OSS community will figure it out”
They won’t. They’re quitting. See: Ingress NGINX, External Secrets Operator, and hundreds of smaller projects.

“We’ll deal with it when it becomes a problem”
It’s ALREADY a problem. You’re just hoping you won’t get hit. That’s not strategy, that’s gambling.

The Industry Needs to Change

Individual companies stepping up (like mine) helps. But we need industry-wide shifts:

  1. Insurance Requirements: Cyber insurance should require SBOM and dependency health assessments
  2. Regulatory Standards: Especially financial services, healthcare - mandate dependency management
  3. Procurement Standards: Large enterprises should require OSS sustainability from vendors
  4. Industry Consortiums: Companies using the same OSS should pool funding
  5. Public Recognition: Leaders who fund OSS proactively should be celebrated

We’re at an Inflection Point

Sam’s right: We’re in a death spiral of underfunding leading to EOL leading to expensive remediation leading to continued underfunding.

But it doesn’t have to be this way.

Every engineering leader reading this can:

  • Request a dependency audit
  • Calculate remediation vs. prevention costs
  • Present to executives
  • Get budget for OSS sustainability
  • Start sponsoring critical dependencies

Luis did it. I did it. It’s possible. It’s necessary. It’s overdue.

The Alternative

If we don’t fix this:

  • More maintainers will quit
  • More projects will go EOL
  • More companies will face $5-10M remediation costs
  • More security vulnerabilities will be unfixable
  • The whole open source ecosystem will become less sustainable

We built trillion-dollar industries on open source. We can afford to fund it properly.

The question is: Will we act before the crisis, or pay 10x more after?

Sam, Luis, Rachel - thank you for these posts. Sharing with my entire network of engineering leaders. This conversation needs to happen in every boardroom.

And to everyone else: What are YOU doing about this? Because doing nothing is a choice, and it’s the expensive one.