Supply Chain Security in the AI Coding Era: New Risks We're Not Ready For

As Director of Engineering at a major financial services company, I’m watching a collision course develop: AI coding assistants are dramatically accelerating how fast developers add dependencies, while our security processes are still calibrated for human-speed development. We’re not ready for this.

The AI Acceleration Problem

Since we rolled out GitHub Copilot and Claude across our engineering teams 9 months ago, I’ve been tracking some concerning metrics:

Dependency Introduction Rate:

  • Q4 2024 (pre-AI): Average 8 new dependencies per team per quarter
  • Q1 2025 (early AI adoption): Average 15 new dependencies per team per quarter
  • Q2 2025 (widespread AI): Average 24 new dependencies per team per quarter

We’re adding dependencies 3x faster than we were a year ago. Our security review processes haven’t scaled to match.

How AI Changes Developer Behavior

Here’s what I’m observing on the ground:

Scenario 1: The Suggested Package

Developer: “I need to parse dates in different formats”

Copilot: Suggests moment.js with full implementation

Developer: Accepts suggestion, adds moment.js to package.json

What didn’t happen:

  • Research if we already have a date parsing library
  • Check if moment.js is maintained (it’s deprecated!)
  • Consider built-in alternatives (Intl.DateTimeFormat)
  • Review the security history of the package

The speed of acceptance bypasses normal developer diligence.

Scenario 2: The Phantom Package

This actually happened on my team:

Developer: “Need to validate email addresses”

Claude: Suggests a package, complete with implementation

Developer: Tries to install, package doesn’t exist

Claude: Suggests an alternative package instead

Developer: Installs, builds, deploys

Security review later: the package we installed is a typosquat of the legitimate one. We deployed malicious code to staging.

The AI suggested a package that looked legitimate but wasn’t. And because the AI suggestion felt authoritative, the developer trusted it without verification.

Scenario 3: The Transitive Explosion

Developer: Adds one innocent-looking AI-suggested package

Reality: That package has 47 transitive dependencies, including 3 with known CVEs and 1 that was published 2 weeks ago by an unknown author.

Developers don’t see the transitive dependency tree. AI certainly doesn’t warn about it.

The Five New Risk Categories

AI-accelerated development introduces specific supply chain risks:

1. Phantom Packages (AI Hallucinations)

AI models sometimes suggest packages that don’t exist or mix up package names. Developers who trust the AI can end up searching for these packages and installing typosquatting alternatives.

Mitigation: Package registry verification in CI/CD. Flag any package published <90 days ago or from new publishers.

2. Malicious Package Injection

Attackers are creating packages with names similar to commonly AI-suggested packages. They know developers using AI are less likely to carefully verify package sources.

Mitigation: Curated internal package registries. Only approved packages can be installed.

3. Outdated Vulnerable Dependencies

AI training data often includes older code examples. Copilot might suggest libraries that were popular in 2020 but are now deprecated with known vulnerabilities (like moment.js).

Mitigation: SCA tools with automatic fix suggestions. Block builds with high/critical CVEs.

4. License Compliance Landmines

AI doesn’t understand your company’s license policies. It might suggest a GPL package when you can only use MIT/Apache.

Mitigation: Automated license scanning in CI/CD. Reject builds with non-compliant licenses.

5. Transitive Dependency Cascade

One AI-suggested package can introduce dozens of transitive dependencies, each with their own security and license implications.

Mitigation: Reachability analysis to understand which dependencies actually get used. Dependency pruning.
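You can get a feel for the blast radius with a simple walk over a resolved dependency graph. Here a plain dict with made-up package names stands in for a parsed lockfile; real tooling would read package-lock.json or its equivalent:

```python
def transitive_closure(graph: dict[str, list[str]], root: str) -> set[str]:
    """All packages pulled in, directly or transitively, by installing `root`."""
    seen: set[str] = set()
    stack = [root]
    while stack:
        pkg = stack.pop()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Toy graph: one "innocent" package fanning out.
graph = {
    "innocent-pkg": ["util-a", "util-b"],
    "util-a": ["left-pad-ish", "parser-x"],
    "util-b": ["parser-x", "brand-new-pkg"],
}
print(len(transitive_closure(graph, "innocent-pkg")))  # 5
```

Even in this toy example, one direct dependency quietly brings in five packages; real fan-outs are far larger.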

What We’re Doing (and It’s Not Enough)

Current mitigations we’ve implemented:

1. Software Composition Analysis (SCA) Tools

Running Snyk and WhiteSource on every PR. Problems:

  • High false positive rate (developers ignore alerts)
  • Can’t distinguish between “vulnerable package in test code” vs “vulnerable package in production”
  • Slow scans delay PR merges

2. Internal Package Registry (Artifactory)

Curated allowlist of approved packages. Problems:

  • Slows down development (request, review, approve process)
  • Package approval backlog growing faster than we can review
  • Developers find workarounds (copying code directly instead of using packages)

3. Automated Dependency Updates (Renovate)

Keeps dependencies current. Problems:

  • Update fatigue: 30+ PRs per week across teams
  • Breaking changes require manual intervention
  • Security updates mixed with feature updates (hard to prioritize)

4. Code Review Requirements

Require review of package.json/requirements.txt changes. Problems:

  • Reviewers don’t have context to evaluate package safety
  • “Looks fine to me” reviews because nobody wants to be the blocker
  • AI-generated code looks professional, so reviewers assume it’s vetted

These mitigations slow us down but don’t make us substantially safer.

The Regulatory Pressure

Making this more urgent: Regulatory requirements are tightening.

EU Digital Operational Resilience Act (DORA), applicable since January 2025:
Requires financial institutions to have comprehensive ICT risk management, including third-party and software supply chain risk.

NIST Secure Software Development Framework (SSDF):
Prescribes documented supply chain risk management practices, with an SBOM (Software Bill of Materials) increasingly expected for all applications.

Our regulators are asking questions like:

  • “How do you verify the integrity of third-party dependencies?”
  • “What’s your process for responding to supply chain vulnerabilities?”
  • “Can you produce an SBOM for your applications?”

These are reasonable questions. Our answers are increasingly inadequate as AI accelerates dependency introduction.

The Gap: Tools Lag Behind AI Coding Speed

The fundamental problem: Security tools were designed for human-speed development. Developers would thoughtfully add dependencies after research and discussion.

Now developers add dependencies at AI-suggestion speed: seconds, not hours. Our security tools can’t keep up.

What we need:

  • Real-time package risk scoring in the IDE (before commit)
  • AI-aware SCA tools that understand AI-suggested packages are higher risk
  • Automatic SBOM generation integrated into CI/CD
  • Runtime monitoring that detects unexpected dependency behavior
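Of the items above, automatic SBOM generation is the most tractable today: the CycloneDX JSON shape is simple enough to emit from a resolved dependency list. A minimal sketch (the package versions are examples, and a real pipeline should use a dedicated generator rather than hand-rolling this):

```python
import json

def make_sbom(components: list[tuple[str, str]]) -> str:
    """Emit a minimal CycloneDX-style SBOM for (name, version) pairs."""
    bom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                "purl": f"pkg:npm/{name}@{version}",
            }
            for name, version in components
        ],
    }
    return json.dumps(bom, indent=2)

# In CI: generate on every build and archive alongside the artifact,
# so an auditor's "can you produce an SBOM?" has a one-word answer.
print(make_sbom([("express", "4.19.2"), ("lodash", "4.17.21")]))
```
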

Most of these don’t exist yet, or exist but aren’t mature enough for regulated enterprise use.

The Cultural Challenge

Beyond tools, this requires cultural change:

Old mindset: “Trust developers to make good decisions about dependencies”

New reality: “Developers are accepting AI suggestions without full verification”

Required shift: “Treat AI-suggested dependencies as untrusted input that requires verification”

But how do we do that without killing the productivity gains AI provides? If we make dependency approval so onerous that it takes 2 weeks, developers will just copy-paste code instead (even worse for security).

The AI Agent Future

This is about to get worse. Current AI assistants make suggestions that developers accept or reject.

Future AI agents will autonomously:

  • Add dependencies to solve problems
  • Update dependencies to fix vulnerabilities
  • Refactor code that introduces new dependencies

When AI agents operate autonomously in CI/CD pipelines, the dependency introduction rate will 10x, not 3x. We’re absolutely not ready for that.

Questions for This Community

For security professionals: How are you adapting supply chain security for AI-accelerated development? What tools or processes are working?

For developers: Are you more or less careful about dependencies when accepting AI suggestions? Do you even notice when AI adds packages?

For engineering leaders: How do you balance AI productivity gains with supply chain security risk? Where’s the acceptable risk/reward trade-off?

For anyone: Are there emerging tools or practices that address AI-era supply chain security? What should we be evaluating?

What I’m Most Worried About

We’re sitting on a supply chain time bomb. Every AI-suggested package is a potential entry point for:

  • Malicious code injection
  • Data exfiltration
  • Cryptomining
  • Ransomware
  • Supply chain attacks on our downstream customers

And because AI makes adding dependencies so frictionless, we’re accumulating risk faster than we can assess it.

Financial services is a high-value target. If attackers figure out how to systematically compromise AI-suggested packages, we’re in serious trouble.

How do we secure AI-accelerated development without killing the productivity gains that make AI valuable?

I’m guilty of everything Luis described. Let me be honest about my developer behavior with AI assistants and dependencies.

My Honest Confession

In the last month, I’ve used Copilot to add approximately 15 new packages across 3 projects. How many did I carefully review before installing?

Maybe 2.

The rest? I trusted the AI suggestion because:

  1. The code worked in local testing
  2. The package name looked legitimate
  3. I was in flow state and didn’t want to context-switch to research
  4. The auto-complete felt authoritative

This is shameful to admit, but I suspect I’m not alone. AI has trained me to trust its suggestions without verification.

Why Developers Skip Due Diligence

Luis asked: “Are you more or less careful about dependencies when accepting AI suggestions?”

Honest answer: Much less careful. Here’s why:

1. The Suggestion Feels Pre-Vetted

When Copilot suggests a package, my brain interprets it as “GitHub/Microsoft has already validated this is safe.” That’s not true, but it FEELS true.

Traditional workflow: I search npm, read docs, check GitHub stars, review security advisories, THEN install.

AI-assisted workflow: AI suggests, I install. The research step disappears.

2. Speed is the Whole Point

I use AI to go faster. If I have to stop and audit every AI suggestion, I’m back to manual speed. That defeats the purpose.

The productivity gain from AI assistants comes from NOT second-guessing them. But that’s exactly the security problem.

3. I Don’t Even Notice the Package Additions

Sometimes AI suggestions include package imports I didn’t explicitly request.

Copilot: “Here’s how to format that date”

Me: Accepts, runs the code, notices it needs moment, runs npm install

I didn’t decide to add moment.js. Copilot made that decision, and I followed along.

This is insidious. Package decisions are being made by AI, with humans as rubber stamps.

The Phantom Package Story is Terrifying

Luis’s story about the typosquatting package hit me hard. That could easily be me.

Here’s what would have happened in my workflow:

  1. AI suggests a package
  2. I install it
  3. It works in my tests
  4. I commit, open PR, it passes CI
  5. Code ships to production

At what point would I have discovered it’s malicious? Probably never, until a security incident.

I don’t read the source code of packages I install. Do you? Nobody has time for that.

Do I Even Notice When AI Adds Packages?

Embarrassing answer: Not always.

I review my PRs before submitting, but I’m mostly looking at the features I built, not the package.json diff.

If the package.json diff just shows a new entry, I think "oh, needed a utility library" and move on.

Luis’s point about AI-generated code looking professional is key. Everything Copilot suggests looks like it was written by a competent engineer. There’s no visual signal that says “this might be unsafe.”

What Would Actually Change My Behavior

For me to be more careful about AI-suggested dependencies, I need:

1. Real-Time Warnings in the IDE

If my IDE showed a warning the moment I typed the install command:

“⚠️ WARNING: This package was published 3 days ago by a new publisher. High risk of malicious code. Proceed with caution.”

That would make me pause and investigate.

Sam mentioned Socket.dev and Snyk IDE extensions. I’m going to install those today.

2. AI That Explains Its Reasoning

Instead of a bare suggestion that silently names a package, I want context about WHY this package was chosen and what the trade-offs are.

3. Automatic Security Review in PR Template

Our PR template could include:

This forces me to look at what I’m adding.
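For example, a hypothetical checklist (the specific items are illustrative, not our actual template):

```markdown
## Dependency Review
- [ ] New packages added in this PR: (list them, or "none")
- [ ] Each new package is >90 days old or from an established publisher
- [ ] Checked whether existing dependencies already cover this functionality
- [ ] License is compatible with our policy
```
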

4. Team Culture Shift

Right now, “accepting Copilot suggestions without review” is the norm on my team. Nobody questions it.

We need a culture shift: “AI suggestions are helpful but require human verification, especially for dependencies.”

Maybe in sprint retros, ask: “Did anyone add dependencies this sprint? Did you vet them?”

Make it a normal part of developer practice.

The Productivity Trap

Here’s the fundamental tension:

AI makes me 30% faster at writing code.

Properly vetting dependencies would make me 20% slower.

Net result: 10% faster with worse security.

Luis asked how to balance AI productivity with supply chain risk. I don’t have a good answer.

I WANT to be secure. But I also want to ship features fast. When those conflict, velocity usually wins.

Maybe the answer is: We need tools that make security as fast as AI makes coding. If security checks happen automatically in the background without slowing me down, I’m more likely to pay attention.

One Thing I’ll Commit To

After reading this thread, I’m committing to:

New personal rule: Before installing any AI-suggested package, I will:

  1. Check npm package age (must be >90 days old OR >1M downloads/week)
  2. Check publisher reputation (must have other established packages)
  3. Check if we already have similar functionality in our codebase
  4. Document in commit message why I’m adding this specific package

This adds maybe 2 minutes per package. If that prevents one malicious dependency, it’s worth it.

Who else is willing to commit to this?

As VP Product, I’m looking at this from a risk vs velocity perspective. Luis’s concerns are valid, but I want to challenge some assumptions about how we think about supply chain risk.

The Business Case for AI-Accelerated Development

Let me be blunt: AI coding assistants are giving us 20-30% productivity gains. That’s massive. That’s the difference between hitting our Q2 roadmap or missing it.

From a product perspective, that velocity matters:

  • We can experiment faster (more MVPs, faster iteration)
  • We can respond to customer requests quicker
  • We can stay competitive with startups that are also using AI

Question: What’s the business cost of NOT using AI because of supply chain security concerns?

I’m not saying security doesn’t matter. I’m saying we need to quantify both sides of the trade-off.

Risk-Based Approach to Dependency Security

Not all dependencies carry equal risk. Not all features require equal security scrutiny.

Proposed framework:

Tier 1: Critical Security (Zero Tolerance)

  • Payment processing code
  • Authentication/authorization
  • User data handling (PII, PCI, PHI)
  • Core API infrastructure

Dependency policy: Strict approval, security review required, prefer mature packages (>2 years old), regular audits

AI assistant policy: Human review of all AI-suggested dependencies

Tier 2: Important (Balanced Approach)

  • Customer-facing features
  • Integration APIs
  • Business logic

Dependency policy: Standard security review, SCA scanning, approve within 2-3 days

AI assistant policy: IDE warnings for suspicious packages, but trust AI for well-established packages

Tier 3: Lower Risk (Fast Track)

  • Internal tools
  • Marketing pages
  • Development utilities
  • Experimental features

Dependency policy: Lightweight review, auto-approve known packages

AI assistant policy: Trust AI suggestions, post-hoc security scanning
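A tiering scheme like this is easy to encode as policy-as-code, so CI can route a manifest change to the right process automatically. A rough sketch; the path-to-tier mapping is invented for illustration:

```python
# Map code paths to risk tiers (illustrative patterns, not our real layout).
TIER_BY_PREFIX = {
    "services/payments/": 1,
    "services/auth/": 1,
    "services/api/": 2,
    "tools/": 3,
    "marketing/": 3,
}

POLICY = {
    1: "security review required before merge",
    2: "SCA scan; approve if no high/critical findings",
    3: "auto-approve known packages; post-hoc scan",
}

def policy_for(changed_file: str) -> str:
    """Pick the strictest matching tier's policy for a changed manifest file."""
    tiers = [t for prefix, t in TIER_BY_PREFIX.items()
             if changed_file.startswith(prefix)]
    tier = min(tiers, default=2)  # unknown paths default to the middle tier
    return POLICY[tier]

print(policy_for("services/payments/package.json"))  # security review required before merge
```

The point is that the decision is made by the path the change touches, not by whoever happens to review the PR.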

The Customer Impact Question

Here’s what I want to understand: Has anyone actually quantified customer impact from supply chain attacks?

Luis mentioned a multimillion-dollar incident caused by a deprioritized vulnerability. That’s real cost. But:

  • How many supply chain incidents happen per year in our industry?
  • What’s the average cost?
  • What’s the probability we’re affected?

Compare that to:

  • Cost of security processes that slow development 20%
  • Lost revenue from delayed features
  • Competitive disadvantage from slower iteration

I’m not minimizing security risk. I’m asking: Do we have data to make informed trade-offs?

When Supply Chain Security Affects Product Roadmap

Real scenario from last quarter:

Product: “We want to launch collaborative editing (Google Docs-like functionality)”

Engineering: “We’ll use Yjs library for CRDT-based synchronization”

Security: “Yjs has 12 dependencies, including some new packages. We need 3 weeks to review.”

Result: Feature delayed. Competitor shipped similar feature first. We lost a customer because of it.

Was the 3-week security review worth it? What was the actual risk vs the business cost?

Questions I Need Answered

1. False Positive Rate:

Luis mentioned high false positive rates in SCA tools. What percentage of “critical vulnerabilities” are actually exploitable in your specific context?

If 90% are false positives, we’re wasting time on security theater while building risk fatigue.

2. Reachability:

Of the vulnerable dependencies flagged, how many are actually reachable from untrusted input in your application?

Sam mentioned reachability analysis. This seems critical for separating real risk from theoretical risk.

3. Incident Rate:

How many security incidents per year come from vulnerable dependencies vs other attack vectors (phishing, credential theft, misconfigurations)?

If supply chain is 5% of incidents, maybe we’re over-investing in it relative to the risk.

4. Time to Exploit:

When a dependency vulnerability is disclosed, how long until it’s actively exploited?

If it takes 6 months to weaponize, we have time for standard patch cycles. If it’s exploited within days, we need faster response.

The Regulatory Compliance Question

Luis mentioned DORA and NIST SSDF. These are compliance requirements, not risk management.

Controversial take: Compliance requirements are often security theater.

They require documentation and process, but don’t necessarily improve security outcomes. They make auditors happy, but don’t prevent incidents.

I’ve seen teams spend months implementing compliance checklists while ignoring actual high-risk vulnerabilities because “that’s not in the compliance framework.”

How do we satisfy compliance WITHOUT building security processes that don’t actually reduce risk?

What I Want from Security Teams

For product-security collaboration on supply chain issues:

1. Risk Scoring That Includes Business Context:

Don’t just tell me “this dependency has a HIGH severity CVE.”

Tell me: “This vulnerability in your payment flow could allow attackers to bypass authentication and access customer payment data. URGENT.”

vs

“This vulnerability in your marketing site analytics could leak page view data. LOW PRIORITY.”

2. Clear Decision Criteria:

Give me a framework:

  • Dependencies in Tier 1 code: Require security review before approval
  • Dependencies in Tier 2 code: SCA scanning, approve if no high/critical
  • Dependencies in Tier 3 code: Fast-track approval, post-hoc scanning

This lets product teams make risk-informed decisions without security being a bottleneck.

3. Automated Security for 80% of Cases:

Most dependency decisions should be automated:

  • Well-established packages (>2 years old, >1M downloads) → Auto-approve
  • Packages already in use elsewhere in codebase → Auto-approve
  • Packages from trusted publishers → Fast-track review

Security team only reviews edge cases and high-risk scenarios.
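The auto-approve rules above reduce to a short predicate; anything that fails every rule falls through to human review. A sketch with illustrative thresholds and field names:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    age_days: int
    weekly_downloads: int
    publisher_trusted: bool
    already_in_codebase: bool

def auto_approve(c: Candidate) -> bool:
    """True if the dependency can skip manual security review."""
    if c.already_in_codebase:
        return True
    # "Well-established": >2 years old AND >1M weekly downloads.
    if c.age_days > 730 and c.weekly_downloads > 1_000_000:
        return True
    return False

def fast_track(c: Candidate) -> bool:
    """Trusted publishers get a lighter-weight review, not auto-approval."""
    return c.publisher_trusted and not auto_approve(c)
```
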

4. Quantified Risk Communication:

Instead of: “This is a critical vulnerability”

I want: “This vulnerability has a CVSS score of 8.5, affects 3 customer-facing endpoints processing PII, and has known exploits in the wild. Estimated cost of breach: up to $5M.”

This helps me make business decisions.

The AI Agent Future

Luis is right to be concerned about autonomous AI agents. But I have a different take:

AI agents might actually improve supply chain security.

Why? Because AI agents can:

  • Automatically update dependencies when vulnerabilities are found
  • Refactor code to remove unnecessary dependencies
  • Analyze reachability and prune unused transitives
  • Monitor for suspicious behavior in real-time

The key is: We need security-aware AI agents, not just productivity-focused ones.

Imagine an AI agent that:

  1. Suggests adding a dependency
  2. Checks internal registry and security policies
  3. Analyzes reachability and business context
  4. Makes risk-informed decision
  5. Documents decision for audit trail

This could be BETTER than human developers who skip due diligence (as Alex admitted).

My Proposed Approach

1. Tiered Security by Feature Risk

Apply different security requirements based on what the code does, not blanket policies.

2. Automated Security for Common Cases

Invest in tooling that auto-approves 80% of dependencies so security team focuses on high-risk 20%.

3. Quantified Risk Communication

Give product leaders data to make informed trade-offs between security and velocity.

4. Security-Aware AI Assistants

Configure AI to understand our security policies and make better suggestions.

5. Continuous Risk Monitoring

Rather than perfect prevention, detect and respond quickly to issues that slip through.

The Question Nobody Wants to Ask

Is perfect supply chain security even possible?

We have hundreds of dependencies. Each updates regularly. New vulnerabilities are discovered constantly. The attack surface is huge.

Maybe the goal isn’t “zero vulnerable dependencies.” Maybe it’s “acceptable risk with fast detection and response.”

How do we shift from trying to prevent all supply chain issues to building resilience when they inevitably occur?