As Director of Engineering at a major financial services company, I’m watching a collision course develop: AI coding assistants are dramatically accelerating how fast developers add dependencies, while our security processes are still calibrated for human-speed development. We’re not ready for this.
The AI Acceleration Problem
Since we rolled out GitHub Copilot and Claude across our engineering teams 9 months ago, I’ve been tracking some concerning metrics:
Dependency Introduction Rate:
- Q4 2024 (pre-AI): Average 8 new dependencies per team per quarter
- Q1 2025 (early AI adoption): Average 15 new dependencies per team per quarter
- Q2 2025 (widespread AI): Average 24 new dependencies per team per quarter
We’re adding dependencies 3x faster than we were before the AI rollout. Our security review processes haven’t scaled to match.
How AI Changes Developer Behavior
Here’s what I’m observing on the ground:
Scenario 1: The Suggested Package
Developer: “I need to parse dates in different formats”
Copilot: Suggests moment.js with full implementation
Developer: Accepts suggestion, adds moment.js to package.json
What didn’t happen:
- Research if we already have a date parsing library
- Check if moment.js is maintained (it’s deprecated!)
- Consider built-in alternatives (Intl.DateTimeFormat)
- Review the security history of the package
The speed of acceptance bypasses normal developer diligence.
Scenario 2: The Phantom Package
This actually happened on my team:
Developer: “Need to validate email addresses”
Claude: Suggests a package, with full implementation
Developer: Tries to install, package doesn’t exist
Claude: Suggests an alternative instead
Developer: Installs, builds, deploys
Security review later: the alternative is a typosquatting package mimicking the legitimate one. We deployed malicious code to staging.
The AI suggested a package that looked legitimate but wasn’t. And because the AI suggestion felt authoritative, the developer trusted it without verification.
Scenario 3: The Transitive Explosion
Developer: Adds one innocent-looking AI-suggested package
Reality: That package has 47 transitive dependencies, including 3 with known CVEs and 1 that was published 2 weeks ago by an unknown author.
Developers don’t see the transitive dependency tree. AI certainly doesn’t warn about it.
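The fan-out is easy to demonstrate. Below is a minimal sketch of computing the transitive closure of one direct dependency; the lockfile mapping and package names are entirely hypothetical, and a real tool would parse `package-lock.json` or equivalent rather than a hand-built dict:

```python
def transitive_closure(graph: dict[str, list[str]], root: str) -> set[str]:
    """Return every package reachable from `root`, excluding root itself."""
    seen: set[str] = set()
    stack = list(graph.get(root, []))
    while stack:
        pkg = stack.pop()
        if pkg not in seen:
            seen.add(pkg)
            stack.extend(graph.get(pkg, []))
    return seen

# Hypothetical, simplified lockfile: one "innocent" direct dependency
# pulls in everything reachable below it.
lockfile = {
    "nice-date-lib": ["parser-core", "tz-data"],
    "parser-core": ["string-utils", "left-pad-ng"],
    "tz-data": ["string-utils"],
}
deps = transitive_closure(lockfile, "nice-date-lib")
print(len(deps))  # 4 packages behind a single direct dependency
```

Run against a real lockfile, the same walk is what turns "one package" into 47.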
The Five New Risk Categories
AI-accelerated development introduces specific supply chain risks:
1. Phantom Packages (AI Hallucinations)
AI models sometimes suggest packages that don’t exist or mix up package names. Developers who trust the AI can end up searching for these packages and installing typosquatting alternatives.
Mitigation: Package registry verification in CI/CD. Flag any package published <90 days ago or from new publishers.
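The age gate itself is simple policy logic. In this sketch the publish metadata is passed in directly so the policy is testable; in a real pipeline it would come from the registry (for npm, the package metadata’s `time` field), and the 90-day threshold is the illustrative one from above:

```python
from datetime import datetime, timedelta

MIN_PACKAGE_AGE = timedelta(days=90)

def flag_package(published: datetime, publisher_first_seen: datetime,
                 now: datetime) -> list[str]:
    """Return policy violations for a candidate dependency."""
    flags = []
    if now - published < MIN_PACKAGE_AGE:
        flags.append("package published <90 days ago")
    if now - publisher_first_seen < MIN_PACKAGE_AGE:
        flags.append("publisher account <90 days old")
    return flags

now = datetime(2025, 7, 1)
# Recently published package from a long-established publisher:
print(flag_package(datetime(2025, 6, 20), datetime(2023, 1, 1), now))
```

A CI job would fail (or require explicit sign-off) whenever the returned list is non-empty.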
2. Malicious Package Injection
Attackers are creating packages with names similar to commonly AI-suggested packages. They know developers using AI are less likely to carefully verify package sources.
Mitigation: Curated internal package registries. Only approved packages can be installed.
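A curated registry can also screen new requests for lookalike names. One cheap heuristic is edit distance against the approved list, since typosquats usually sit one or two edits from the real name; the approved set below is illustrative:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete
                           cur[j - 1] + 1,       # insert
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

APPROVED = {"lodash", "express", "moment"}  # illustrative allowlist

def typosquat_suspects(name: str) -> list[str]:
    """Approved packages this candidate name is suspiciously close to."""
    return [p for p in APPROVED if 0 < edit_distance(name, p) <= 2]

print(typosquat_suspects("lodahs"))  # ['lodash']
```

An exact match (distance 0) is the approved package itself and passes; anything within two edits gets held for human review.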
3. Outdated Vulnerable Dependencies
AI training data often includes older code examples. Copilot might suggest libraries that were popular in 2020 but are now deprecated with known vulnerabilities (like moment.js).
Mitigation: SCA tools with automatic fix suggestions. Block builds with high/critical CVEs.
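The blocking rule can also respect scope, which addresses the test-vs-production complaint later in this post. The finding shape below is hypothetical; real SCA tools emit their own JSON schemas, so this is the gate logic only:

```python
BLOCKING = {"high", "critical"}

def should_block(findings: list[dict]) -> bool:
    """Fail the build only on high/critical findings outside test scope."""
    return any(f["severity"] in BLOCKING and f.get("scope") != "test"
               for f in findings)

findings = [
    {"package": "old-xml-lib", "severity": "critical", "scope": "prod"},
    {"package": "mock-server", "severity": "high", "scope": "test"},
]
print(should_block(findings))  # True: the critical prod finding blocks
```

Filtering out test-scoped findings before blocking is one way to cut the false-positive noise that trains developers to ignore alerts.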
4. License Compliance Landmines
AI doesn’t understand your company’s license policies. It might suggest a GPL package when you can only use MIT/Apache.
Mitigation: Automated license scanning in CI/CD. Reject builds with non-compliant licenses.
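The license gate is the simplest of these checks: an allowlist of SPDX identifiers and a diff against the resolved dependency set. The allowlist below reflects the MIT/Apache constraint mentioned above and is illustrative:

```python
ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # illustrative policy

def license_violations(deps: dict[str, str]) -> dict[str, str]:
    """Map of package -> license for every non-allowlisted dependency."""
    return {pkg: lic for pkg, lic in deps.items() if lic not in ALLOWED}

deps = {"left-pad-ng": "GPL-3.0", "lodash": "MIT"}
print(license_violations(deps))  # {'left-pad-ng': 'GPL-3.0'}
```

A CI step rejects the build whenever the returned map is non-empty, listing exactly which packages and licenses failed.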
5. Transitive Dependency Cascade
One AI-suggested package can introduce dozens of transitive dependencies, each with their own security and license implications.
Mitigation: Reachability analysis to understand which dependencies actually get used. Dependency pruning.
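At its core, pruning is a reachability question: starting from modules the application actually imports, which installed packages are never reached? The graph and entry point below are hypothetical; real reachability analysis works on parsed source, not a hand-built dict:

```python
def reachable(graph: dict[str, list[str]], roots: set[str]) -> set[str]:
    """Every node reachable from the given roots, roots included."""
    seen = set(roots)
    stack = list(roots)
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

graph = {
    "app": ["http-lib", "date-lib"],
    "http-lib": ["tls-shim"],
    "date-lib": ["tz-data"],
    "unused-lib": ["heavy-dep"],
}
installed = set(graph) | {"tls-shim", "tz-data", "heavy-dep"}
prunable = installed - reachable(graph, {"app"})
print(sorted(prunable))  # ['heavy-dep', 'unused-lib']
```

Everything in the prunable set is attack surface that ships without ever being exercised.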
What We’re Doing (and It’s Not Enough)
Current mitigations we’ve implemented:
1. Software Composition Analysis (SCA) Tools
Running Snyk and WhiteSource (now Mend) on every PR. Problems:
- High false positive rate (developers ignore alerts)
- Can’t distinguish between “vulnerable package in test code” vs “vulnerable package in production”
- Slow scans delay PR merges
2. Internal Package Registry (Artifactory)
Curated allowlist of approved packages. Problems:
- Slows down development (request, review, approve process)
- Package approval backlog growing faster than we can review
- Developers find workarounds (copying code directly instead of using packages)
3. Automated Dependency Updates (Renovate)
Keeps dependencies current. Problems:
- Update fatigue - 30+ PRs per week across teams
- Breaking changes require manual intervention
- Security updates mixed with feature updates (hard to prioritize)
4. Code Review Requirements
Require review of package.json/requirements.txt changes. Problems:
- Reviewers don’t have context to evaluate package safety
- “Looks fine to me” reviews because nobody wants to be the blocker
- AI-generated code looks professional, so reviewers assume it’s vetted
These mitigations slow us down but don’t make us substantially safer.
The Regulatory Pressure
Making this more urgent: Regulatory requirements are tightening.
EU Digital Operational Resilience Act (DORA) - Applicable since January 2025:
Requires financial institutions to have comprehensive ICT risk management, including third-party and software supply chain risk.
NIST Secure Software Development Framework (SSDF):
Calls for documented supply chain risk management, and US federal procurement rules built on it require suppliers to attest to these practices, including producing an SBOM (Software Bill of Materials) for applications.
Our regulators are asking questions like:
- “How do you verify the integrity of third-party dependencies?”
- “What’s your process for responding to supply chain vulnerabilities?”
- “Can you produce an SBOM for your applications?”
These are reasonable questions. Our answers are increasingly inadequate as AI accelerates dependency introduction.
The Gap: Tools Lag Behind AI Coding Speed
The fundamental problem: Security tools were designed for human-speed development. Developers would thoughtfully add dependencies after research and discussion.
Now developers add dependencies at AI suggestion speed - seconds, not hours. Our security tools can’t keep up.
What we need:
- Real-time package risk scoring in the IDE (before commit)
- AI-aware SCA tools that understand AI-suggested packages are higher risk
- Automatic SBOM generation integrated into CI/CD
- Runtime monitoring that detects unexpected dependency behavior
Most of these don’t exist yet, or exist but aren’t mature enough for regulated enterprise use.
The Cultural Challenge
Beyond tools, this requires cultural change:
Old mindset: “Trust developers to make good decisions about dependencies”
New reality: “Developers are accepting AI suggestions without full verification”
Required shift: “Treat AI-suggested dependencies as untrusted input that requires verification”
But how do we do that without killing the productivity gains AI provides? If we make dependency approval so onerous that it takes 2 weeks, developers will just copy-paste code instead (even worse for security).
The AI Agent Future
This is about to get worse. Current AI assistants make suggestions that developers accept or reject.
Future AI agents will autonomously:
- Add dependencies to solve problems
- Update dependencies to fix vulnerabilities
- Refactor code that introduces new dependencies
When AI agents operate autonomously in CI/CD pipelines, the dependency introduction rate will increase 10x, not 3x. We’re absolutely not ready for that.
Questions for This Community
For security professionals: How are you adapting supply chain security for AI-accelerated development? What tools or processes are working?
For developers: Are you more or less careful about dependencies when accepting AI suggestions? Do you even notice when AI adds packages?
For engineering leaders: How do you balance AI productivity gains with supply chain security risk? Where’s the acceptable risk/reward trade-off?
For anyone: Are there emerging tools or practices that address AI-era supply chain security? What should we be evaluating?
What I’m Most Worried About
We’re sitting on a supply chain time bomb. Every AI-suggested package is a potential entry point for:
- Malicious code injection
- Data exfiltration
- Cryptomining
- Ransomware
- Supply chain attacks on our downstream customers
And because AI makes adding dependencies so frictionless, we’re accumulating risk faster than we can assess it.
Financial services is a high-value target. If attackers figure out how to systematically compromise AI-suggested packages, we’re in serious trouble.
How do we secure AI-accelerated development without killing the productivity gains that make AI valuable?