I had a moment last week that genuinely unsettled me.
I was building a new accessibility audit component for a side project. Asked my AI coding assistant to help wire up some SVG parsing logic. It confidently suggested I install a package called svg-accessibility-parser. Looked legit—reasonable name, plausible API. I was halfway through npm install when something felt off. I checked npm. The package does not exist.
My AI hallucinated a dependency name. And I almost installed it without thinking.
This Has a Name Now: Slopsquatting
Turns out researchers have been studying this exact pattern. It is called slopsquatting—a cousin of typosquatting, but instead of betting on human typos, attackers bet on machine hallucinations.
Here is how it works:
- LLMs predict statistically likely next tokens, so when you ask for help, they sometimes suggest package names that sound right but do not actually exist
- A USENIX study found that roughly 20% of AI-generated code samples reference non-existent packages, and 43% of hallucinated names are reproduced consistently—meaning they are predictable
- Attackers study which fake names appear frequently, register them on npm/PyPI, and wait
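The cheapest defense I know of is to check whether a suggested package even exists, and how recently it was registered, before touching npm install. Something like this minimal sketch (it assumes Node 18+ for the global fetch and the public npm registry endpoint; the 90-day threshold is just my own heuristic, not an established rule):

```typescript
// Pre-install sanity check for an AI-suggested package name.
// Assumes Node 18+ (global fetch) and the public npm registry.
async function vetPackage(name: string): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);

  if (res.status === 404) {
    console.log(`"${name}" does not exist on npm. Likely a hallucination.`);
    return;
  }
  if (!res.ok) throw new Error(`registry lookup failed: ${res.status}`);

  const meta = (await res.json()) as {
    time?: Record<string, string>;
    "dist-tags"?: Record<string, string>;
  };

  const created = meta.time?.created ? new Date(meta.time.created) : null;
  const ageDays = created ? (Date.now() - created.getTime()) / 86_400_000 : NaN;

  console.log(`"${name}" exists (latest: ${meta["dist-tags"]?.latest ?? "?"})`);
  // A freshly registered package squatting on a plausible name is exactly
  // the slopsquatting pattern. 90 days is an arbitrary cutoff.
  if (ageDays < 90) {
    console.log(`Registered only ~${Math.round(ageDays)} days ago. Be suspicious.`);
  }
}

vetPackage(process.argv[2] ?? "svg-accessibility-parser").catch(console.error);
```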
This is not theoretical. A security researcher at Lasso Security documented that AI models repeatedly hallucinated a Python package called huggingface-cli. He registered it as an empty package on PyPI, and it drew 30,000+ genuine downloads in three months. Another hallucinated package, react-codeshift, spread across 237 GitHub repositories through AI-generated agent skills without a human ever reviewing the install command.
The LiteLLM Attack Made It Real
If slopsquatting is the slow-burn threat, the LiteLLM supply chain attack from March 2026 was the five-alarm fire.
Quick recap for anyone who missed it:
- LiteLLM is an AI proxy library downloaded roughly 3.4 million times per day
- Threat actors (TeamPCP) compromised LiteLLM’s CI/CD pipeline by poisoning a Trivy GitHub Action used in their security scanning workflow
- They exfiltrated the PyPI publish token from the GitHub Actions runner
- Published two backdoored versions (1.82.7 and 1.82.8) that were live for about 40 minutes
- The malicious payload (a .pth file, which Python executes at the startup of every interpreter process) could exfiltrate SSL/SSH keys, cloud credentials, Kubernetes configs, crypto wallets, API keys, basically everything
- Over 40,000 downloads of the compromised versions before PyPI quarantined them
The irony? They compromised the security scanner to compromise the package. And LiteLLM is present in 36% of cloud environments.
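One boring but effective mitigation for that particular vector is the one I come back to later: reference third-party actions by full commit SHA, never by a mutable tag. A rough scanner for unpinned references might look like this sketch (it assumes workflows live under .github/workflows and treats any ref that is not 40 hex characters as unpinned):

```typescript
// Flag GitHub Actions referenced by mutable tags/branches instead of commit SHAs.
// Assumes workflows live under .github/workflows; a 40-hex ref counts as pinned.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const dir = ".github/workflows";
const pinnedSha = /^[0-9a-f]{40}$/;
const usesRef = /^\s*(?:-\s*)?uses:\s*([^\s@#]+)@([^\s#]+)/;

for (const file of readdirSync(dir)) {
  if (!/\.ya?ml$/i.test(file)) continue;
  readFileSync(join(dir, file), "utf8")
    .split("\n")
    .forEach((line, i) => {
      const m = usesRef.exec(line);
      if (m && !pinnedSha.test(m[2])) {
        console.log(`${file}:${i + 1}  ${m[1]}@${m[2]} is not pinned to a commit SHA`);
      }
    });
}
```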
AI Agents Make This Worse
Here is what really worries me. Research analyzing 117,000+ dependency changes across thousands of GitHub repos found that AI agents choose versions with known CVEs 50% more often than humans. And the vulnerable versions they pick tend to require larger, more disruptive upgrades to fix.
Now combine that with autonomous coding agents that install dependencies, run builds, and open PRs without human involvement. You have software that:
- Hallucinates package names 20% of the time
- Picks vulnerable versions when the package does exist
- Operates with enough permissions to execute arbitrary code
And we are handing it commit access.
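At minimum, anything an agent proposes should be checked against a vulnerability database before it lands. Here is a sketch against the public OSV API (assumes Node 18+; lodash 4.17.15 is just a deliberately old, known-vulnerable version to smoke-test with):

```typescript
// Ask the public OSV database whether a specific version has known advisories
// before an agent (or a human) installs it. Assumes npm packages; for Python,
// the ecosystem field would be "PyPI".
interface OsvResponse {
  vulns?: { id: string; summary?: string }[];
}

async function hasKnownVulns(name: string, version: string): Promise<boolean> {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ version, package: { name, ecosystem: "npm" } }),
  });
  if (!res.ok) throw new Error(`OSV query failed: ${res.status}`);

  const { vulns = [] } = (await res.json()) as OsvResponse;
  for (const v of vulns) {
    console.log(`${name}@${version}: ${v.id} ${v.summary ?? ""}`);
  }
  return vulns.length > 0;
}

// Exit non-zero when advisories exist, so this can gate a CI step.
hasKnownVulns("lodash", "4.17.15").then((bad) => process.exit(bad ? 1 : 0));
```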
What I Changed After My Scare
I am not going to pretend I have this figured out. But after my svg-accessibility-parser moment, I made a few changes:
- I verify every AI-suggested dependency manually. Yes, every single one. I open the npm/PyPI page, check the repo, check the download count, check the last publish date
- I added a lockfile diff review step to our team's PR process: any new dependency addition gets flagged (a sketch of this check follows after this list)
- I pinned all our GitHub Actions to specific commit SHAs instead of tags (the LiteLLM attack exploited unpinned Trivy)
- I run npm audit / pip audit as a blocking CI step, not just an informational one
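The lockfile diff step is smaller than it sounds. The core is just diffing the dependency sets of the base branch's lockfile against the PR's. A sketch, assuming npm lockfile v2/v3 (the top-level packages map); the file paths here are placeholders for whatever your CI checks out:

```typescript
// List dependencies present in the PR's lockfile but not in the base branch's.
// Assumes npm lockfile v2/v3 format (top-level "packages" map).
import { readFileSync } from "node:fs";

function depNames(path: string): Set<string> {
  const lock = JSON.parse(readFileSync(path, "utf8")) as {
    packages?: Record<string, unknown>;
  };
  return new Set(
    Object.keys(lock.packages ?? {})
      .filter((key) => key.includes("node_modules/"))
      // "node_modules/@scope/pkg" -> "@scope/pkg"; nested paths keep the last segment
      .map((key) => key.split("node_modules/").pop()!)
  );
}

const base = depNames(process.argv[2] ?? "base-lock.json");
const head = depNames(process.argv[3] ?? "package-lock.json");

const added = [...head].filter((name) => !base.has(name));
if (added.length > 0) {
  console.log("New dependencies in this PR; verify each before merging:");
  for (const name of added) console.log(`  ${name}`);
  process.exit(1); // block the check so a human has to look
}
```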
But honestly? I still feel like we are applying band-aids to a structural problem. The tools we use to write code are now introducing attack surface that our existing security processes were not designed for.
Questions I Am Sitting With
- How are your teams handling AI-suggested dependencies? Is anyone doing systematic verification?
- Should package registries (npm, PyPI) build hallucination-aware defenses? Like flagging recently-registered packages that match known hallucination patterns?
- Are we going to need a fundamentally different dependency governance model for AI-assisted development?
- For those using autonomous coding agents (Cursor, Devin, etc.)—what guardrails do you have on dependency installation?
I keep thinking about how my design systems work depends on dozens of npm packages. If even one of them gets compromised through a supply chain attack or a slopsquatting registration… the blast radius is not just my side project. It is every product team consuming our component library.
Would love to hear how others are thinking about this. Especially from folks managing larger engineering orgs—is this on your risk radar yet?