The MCP Server Graveyard: When Your Agent's Dependencies Stop Shipping

10 min read
Tian Pan
Software Engineer

The last commit to the MCP server your agent calls every five minutes was eight months ago. The upstream API it wraps rolled out a new authentication model in February. There are 47 open issues, 12 of them flagged security. The maintainer's GitHub account hasn't shown activity since October. Your agent still connects, still receives tool descriptions, still executes calls — and silently, every one of those calls flows through a piece of infrastructure that nobody is watching.

This is the shape of MCP abandonment. Not a malicious rug pull, not a compromised package, just neglect. Somebody published a useful server in 2025, got adopted, then moved on. The server kept working because nothing forced it to break. Until it does — and by then, the trust boundary your agent was crossing every five minutes has already failed.

Most teams adopted community MCP servers the way they adopted npm packages: by running install and reading the README. That mental model makes sense for libraries that sit in your dependency tree, get audited at build time, and surface their deprecations through your package manager. It does not survive contact with MCP, where the dependency is a live trust boundary that the LLM invokes in a loop, with credentials, on production data.

Abandonment Is the Quiet Version of Supply Chain Risk

An active attack leaves traces. A malicious commit shows up in the diff. A credential exfiltration attempt hits your egress monitoring. A poisoned tool description sometimes shows up in prompt-injection detectors. There are signals, even if you're slow to read them.

Abandonment has no signal. The server still responds on its socket. Tool descriptions still render in the agent's context window. Every call still returns a 200 and a JSON body that looks right. The failure mode is not "the thing stopped working." It is "the thing silently stopped keeping up with the world it talks to."

Meanwhile, the world moves. The upstream API gets new required fields. A CVE in one of the server's transitive dependencies goes public. The protocol itself ships a new capability version. Security best practices for the category evolve. None of this trips an alert in your system, because none of it changes the shape of the socket between your agent and the orphaned process.

Recent industry research puts the average time-to-exploit for a disclosed CVE at five days. An MCP server that has been silent for sixty days has, by definition, missed every security update in that window — on its own code, on its runtime, and on the transitive chain beneath it. You are not getting a warning. You are running a known-vulnerable trust boundary.

The Base Rate Is Much Worse Than You Think

The npm ecosystem is the closest analogue to where MCP is heading, and the numbers are sobering. Academic research across the registry finds that 8.2% of the most downloaded packages are officially deprecated, and that expanding the definition to include archived repos and packages with no GitHub presence pushes the rate to roughly 15%. About 61% of packages on npm have had no release in the last twelve months. Packages untouched for over a thousand days are, in practice, unmaintained — and when their vulnerabilities surface, no patches will come.

Users download more than two billion deprecated packages from npm every week. The existence of a download number, a popularity score, or even a prominent listing in a registry tells you almost nothing about whether the code is being actively maintained.

The MCP ecosystem inherits this dynamic and compresses the timeline. By March 2026, the combined Python and TypeScript MCP SDK downloads had crossed ninety-seven million per month. Public registries index somewhere between twelve thousand and twenty thousand servers, though directories openly note that "many are forks, variants, or abandoned." Eighteen months of explosive growth sits on top of a protocol whose maturity curve has not yet bent. Most community servers are weekend projects by individual developers, shipped in a rush to claim a namespace. The statistical distribution of MCP maintainer attention is almost certainly worse than npm's, not better.

The graveyard metaphor is apt. A protocol this young, with a community this hungry, produces a graveyard whose headcount grows faster than its roster of gardeners.

The Failure Modes That Are Specific to MCP

Three things make MCP abandonment worse than a stale npm package.

Upstream API drift. Most community MCP servers are shims around someone else's API — Figma, Postmark, Linear, a database driver, a cloud console. The MCP server translates tool calls into API calls. When the wrapped API changes shape — new required fields, deprecated endpoints, altered response schemas, tightened authentication — an abandoned shim rots in place. Sometimes it returns plausible-looking errors. Sometimes it succeeds with subtly wrong semantics, because the LLM is forgiving about response formats and will confidently report success back to the user. The agent believes it sent the email, applied the migration, or closed the ticket. Nobody is watching the contract between the shim and the thing the shim wraps.
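One cheap defense against silent drift is to validate the shim's responses against the contract your agent actually depends on, rather than trusting a 200 and a plausible body. A minimal sketch, where the fields (`message_id`, `status`) are hypothetical stand-ins for an email-sending tool:

```python
# Sketch: validate a shim's response against the contract the agent relies
# on, so "succeeds with subtly wrong semantics" becomes a visible finding.
# EXPECTED_CONTRACT is illustrative -- substitute the fields you depend on.

EXPECTED_CONTRACT = {
    "message_id": str,  # must be present and a string
    "status": str,      # the field the agent reports back to the user
}

def check_response_contract(response: dict, contract: dict = EXPECTED_CONTRACT) -> list[str]:
    """Return human-readable drift findings; an empty list means OK."""
    findings = []
    for field, expected_type in contract.items():
        if field not in response:
            findings.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            findings.append(
                f"type drift on {field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}"
            )
    return findings

# A response that *looks* successful but dropped a field the agent needs:
drifted = check_response_contract({"status": "sent"})
healthy = check_response_contract({"message_id": "abc123", "status": "sent"})
```

Run a check like this at the boundary, not inside the model's loop: the point is to fail loudly before the LLM gets a chance to paper over the gap.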

The tool description contract freeze. The text of the MCP tool description is what the LLM reads when it decides whether and how to call the tool. In an abandoned server, that description gets out of sync with what the underlying API actually accepts. Parameter names drift. Rate limits change. A field that was optional becomes required. The description still promises the old behavior. The model calls it that way. The call fails, or worse, produces hallucinated-looking output because the model patched over the discrepancy with a plausible guess. Nobody is updating the description because nobody is updating anything.
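The description-freeze problem is diffable if you can get a current picture of what the upstream accepts, for example from the vendor's OpenAPI spec. A sketch with hypothetical parameter names:

```python
# Sketch: diff the parameters a frozen tool description advertises against
# what the upstream API accepts today. Names and values are hypothetical;
# "current" would come from the vendor's OpenAPI spec or changelog.

def describe_param_drift(described: dict, current: dict) -> list[str]:
    """Both args map parameter name -> {"required": bool}."""
    findings = []
    for name in described.keys() - current.keys():
        findings.append(f"described but no longer accepted: {name}")
    for name in current.keys() - described.keys():
        findings.append(f"accepted but undescribed: {name}")
    for name in described.keys() & current.keys():
        if not described[name]["required"] and current[name]["required"]:
            findings.append(f"optional in description, now required: {name}")
    return sorted(findings)

# A 2025-era description vs. the API after a year of changes:
frozen = {"to": {"required": True}, "cc": {"required": False}, "body": {"required": False}}
today  = {"to": {"required": True}, "body": {"required": True}, "reply_to": {"required": False}}
drift = describe_param_drift(frozen, today)
```

Every non-empty finding here is a place where the model will either fail the call or improvise, and improvisation is the worse outcome.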

Inherited CVE surface. An MCP server is not one dependency, it is a tree of them. The MCP runtime, the language SDK, the HTTP client, the auth library, the serialization layer, and every transitive dependency beneath them all ship their own security updates. An abandoned server is frozen at the versions it shipped with. Each month of silence extends the gap between its dependency graph and the one a responsible maintainer would have pulled forward. You are running someone's 2025 snapshot of a tree that received at least a dozen security-relevant updates in 2026 alone.
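The gap between a frozen dependency graph and a maintained one is easy to quantify once you compare the lock snapshot to current releases. A minimal sketch with made-up package names and versions:

```python
# Sketch: measure how far an abandoned server's frozen lockfile lags the
# versions a maintained project would be on. Both maps are hypothetical
# stand-ins for a 2025 lock snapshot and today's releases.

def count_stale_pins(pinned: dict[str, str], latest: dict[str, str]) -> int:
    """Count dependencies whose pinned version no longer matches latest."""
    return sum(1 for name, ver in pinned.items() if latest.get(name, ver) != ver)

pinned_2025 = {"http-client": "1.4.0", "auth-lib": "0.9.2", "serde-ish": "2.1.0"}
latest_2026 = {"http-client": "1.9.3", "auth-lib": "0.9.2", "serde-ish": "3.0.1"}
stale = count_stale_pins(pinned_2025, latest_2026)  # 2 of 3 pins are behind
```

A real audit would also pull advisory data for each stale pin, but even this crude count turns "months of silence" into a number you can put in front of a review.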

Each of these failures is silent by default. Each is amplified by the fact that your agent is calling the server on every relevant turn, with real credentials, against real systems.

The Dependency Audit Framing That Actually Works

Teams that have survived npm's long tail already know the pattern. The question is not "is this dependency open source?" The question is "who owns the failure when this stops working?" For MCP, a small number of signals carry most of the information.

  • Last commit date. Bucket it into weeks, months, years. Weeks is healthy. Months is yellow in a protocol evolving this fast. Years is a gravestone.
  • Release cadence. Versioned releases and a changelog beat main-only development. If there are no tags, there is no concept of what you are depending on.
  • Maintainer identity. A single GitHub handle behind a hobby repo is a different risk profile from a vendor's first-party server, an enterprise-backed project, or a named working group. Ask whether the repo owner's incentive to keep shipping is structural (it's their job) or optional (it was a weekend).
  • Issue queue behavior. Raw open count matters less than response latency to security reports and the presence of triage labels. A quiet repo with 300 stale issues is not "stable" — it's dormant.
  • CI discipline. Does the server have tests? Do they pass? Is there anything enforcing that a commit to main works against the upstream API it wraps? A server whose CI has been red for six months is already half-abandoned.
  • Upstream contract. Is the wrapped API stable and versioned? Is there a changelog? If the thing the shim wraps moves quickly and the shim does not, the shim has a half-life.

Apply these signals before adoption, not after an incident. They do not catch every problem — they are blunt heuristics — but they are enough to sort a candidate server into "trust and monitor" versus "do not put on the critical path."
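The signals above compose naturally into a blunt scoring function. This is a sketch, not a standard: thresholds and field names are illustrative, and the metadata would come from something like the GitHub API before adoption.

```python
# Sketch of the audit signals as a heuristic classifier. Thresholds are
# illustrative; tune them to your risk tolerance.
from datetime import date

def classify_server(meta: dict, today: date) -> str:
    """Sort a candidate MCP server into 'trust-and-monitor' or 'off-critical-path'."""
    days_silent = (today - meta["last_commit"]).days
    if days_silent > 365:
        return "off-critical-path"  # years of silence: a gravestone
    red_flags = 0
    if days_silent > 60:
        red_flags += 1              # months is yellow in a fast-moving protocol
    if not meta["has_release_tags"]:
        red_flags += 1              # main-only: no concept of what you depend on
    if not meta["ci_green"]:
        red_flags += 1              # red CI is already half-abandoned
    if meta["maintainer"] == "individual":
        red_flags += 1              # optional incentive, not structural
    return "off-critical-path" if red_flags >= 2 else "trust-and-monitor"

hobby = {"last_commit": date(2025, 10, 1), "has_release_tags": False,
         "ci_green": True, "maintainer": "individual"}
verdict = classify_server(hobby, today=date(2026, 3, 1))
```

The value is not in the exact weights; it is in forcing the adoption conversation to name each signal instead of glancing at a star count.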

The Fork-and-Vendor Decision Tree

Once a server matters to your product, "wait and see" stops being a strategy. The options reduce to a short decision tree.

If the server is critical path and the upstream API is stable but the maintainer is unresponsive, fork and vendor it. Bring the code into your repo, take ownership of the dependency graph, and accept that you now own the CVE patch queue. This is the cheapest option when the hard part is done and only maintenance remains. It also leaves the community copy in place for lower-stakes users.

If the server is critical path and the upstream API is unstable, do not fork — build a thin in-house wrapper against the API directly. You were going to be maintaining the shim's contract with the upstream anyway; skip the layer of someone else's abstractions and write the hundred-to-three-hundred lines of code yourself. The in-house wrapper gives you explicit tool descriptions that you authored (no injection surface from unknown strings), explicit version pinning, an explicit dependency audit, and a team on the other end of an incident. The cost is engineering time. The benefit is that nothing about the server is orphaned-by-default.

If the server is non-critical and a vendor-first-party alternative exists, swap to the alternative. The vendor has structural reasons to keep shipping; the community has optional ones.

If nothing else applies, write your own and accept the ownership tax upfront. The tax is cheaper than every subsequent incident that traces back to a piece of code nobody on your team has read in a year.
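The decision tree above can be sketched as a function. The predicates are judgment calls you make by hand; the names here are illustrative, and the point is that the branches are exhaustive and ordered:

```python
# Sketch of the fork-and-vendor decision tree. Inputs are human judgments;
# the function just makes the ordering of the branches explicit.

def mcp_dependency_decision(critical_path: bool, upstream_stable: bool,
                            maintainer_responsive: bool,
                            vendor_alternative: bool) -> str:
    if critical_path and upstream_stable and not maintainer_responsive:
        return "fork-and-vendor"     # own the code and the CVE patch queue
    if critical_path and not upstream_stable:
        return "build-thin-wrapper"  # skip the shim, wrap the API yourself
    if not critical_path and vendor_alternative:
        return "swap-to-first-party" # structural incentives beat optional ones
    return "write-your-own"          # accept the ownership tax upfront

choice = mcp_dependency_decision(critical_path=True, upstream_stable=False,
                                 maintainer_responsive=False,
                                 vendor_alternative=False)
```

Writing it down this way also surfaces the implicit default: anything that falls through the named branches lands on "you own it now."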

"But It's Open Source" Is Not a Substitute for Ownership

Open source licenses permit you to fork. They do not guarantee anyone ever will. "The code is available" is a license claim, not a maintenance claim, and the two get conflated every time someone adopts a community MCP server on the strength of a star count.

Every agent that calls an orphaned server is importing the orphan's future failure mode into its own product surface. When the upstream rotates credentials, when the CVE drops, when the breaking change lands, your agent is the one that fails on a customer call. The runbook entry reads "contact the maintainer," and the maintainer has been silent since last October.

The ecosystem will bifurcate. Vendors whose APIs matter — GitHub, Figma, Stripe, Linear, the cloud consoles — will ship first-party MCP servers as product surface area, because the alternative is watching a broken community shim represent their brand. Community servers will remain valuable for exploration and prototyping. The split is already visible if you look at which servers ship with changelogs and which ship with a README.md that was last edited at adoption time.

For anything that sits on your critical path, assume the community server will be effectively abandoned within twenty-four months and plan for that possibility now. Vendor it, wrap it, replace it, or budget explicitly for inheriting its maintenance.

The question to ask before every MCP adoption, in one line: if this repo went silent tomorrow, what breaks, and who on my team owns the fix? If the honest answer is "nothing, because we'll notice" or "nobody, it's open source," you have not made a dependency decision. You have made a bet that silence is cheap.

It isn't, and the graveyard keeps growing.
