2 posts tagged with "vendor-lock-in"

LLM Provider Lock-in: The Portability Patterns That Actually Work

· 8 min read
Tian Pan
Software Engineer

Everyone talks about avoiding LLM vendor lock-in. The advice usually boils down to "use an abstraction layer" — as if swapping openai.chat.completions.create for litellm.completion solves the problem. It doesn't. The API call is the easy part. The real lock-in is invisible: it lives in your prompts, your evaluation data, your tool-calling assumptions, and the behavioral quirks you've unconsciously designed around.
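To make the point concrete, here is a minimal, hypothetical sketch. The provider names and response payloads are illustrative stand-ins (loosely modeled on the nested `tool_calls` shape one provider family returns versus the typed content blocks another uses), not real API calls. The wrapper unifies the request signature, which is the easy part; the response-parsing code still branches on provider-specific shapes, which is where the coupling actually lives.

```python
import json

def provider_a_complete(model, messages):
    # Stand-in for a provider that nests tool calls under choices -> message -> tool_calls,
    # with arguments as a JSON string.
    return {"choices": [{"message": {"tool_calls": [
        {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
    ]}}]}

def provider_b_complete(model, messages):
    # Stand-in for a provider that returns tool use as typed content blocks,
    # with arguments as a parsed object.
    return {"content": [
        {"type": "tool_use", "name": "get_weather", "input": {"city": "Paris"}}
    ]}

def complete(provider, model, messages):
    # The "abstraction layer": one call signature for both backends.
    # This is the part a wrapper library solves for you.
    backend = {"a": provider_a_complete, "b": provider_b_complete}[provider]
    return backend(model, messages)

def extract_tool_call(provider, response):
    # The invisible lock-in: parsing still knows each provider's response
    # shape, so every call site downstream of the wrapper is still coupled.
    if provider == "a":
        call = response["choices"][0]["message"]["tool_calls"][0]["function"]
        return call["name"], json.loads(call["arguments"])
    block = next(b for b in response["content"] if b["type"] == "tool_use")
    return block["name"], block["input"]
```

Both backends yield the same `("get_weather", {"city": "Paris"})` tuple, but only because the parsing logic is written twice, once per provider. Multiply that by streaming formats, error shapes, and tool-schema dialects, and the wrapper has bought you very little.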

Provider portability isn't a boolean. It's a spectrum, and most teams are further from the portable end than they think. The good news is that the patterns for genuine portability are well understood — they just require more discipline than dropping in a wrapper library.

Provider Lock-In Anatomy: The Seven Coupling Points That Make Switching LLM Providers a 6-Month Project

· 10 min read
Tian Pan
Software Engineer

Every team that ships an LLM-powered feature eventually has the same conversation: "What if we need to switch providers?" The standard answer — "we'll just swap the API key" — reveals a dangerous misunderstanding of where coupling actually lives. In practice, teams that attempt a provider migration discover that the API endpoint is the least of their problems. The real lock-in hides in seven distinct coupling points, each capable of turning a "quick swap" into a quarter-long project.

Migration routinely consumes 20–50% of the original development effort. Enterprise teams that treat model switching as plug-and-play grapple with broken outputs, ballooning token costs, and shifts in reasoning quality that take weeks to diagnose. Understanding where these coupling points live before you need to migrate is the difference between a controlled transition and an emergency scramble.