2 posts tagged with "portability"

LLM Vendor Lock-In Is a Spectrum, Not a Binary

Tian Pan, Software Engineer · 10 min read

A team builds a production feature on GPT-4. Months later, they decide to evaluate Claude for cost reasons. They spend two weeks "migrating"—but the core API swap takes an afternoon. The remaining ten days go toward fixing broken system prompts, re-testing refusal edge cases, debugging JSON parsers that choke on unexpected prose, and re-tuning tool-calling schemas that behave differently across providers. Migration estimates that assumed a simple connector swap balloon into a multi-layer rebuild.
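One of those ten days of fixes is representative: a parser that assumes pure-JSON replies from one model breaks when another wraps the JSON in prose or markdown fences. Here is a minimal sketch of a more tolerant extractor, in Python (`extract_json` is a hypothetical helper for illustration, not code from the post):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first JSON object out of a reply that may wrap it in prose.

    Providers differ in how often they surround JSON with explanations or
    markdown fences, so a strict json.loads(reply) that worked on one model
    can break on another.
    """
    # Happy path: the reply is pure JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Fallback: grab the outermost {...} span, ignoring fences and prose.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))
```

The point isn't this particular fallback; it's that the fix lives in application code an abstraction layer never sees.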

This is the LLM vendor lock-in problem in practice. And the teams that get burned aren't the ones who chose the wrong provider—they're the ones who didn't recognize that lock-in exists on multiple axes, each with a different risk profile.

LLM Provider Lock-in: The Portability Patterns That Actually Work

Tian Pan, Software Engineer · 8 min read

Everyone talks about avoiding LLM vendor lock-in. The advice usually boils down to "use an abstraction layer" — as if swapping openai.chat.completions.create for litellm.completion solves the problem. It doesn't. The API call is the easy part. The real lock-in is invisible: it lives in your prompts, your evaluation data, your tool-calling assumptions, and the behavioral quirks you've unconsciously designed around.
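For concreteness, here is roughly the swap that advice prescribes. Both entry points are the libraries' real APIs; the model names are only illustrative:

```python
from openai import OpenAI
from litellm import completion

messages = [{"role": "user", "content": "Summarize this ticket."}]

# Before: direct OpenAI call.
client = OpenAI()
resp = client.chat.completions.create(model="gpt-4", messages=messages)

# After: the same request through litellm, now pointed at Claude.
resp = completion(model="claude-3-5-sonnet-20240620", messages=messages)
```

Nothing about the prompts, the output parsing, or the tool-calling assumptions changes in that swap, which is exactly why it isn't a migration.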

Provider portability isn't a boolean. It's a spectrum, and most teams are further from the portable end than they think. The good news is that the patterns for genuine portability are well understood — they just require more discipline than dropping in a wrapper library.
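As a taste of what that discipline looks like, one such pattern is to treat prompts as versioned, provider-keyed data rather than strings inlined at call sites. The structure below is an illustrative assumption, not a prescription from the post:

```python
# Prompts keyed by (task, provider), so provider-specific wording is
# explicit and reviewable instead of buried in application code.
SYSTEM_PROMPTS = {
    ("summarize", "openai"): "You are a terse summarizer. Reply with JSON only.",
    ("summarize", "anthropic"): (
        "You are a terse summarizer. Reply with a single JSON object "
        "and no surrounding text."
    ),
}

def system_prompt(task: str, provider: str) -> str:
    # Fall back to the openai variant so a new provider fails soft, not silently.
    return SYSTEM_PROMPTS.get((task, provider), SYSTEM_PROMPTS[(task, "openai")])
```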