
5 posts tagged with "migration"


Agent Memory Schema Evolution Is Protobuf on Hard Mode

· 11 min read
Tian Pan
Software Engineer

The first painful agent-memory migration always teaches the same lesson: there were two schemas, and you only migrated one of them. The storage layer is fine — every row was rewritten, every key is in its new shape, the backfill job logged success. The agent is broken anyway. It keeps writing to user.preferences.theme, retrieves nothing, then helpfully synthesizes a default from context as if the key never existed. The migration runbook reports green. Users report stale memory.

The asymmetry is structural. A traditional service that depends on a renamed column gets a hard error and you fix it. An agent that depends on a renamed memory key gets a soft miss and confabulates around it. The schema lives in two places — your store and the model's context — and you can only migrate one of them with a SQL script.

Protobuf solved a version of this problem twenty years ago by codifying an additive-only discipline: fields are forever, numbers are forever, wire types never change, and removal is replaced with deprecation. That discipline is the right starting point for agent memory, with one extra constraint that makes it harder. Protobuf receivers ignore unknown fields by design. Agents don't.
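One way to apply that additive-only discipline to the store side is a read-path alias layer: deprecated keys resolve to their successors forever, so an agent whose context still references the old key gets a hit instead of a silent miss. This is a minimal sketch under assumed names — `KEY_ALIASES`, `AliasingMemoryStore`, and the `user.ui.theme` successor key are all illustrative, not from any real system.

```python
# Hypothetical sketch: read-path aliasing for renamed memory keys.
# Mirrors protobuf's "deprecate, never delete" rule: the old key stays
# resolvable forever, so an agent still writing user.preferences.theme
# gets the migrated value instead of a soft miss it can confabulate around.

KEY_ALIASES = {
    # old key -> new key (names are illustrative)
    "user.preferences.theme": "user.ui.theme",
}

class AliasingMemoryStore:
    def __init__(self, backing: dict):
        self._store = backing

    def resolve(self, key: str) -> str:
        seen = set()
        while key in KEY_ALIASES:        # follow alias chains, guard cycles
            if key in seen:
                raise ValueError(f"alias cycle at {key}")
            seen.add(key)
            key = KEY_ALIASES[key]
        return key

    def get(self, key: str):
        return self._store.get(self.resolve(key))

    def put(self, key: str, value):
        # writes land on the canonical key, so the data never forks
        self._store[self.resolve(key)] = value

store = AliasingMemoryStore({"user.ui.theme": "dark"})
assert store.get("user.preferences.theme") == "dark"  # old key still resolves
```

The point of routing writes through `resolve` as well is that both halves of the schema converge on the canonical key, whichever one the model happens to emit.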

Model Migration as Database Migration: Safely Switching LLM Providers Without Breaking Production

· 10 min read
Tian Pan
Software Engineer

When your team decides to upgrade from Claude 3.5 Sonnet to Claude 3.7, or migrate from OpenAI to a self-hosted Llama deployment, the instinct is to treat it like a library upgrade: change the API key, update the model name string, run a quick sanity check, and ship. This instinct is wrong, and the teams that follow it discover why at 2 AM in week two when a customer support agent starts producing responses in a completely different format — technically valid, semantically disastrous.

Switching LLM providers or model versions is structurally identical to a database schema migration. Both involve changing the behavior of a system that the rest of your application has implicit contracts with. Both can look fine on day one and fail catastrophically on day ten. Both require dual-running, canary deployment, rollback criteria, and a migration playbook — not a config change followed by a Slack message.
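The dual-running step can be sketched in a few lines: serve the incumbent model, shadow the candidate on the same traffic, and log disagreement so the rollback decision is data-driven rather than vibes-driven. Everything here is an assumption for illustration — `call_old_model` and `call_new_model` stand in for real provider calls, and string similarity stands in for whatever output metric your application actually cares about.

```python
# Hypothetical dual-run harness: users only ever see the incumbent model's
# output; the candidate runs in shadow and its disagreement is recorded.
import difflib

def call_old_model(prompt: str) -> str:    # stand-in for the current provider
    return f"old:{prompt}"

def call_new_model(prompt: str) -> str:    # stand-in for the candidate
    return f"new:{prompt}"

shadow_log = []

def serve(prompt: str) -> str:
    primary = call_old_model(prompt)           # this is what gets returned
    try:
        candidate = call_new_model(prompt)     # shadow call; failures ignored
        similarity = difflib.SequenceMatcher(None, primary, candidate).ratio()
        shadow_log.append({"prompt": prompt, "similarity": similarity})
    except Exception:
        shadow_log.append({"prompt": prompt, "similarity": None})
    return primary

serve("hello")
# Promote the candidate only if agreement stays above a pre-committed
# threshold across the canary window -- that threshold is your rollback
# criterion, written down before the migration starts.
```

The crucial property is that the candidate's failures cannot reach users: the shadow call is wrapped so a timeout or malformed response becomes a logged data point, not an incident.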

The Model Portability Tax: How to Architect AI Systems You Can Actually Migrate

· 9 min read
Tian Pan
Software Engineer

You inherited an AI feature built on GPT-4-turbo. The model is being deprecated. Your manager wants to cut costs by switching to a newer, cheaper model. You run a quick test, metrics look passable, you ship it — and a week later, accuracy on your core use case drops 22%. Support tickets climb. You're now in a crisis migration rather than a planned one.

This is the model portability tax: the hidden engineering cost that accumulates every time you couple your application logic tightly to a specific foundation model. Every team pays it. Most don't realize how large the bill has gotten until the invoice arrives.
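The architectural fix is to keep the coupling behind a narrow seam: application code depends on a small contract, and each vendor lives behind its own adapter. The sketch below assumes hypothetical adapter classes and a trivial `complete` signature; a real one would also normalize parameters, errors, and streaming.

```python
# Hypothetical portability seam: application logic depends on a protocol,
# never on a vendor SDK, so switching models means writing one new adapter.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, system: str, user: str) -> str: ...

class OpenAIAdapter:
    def complete(self, system: str, user: str) -> str:
        # a real adapter would call the OpenAI SDK here; stubbed for the sketch
        return "openai-response"

class LlamaAdapter:
    def complete(self, system: str, user: str) -> str:
        # a real adapter would call a self-hosted inference endpoint here
        return "llama-response"

def summarize(model: ChatModel, text: str) -> str:
    # the feature code has no idea which provider is underneath
    return model.complete("You are a summarizer.", text)
```

The tax never drops to zero — prompts and output parsing still encode model-specific behavior — but a seam like this confines the rewrite to one adapter instead of every call site.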

Model Deprecation Readiness: Auditing Your Behavioral Dependency Before the 90-Day Countdown

· 8 min read
Tian Pan
Software Engineer

When Anthropic deprecated a Claude model last year, a company noticed — but only because a downstream parser started throwing errors in production. The culprit? The new model occasionally wrapped its JSON responses in markdown code blocks. The old model never did. Nobody had documented that assumption. Nobody had tested for it. The fix took an afternoon; the diagnosis took three days.

That pattern — silent behavioral dependency breaking loudly in production — is the defining failure mode of model migrations. You update a model ID, run a quick sanity check, and ship. Six weeks later, something subtle is wrong. Your JSON parsing is 0.6% more likely to fail. Your refusal rate on edge cases doubled. Your structured extraction misses a field it used to reliably populate. The diff isn't in the code — it's in the model's behavior, and you never wrote a contract for it.

With major providers now running on 60–180 day deprecation windows, and the pace of model releases accelerating, this is no longer a theoretical concern. It's a recurring operational challenge. Here's how to get ahead of it.
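The markdown-fence failure described above is exactly the kind of assumption worth pinning down as an explicit contract test before the next deprecation window opens. This is a minimal sketch: a parser that tolerates both response shapes, plus assertions that encode the contract — the regex and function name are illustrative, not from any particular codebase.

```python
# Minimal sketch: tolerate the "JSON wrapped in a markdown code block"
# drift described above, and write the assumption down as a contract test.
import json
import re

def extract_json(raw: str) -> dict:
    # strip an optional ```json ... ``` fence before parsing
    m = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    payload = m.group(1) if m else raw
    return json.loads(payload)

# Behavioral contract: both shapes must parse, whichever model is live.
assert extract_json('{"ok": true}') == {"ok": True}
assert extract_json('```json\n{"ok": true}\n```') == {"ok": True}
```

Run against every candidate model before cutover, a suite of assertions like these turns a three-day diagnosis into a failed CI check.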

Provider Lock-In Anatomy: The Seven Coupling Points That Make Switching LLM Providers a 6-Month Project

· 10 min read
Tian Pan
Software Engineer

Every team that ships an LLM-powered feature eventually has the same conversation: "What if we need to switch providers?" The standard answer — "we'll just swap the API key" — reveals a dangerous misunderstanding of where coupling actually lives. In practice, teams that attempt a provider migration discover that the API endpoint is the least of their problems. The real lock-in hides in seven distinct coupling points, each capable of turning a "quick swap" into a quarter-long project.

Migration expenses routinely consume 20–50% of original development time. Enterprise teams that treat model switching as plug-and-play grapple with broken outputs, ballooning token costs, and shifts in reasoning quality that take weeks to diagnose. Understanding where these coupling points are — before you need to migrate — is the difference between a controlled transition and an emergency scramble.