
2 posts tagged with "model-upgrades"


Few-Shot Rot: Why Yesterday's Examples Hurt Today's Model

· 10 min read
Tian Pan
Software Engineer

A team I worked with had a JSON-extraction prompt with eleven hand-tuned few-shot examples. On the previous model, those examples lifted exact-match accuracy by six points. After the model upgrade, the same eleven examples dragged accuracy down by two. Nobody changed the prompt. Nobody changed the eval set. The examples simply stopped working — and worse, started actively misdirecting.

That regression is not a bug in the new model. It is a rot pattern in the prompt itself, and it shows up every time a team migrates between model versions while treating the prompt as a fixed asset. Few-shot examples are not part of the prompt. They are part of the model-prompt pair. Migrating one without re-evaluating the other produces a regression that no eval suite tied to a single model version will catch.
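To make that concrete, here is a minimal sketch of the check the post argues for: score the same eval set with and without the few-shot block, once per model version, and watch the delta. The `call_model`, `build_prompt`, and eval-case shapes are hypothetical stand-ins, not the team's actual harness.

```python
# Sketch only: per-model few-shot regression check.
# `call_model` wraps one model version; `build_prompt` renders the prompt
# with or without the few-shot block; `cases` are (input, expected) pairs.
from typing import Callable, Iterable


def exact_match_accuracy(
    call_model: Callable[[str], str],
    build_prompt: Callable[[str, bool], str],
    cases: Iterable[tuple[str, str]],
    with_examples: bool,
) -> float:
    cases = list(cases)
    hits = sum(
        call_model(build_prompt(text, with_examples)).strip() == expected.strip()
        for text, expected in cases
    )
    return hits / len(cases)


def fewshot_delta(call_model, build_prompt, cases) -> float:
    """Accuracy lift (or drag) attributable to the few-shot block on THIS model."""
    with_ex = exact_match_accuracy(call_model, build_prompt, cases, with_examples=True)
    without = exact_match_accuracy(call_model, build_prompt, cases, with_examples=False)
    return with_ex - without


# Run fewshot_delta for each model version you migrate to; a delta that was
# positive on the old model and goes negative on the new one is the rot signal.
```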

Tokenizer Churn: The Silent Breaking Change Inside Your 'Compatible' Model Upgrade

· 11 min read
Tian Pan
Software Engineer

The vendor said the upgrade was a drop-in replacement. The API contract held. The model name in your config barely changed. A week later, your context-window guard starts triggering on prompts it never tripped on before, your stop-sequence regex matches in the wrong place, and one of your few-shot examples starts producing a confidently wrong answer that your eval suite happens not to cover. Nobody touched the prompt. Nobody touched the temperature. Somebody quietly retrained the tokenizer.

Tokenizer changes are the breaking change vendors don't call breaking. The API surface is byte-stable, the SDK didn't bump a major version, and the release notes mention "improved instruction following" — but the function from your input string to the integer sequence the model actually sees has been replaced. Every assumption your code made about how text becomes tokens is now subtly wrong. The cost of that invisibility is two weeks of "the model feels different" before someone re-runs a canonical prompt through count_tokens and finds the answer.
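A minimal sketch of turning that two-week discovery into a one-minute CI failure, assuming you can pass in the vendor's token counter for each model version; the prompt names and `count_tokens` wiring here are illustrative, not a specific SDK's API.

```python
# Sketch only: tokenizer-drift canary.
# Pin token counts for a few canonical prompts at deploy time, then re-count
# them against the upgraded model and fail the build if anything moved.
from typing import Callable

CANONICAL_PROMPTS = {
    "json_extraction": "Extract the fields below as JSON...",
    "longest_production_prompt": "...",
}


def snapshot_token_counts(count_tokens: Callable[[str], int]) -> dict[str, int]:
    """Token count of each canonical prompt under one model/tokenizer."""
    return {name: count_tokens(text) for name, text in CANONICAL_PROMPTS.items()}


def tokenizer_drift(baseline: dict[str, int], current: dict[str, int]) -> dict[str, int]:
    """Per-prompt change in token count. Any nonzero entry means the
    string-to-token mapping moved, so context-window guards and
    stop-sequence assumptions need review."""
    return {
        name: current[name] - baseline[name]
        for name in baseline
        if current.get(name, baseline[name]) != baseline[name]
    }
```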