2 posts tagged with "drift-detection"

User-Side Concept Drift: When Your Prompt Held but Your Users Moved

10 min read
Tian Pan
Software Engineer

Most teams set up drift monitoring on the wrong side of the contract. They watch the model — capability shifts when a vendor pushes a new checkpoint, output distribution changes after a prompt rewrite, refusal-rate spikes that signal a safety filter retune. The dashboards are detailed, the alerts are wired into PagerDuty, and the team has a runbook for "the model moved." None of that helps when the model didn't move and the dashboard still goes red, because the thing that shifted was your users.

User-side concept drift is the version of this problem that almost every eval pipeline misses. Your prompt, your model, and your tools are byte-identical to the day you launched. Your golden test set still scores 91%. But the prompt that hit 91% in week one is now serving 78% in week thirty, because the input distribution has moved underneath it — users learned the product and changed how they ask, vocabulary mutated, seasonal task types appeared, a competitor reframed the category, a viral thread taught a new way to phrase the same intent. The model and prompt held. The contract held. The world the contract was negotiated against did not.
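One lightweight way to catch this kind of shift, sketched below under assumptions the post doesn't spell out: keep a reference window of launch-week inputs and periodically compare it against a recent production window, alerting when the input distribution moves even though the golden set stays green. The token-level Jensen-Shannon divergence, the 0.1 threshold, and the sample queries are illustrative choices, not anything prescribed here.

```python
from collections import Counter
import math

def token_distribution(queries, vocab):
    # Relative frequency of each vocab token across a window of user queries.
    counts = Counter(tok for q in queries for tok in q.lower().split())
    total = sum(counts[t] for t in vocab) or 1
    return [counts[t] / total for t in vocab]

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence (base 2, bounded in [0, 1]) between two distributions.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Reference window: how users phrased the intent at launch (illustrative data).
launch_week = [
    "book a meeting with the sales team",
    "schedule a demo call for tuesday",
]
# Current window: same intent, new vocabulary (illustrative data).
this_week = [
    "grab time on my calendar with sales",
    "throw a sync on the cal for tues",
]

vocab = sorted({tok for q in launch_week + this_week for tok in q.lower().split()})
drift = js_divergence(token_distribution(launch_week, vocab),
                      token_distribution(this_week, vocab))
if drift > 0.1:  # illustrative threshold; calibrate against your own week-to-week noise
    print(f"input distribution drifted: JS divergence = {drift:.3f}")
```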

Eval-Prod Drift: The Agent Under Test Isn't the Agent in Production

11 min read
Tian Pan
Software Engineer

The eval suite is green. The dashboard is green. A week later, support is drowning in the same complaint: "the assistant keeps refusing to book the meeting." You open the eval harness, replay the failing trace, and it works. Perfectly. Every time. The bug is not in your eval, and it is not in your model. The bug is that the agent your eval is measuring and the agent your customer is talking to are no longer the same system, and nobody has admitted it yet.

Eval-prod drift is the slow, unattributed divergence between what your eval harness loads into the agent and what your serving stack actually assembles at request time. Prompts, model pins, tool schemas, guardrail configs, and feature flags each flow into the agent through different deployment paths — code merges, config pushes, prompt-registry webhooks, experimentation platforms, runtime rollouts — and almost no team has a single source of truth that reconciles them. So the eval harness ends up measuring the version of the agent that exists in someone's PR branch, while production is running a union of yesterday's hotfix, last week's flag variant, and whatever the tool team pushed without telling anyone.
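A minimal sketch of one way to make that divergence visible, assuming a config shape the post doesn't specify: have both the eval harness and the serving stack fingerprint the fully-resolved agent config (prompt version, model pin, tool schemas, flag variants) and compare the hashes on every run. The field names, resolved values, and example payloads below are hypothetical.

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    # Canonical JSON so key order and whitespace can't produce false drift signals.
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# What the eval harness believes it is testing (hypothetical example).
eval_config = {
    "prompt_version": "v42",
    "model_pin": "model-2024-06-01",
    "tool_schemas": {"calendar.book": "1.3.0"},
    "flags": {"strict_refusals": False},
}

# What the serving stack actually assembled at request time (hypothetical example).
prod_config = {
    "prompt_version": "v42",
    "model_pin": "model-2024-06-01",
    "tool_schemas": {"calendar.book": "1.4.0"},  # tool team pushed a change
    "flags": {"strict_refusals": True},          # flag variant from a runtime rollout
}

eval_fp, prod_fp = config_fingerprint(eval_config), config_fingerprint(prod_config)
if eval_fp != prod_fp:
    drifted = {k for k in eval_config if eval_config[k] != prod_config.get(k)}
    print(f"eval-prod drift: {eval_fp} != {prod_fp}, fields changed: {sorted(drifted)}")
```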

This is not a theoretical failure mode. It is the default state of any agent system older than three months whose config lives in more than one repository.