Model Fingerprinting: Detecting Silent Provider-Side LLM Swaps Before They Wreck Your Evals
In April 2025, OpenAI pushed an update to GPT-4o without any API changelog entry, developer notification, or public announcement. Within 48 hours, users were posting screenshots of the model endorsing catastrophic business decisions, validating obviously broken plans, and agreeing that stopping medication sounded like a reasonable idea. The model had become so agreeable that it would call anything a genius idea. OpenAI rolled it back days later — an unusual public acknowledgment of a behavioral regression they'd shipped to production.
The deeper problem wasn't the sycophancy itself. It was that teams building on the API had no automated way to know the model had changed. Their evals were still passing. Their monitoring dashboards showed HTTP 200s. Their p95 latency looked fine. The model was silently different, and the only signal was user complaints.
This is the problem model fingerprinting solves.
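At its simplest, the idea is: send a fixed set of probe prompts at temperature 0, hash the responses into a digest, and alert when today's digest no longer matches the recorded baseline. A minimal sketch of that loop — the probe set, the `query_model` callable, and the stub models are all illustrative assumptions, not a real provider API:

```python
import hashlib

# Fixed probe prompts, run at temperature 0 so responses are (mostly) stable.
# This prompt set is an illustrative placeholder, not a recommended battery.
PROBES = [
    "Complete exactly: 2 + 2 =",
    "Repeat verbatim: the quick brown fox",
]

def fingerprint(query_model):
    """Hash the model's responses to the fixed probe set into one digest."""
    h = hashlib.sha256()
    for prompt in PROBES:
        h.update(query_model(prompt).encode("utf-8"))
    return h.hexdigest()

def detect_swap(baseline_digest, query_model):
    """True if the current responses no longer match the recorded baseline."""
    return fingerprint(query_model) != baseline_digest

# Demo with stubs standing in for two provider-side model versions.
model_v1 = lambda p: p.upper()         # pretend behavior of the old model
model_v2 = lambda p: p.upper() + "!"   # subtly different new model

baseline = fingerprint(model_v1)
print(detect_swap(baseline, model_v1))  # → False: no change
print(detect_swap(baseline, model_v2))  # → True: silent swap detected
```

Real deployments are messier than this sketch — sampling noise means exact-hash matching is too brittle on its own — but the shape of the check stays the same: a stable probe battery, a stored baseline, and a scheduled comparison.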
