
107 posts tagged with "evaluation"


The Evaluation Paradox: How Goodhart's Law Breaks AI Benchmarks

· 10 min read
Tian Pan
Software Engineer

In late 2024, OpenAI's o3 system scored 75.7% on the ARC-AGI benchmark — a test specifically designed to resist optimization. The AI research community celebrated. Then practitioners looked closer: o3 had been trained on 75% of the benchmark's public training set, and the highest-compute configuration used 172 times more resources than the baseline. It wasn't a capability breakthrough. It was a score dressed up as one.

This is the evaluation paradox. The moment a benchmark becomes the thing teams optimize for, it stops measuring what it was designed to measure. Goodhart's Law — "when a measure becomes a target, it ceases to be a good measure" — was coined in the context of 1970s British monetary policy, but it describes AI benchmarking with eerie precision.

Why Hallucination Rate Is the Wrong Primary Metric for Production LLM Systems

· 8 min read
Tian Pan
Software Engineer

Your LLM's hallucination rate is 3%. Your users hate it anyway. This isn't a contradiction — it's a symptom of measuring the wrong thing.

Hallucination rate has become the default headline metric for LLM quality because it's easy to explain to stakeholders and straightforward to compute on a benchmark. But in production, it correlates poorly with what users actually care about: did the task get done, was the result trustworthy enough to act on, and did the system save them time?
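
To make the alternative concrete, here is a minimal sketch of tracking task-level outcomes next to hallucination rate. The session fields (`hallucinated`, `task_completed`, `user_accepted`) are illustrative stand-ins for product telemetry, not a real schema:

```python
# Illustrative sketch: score sessions on outcomes users care about, not
# just on whether the model hallucinated. Records here are placeholders.
sessions = [
    {"hallucinated": False, "task_completed": True,  "user_accepted": True},
    {"hallucinated": False, "task_completed": False, "user_accepted": False},
    {"hallucinated": True,  "task_completed": True,  "user_accepted": True},
    # ... real production sessions would come from telemetry
]

n = len(sessions)
print(f"hallucination rate: {sum(s['hallucinated'] for s in sessions) / n:.0%}")
print(f"task completion:    {sum(s['task_completed'] for s in sessions) / n:.0%}")
print(f"user acceptance:    {sum(s['user_accepted'] for s in sessions) / n:.0%}")
```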

The Long-Tail Coverage Problem: Why Your AI System Fails Where It Matters Most

· 10 min read
Tian Pan
Software Engineer

A medical AI deployed to a hospital achieves 97% accuracy in testing. It passes every internal review, gets shipped, and then quietly fails to detect parasitic infections when parasite density drops below 1% of cells — the exact scenario where early intervention matters most. Nobody notices until a physician flags an unusual miss rate on a specific patient population.

This is the long-tail coverage problem. Your aggregate metrics look fine. Your system is broken for the inputs that matter.
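
A sliced eval is the standard countermeasure. The sketch below uses invented numbers to show how a healthy-looking aggregate can hide a slice that fails most of the time; the slice names and counts are illustrative:

```python
# Report accuracy per slice, not just in aggregate, so rare-but-critical
# slices can't hide behind the headline number.
from collections import defaultdict

results = (
    [{"slice": "common_case", "correct": True}] * 960
    + [{"slice": "common_case", "correct": False}] * 10
    + [{"slice": "low_density_parasite", "correct": True}] * 6
    + [{"slice": "low_density_parasite", "correct": False}] * 24
)

correct, total = defaultdict(int), defaultdict(int)
for r in results:
    total[r["slice"]] += 1
    correct[r["slice"]] += r["correct"]

overall = sum(correct.values()) / sum(total.values())
print(f"aggregate accuracy: {overall:.1%}")  # ~96.6%, looks fine
for s in total:
    # low_density_parasite comes out at 20.0% — broken where it matters
    print(f"  {s}: {correct[s] / total[s]:.1%} over {total[s]} cases")
```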

The Mental Model Shift That Separates Good AI Engineers from the Rest

· 10 min read
Tian Pan
Software Engineer

The most common pattern among engineers who struggle with AI work isn't a lack of technical knowledge. It's that they keep asking the wrong question. They want to know: "Does this work?" What they should be asking is: "At what rate does this fail, and is that rate acceptable for this use case?"

That single shift — from binary correctness to acceptable failure rates — is the core of what experienced AI engineers think differently about. It sounds simple. It isn't. Everything downstream of it is different: how you debug, how you test, how you deploy, what you monitor, what you build your confidence on. Engineers who haven't made this shift will keep fighting their tools and losing.
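
As a rough illustration of that framing, here is a sketch that treats a release decision as a statistical question: given an observed failure count over N eval runs, is the plausible true failure rate under your bar? The threshold and counts are made up; the Wilson interval is one standard choice, not a claim about any particular team's practice:

```python
# Instead of "does this work?", ask whether the plausible true failure
# rate clears a threshold chosen up front. Numbers are illustrative.
import math

def wilson_upper_bound(failures: int, trials: int, z: float = 1.96) -> float:
    """Upper bound of the Wilson score interval for the true failure rate."""
    if trials == 0:
        return 1.0
    p = failures / trials
    denom = 1 + z**2 / trials
    center = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (center + margin) / denom

MAX_ACCEPTABLE_FAILURE_RATE = 0.05  # hypothetical product quality bar

# Suppose 7 of 200 eval runs failed (3.5% observed).
upper = wilson_upper_bound(failures=7, trials=200)
print(f"observed {7 / 200:.1%} failures; 95% upper bound: {upper:.1%}")
# The upper bound (~7%) exceeds the 5% bar, so the sample is too small
# to ship on, even though the point estimate looks fine.
print("ship" if upper <= MAX_ACCEPTABLE_FAILURE_RATE else "don't ship yet")
```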

Prompt Localization Debt: The Silent Quality Tiers Hiding in Your Multilingual AI Product

· 9 min read
Tian Pan
Software Engineer

Your AI feature shipped with a 91% task success rate. You ran evals, iterated on your prompt, and tuned it until it hit your quality bar. Then you launched globally — and three months later a user in Tokyo files a support ticket saying your AI "doesn't really understand" their input. Your Japanese users have been silently working around a feature that performs 15–20 percentage points worse than what your English users experience. Nobody on your team noticed because nobody was measuring it.

This is prompt localization debt: the accumulating gap between how well your AI performs in the language you built it for and every other language your users speak. It doesn't announce itself in dashboards. It doesn't cause outages. It just quietly creates second-class users.
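
Measuring it is not hard once you decide to. A minimal sketch, assuming you tag each scored eval case with its language (the records below are placeholders):

```python
# Score the same eval suite per locale instead of in aggregate, so a
# weak language can't hide inside the blended number.
from collections import defaultdict

scored_cases = [
    {"lang": "en", "passed": True},
    {"lang": "en", "passed": True},
    {"lang": "ja", "passed": False},
    {"lang": "ja", "passed": True},
    # ... the rest of your scored eval set, tagged by language
]

passes, totals = defaultdict(int), defaultdict(int)
for case in scored_cases:
    totals[case["lang"]] += 1
    passes[case["lang"]] += case["passed"]

for lang, n in totals.items():
    print(f"{lang}: {passes[lang] / n:.0%} task success over {n} cases")
```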

Shadow Traffic for AI Systems: The Safest Way to Validate Model Changes Before They Ship

· 10 min read
Tian Pan
Software Engineer

Most teams ship LLM changes the way they shipped web changes in 2005 — they run some offline evals, convince themselves the numbers look fine, and push. The surprise comes on Monday morning when a system prompt tweak that passed every benchmark silently breaks the 40% of user queries that weren't in the eval set.

Shadow traffic is the fix. The idea is simple: run your candidate model or prompt in parallel with production, feed it every real request, compare the outputs, and only expose users to the current version. Zero user exposure, real production data, and statistical confidence before anyone sees the change. But applying this to LLMs requires rethinking almost every piece of the implementation — because language models are non-deterministic, expensive to evaluate, and produce outputs that can't be compared with a simple diff.
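
A minimal sketch of that shape, with hypothetical `call_model` and `log_comparison` stand-ins for your provider client and logging sink:

```python
# Shadow pattern: the candidate sees every real request, but only
# production's answer is ever returned to the user.
import asyncio

async def call_model(version: str, request: str) -> str:
    """Hypothetical model call; replace with your real provider client."""
    await asyncio.sleep(0)  # stands in for network latency
    return f"[{version}] answer to: {request}"

def log_comparison(request: str, prod: str, shadow: str) -> None:
    """Hypothetical sink: persist both outputs for offline scoring."""
    print({"request": request, "prod": prod, "shadow": shadow})

async def handle_request(request: str) -> str:
    # Both versions see the real request and run concurrently.
    prod_task = asyncio.create_task(call_model("prod-v1", request))
    shadow_task = asyncio.create_task(call_model("candidate-v2", request))
    prod_answer = await prod_task
    # In a real service the shadow comparison would be fire-and-forget so
    # it adds no user latency; it's awaited here so the sketch runs cleanly.
    log_comparison(request, prod_answer, await shadow_task)
    return prod_answer  # only production's answer reaches the user

print(asyncio.run(handle_request("summarize this contract")))
```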

How to Pick the Right LLM Before You Write a Single Prompt

· 10 min read
Tian Pan
Software Engineer

Most teams pick an LLM the same way they picked a database ten years ago: they look at a comparison table, pick the one with the highest score in the column they care about, and start building. Six months later, they're either migrating or wondering why their eval results look nothing like what users experience. The benchmark was right. The model was wrong for them.

The mistake isn't picking the wrong model — it's picking a model before you know what your actual production task distribution looks like. A benchmark tests what someone else decided matters; your production traffic has a completely different distribution, and the two can diverge badly.
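
One way to ground the choice, sketched here under assumptions: sample real requests from your production log and score each candidate on that sample rather than on someone else's benchmark. `score_output` and the model names are hypothetical placeholders:

```python
# Evaluate candidates on YOUR distribution: a random sample of real
# production tasks, scored however your product defines quality.
import random

production_log = [
    {"task": "extract dates from an invoice"},
    {"task": "summarize a support thread"},
    # ... thousands more real requests
]

candidates = ["model-a", "model-b"]  # hypothetical model names

def score_output(model: str, task: str) -> float:
    """Hypothetical 0..1 scorer (exact match, rubric, or LLM judge)."""
    return random.random()  # stand-in so the sketch runs

sample = random.sample(production_log, k=min(200, len(production_log)))
for model in candidates:
    mean = sum(score_output(model, c["task"]) for c in sample) / len(sample)
    print(f"{model}: mean score {mean:.2f} on {len(sample)} sampled tasks")
```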

Writing Acceptance Criteria for Non-Deterministic AI Features

· 12 min read
Tian Pan
Software Engineer

Your engineering team has been building a document summarizer for three months. The spec says: "The summarizer should return accurate summaries." You ship it. Users complain the summaries are wrong half the time. A postmortem reveals no one could define what "accurate" meant in a way that was testable before launch.

This is the standard arc for AI feature development, and it happens because teams apply acceptance criteria patterns built for deterministic software to systems that are fundamentally probabilistic. An LLM-powered summarizer doesn't have a single "correct" output — it has a distribution of outputs, some acceptable and some not. Binary pass/fail specs don't map onto distributions.

The problem isn't just philosophical. It causes real pain: features launch with vague quality bars, regressions go undetected until users notice, and product and engineering can't agree on whether a feature is "done" because nobody specified what "done" means for a stochastic system. This post walks through the patterns that actually work.
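
One pattern that does map onto distributions: sample the feature several times per test case and specify a required pass rate instead of a single pass/fail run. A minimal sketch, where `summarize` and `meets_rubric` are hypothetical stand-ins for the feature and its checkable criterion:

```python
# Distribution-aware acceptance criterion: the spec commits to a pass
# rate over repeated samples, not to one "correct" output.
import random

def summarize(document: str) -> str:
    """Hypothetical non-deterministic feature under test."""
    return random.choice(["good summary", "bad summary"])

def meets_rubric(output: str) -> bool:
    """Hypothetical checkable criterion, e.g. 'covers every section'."""
    return output == "good summary"

SAMPLES_PER_CASE = 20
REQUIRED_PASS_RATE = 0.9  # the quality bar the spec actually commits to

def acceptance_check(document: str) -> bool:
    passes = sum(
        meets_rubric(summarize(document)) for _ in range(SAMPLES_PER_CASE)
    )
    return passes / SAMPLES_PER_CASE >= REQUIRED_PASS_RATE

print(acceptance_check("quarterly report text..."))
```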

What 'Done' Means for AI-Powered Features: Engineering the Perpetual Beta

· 10 min read
Tian Pan
Software Engineer

Shipping a feature in traditional software ends with a merge. The unit tests pass. The integration tests pass. QA signs off. You flip the flag, and unless a bug surfaces in production, you move on. The feature is done. For AI-powered features, that moment doesn't exist — and if you're pretending it does, you're accumulating a stability debt that will eventually show up as a user trust problem.

The reason is straightforward but rarely designed around: deterministic software produces the same output from the same input every time. AI features do not. Not because of a bug, but because the behavior is defined by a model that lives outside your codebase, trained on data that reflects a world that keeps changing, consumed by users whose expectations evolve as they see what's possible.

Annotator Bias in Eval Ground Truth: When Your Labels Are Systematically Steering You Wrong

· 10 min read
Tian Pan
Software Engineer

A team spent six months training a sentiment classifier. Accuracy on the holdout set looked solid. They shipped it. Three months later, an audit revealed the model consistently rated product complaints from non-English-native speakers as more negative than identical complaints from native speakers — even when the text said the same thing. The root cause wasn't the model architecture. It wasn't the training procedure. It was the annotation team: twelve native English speakers in one timezone, none of whom noticed that certain phrasings carried different emotional weight in translated text.

The model had learned the annotators' blind spots, not the actual signal.

This is annotator bias in practice. It doesn't announce itself. It shows up as an eval score you trust, a benchmark rank that looks reasonable, a deployed system that behaves strangely on subgroups you didn't test carefully enough. Ground truth corruption is upstream of everything else in your ML pipeline — and it's the problem most teams discover too late.
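
One cheap early probe, sketched below with illustrative records: compare label rates across subgroups on matched content. A large gap doesn't prove bias, but it tells you where to audit:

```python
# Compare how annotators label matched content across subgroups; a big
# gap is the smoke alarm that warrants a deeper audit. Records are
# placeholders for annotated examples with subgroup metadata attached.
from collections import defaultdict

annotations = [
    {"subgroup": "native_en", "label": "negative"},
    {"subgroup": "non_native_en", "label": "negative"},
    # ... thousands of labeled examples
]

neg, total = defaultdict(int), defaultdict(int)
for a in annotations:
    total[a["subgroup"]] += 1
    neg[a["subgroup"]] += a["label"] == "negative"

for group, n in total.items():
    print(f"{group}: {neg[group] / n:.0%} labeled negative over {n} examples")
```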

Eval Coverage as a Production Metric: Is Your Test Suite Actually Testing What Users Do?

· 9 min read
Tian Pan
Software Engineer

Most AI teams treat a passing eval suite as a signal that their system is working. It isn't — not by itself. A suite that reliably scores 87% is doing exactly one thing: telling you the system passes 87% of the cases the suite happens to contain. If that suite was hand-curated six months ago, built from the examples the team thought of, and never updated against live traffic, it's measuring the wrong thing with increasing confidence.

This is the eval coverage problem. It's not about whether your evaluators are accurate—it's about whether the distribution of queries in your test set matches the distribution of queries your users are actually sending. When those two distributions diverge, you get a result that's far worse than a failing eval: a passing eval sitting on top of a silently degrading product.
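
A rough sketch of that check: bucket both your eval set and a sample of live traffic by intent, then measure how far the two distributions have drifted. The intents and counts below are invented for illustration:

```python
# Compare the intent distribution of the eval set against live traffic.
# Total variation distance: 0 = identical distributions, 1 = disjoint.
def to_distribution(counts: dict[str, int]) -> dict[str, float]:
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

eval_set = to_distribution({"billing": 40, "how_to": 35, "bug_report": 25})
live = to_distribution({"billing": 10, "how_to": 20, "bug_report": 30,
                        "refund_dispute": 40})  # intent the suite never covers

intents = set(eval_set) | set(live)
tvd = 0.5 * sum(abs(eval_set.get(i, 0) - live.get(i, 0)) for i in intents)
print(f"eval/traffic divergence: {tvd:.2f}")
for i in sorted(intents, key=lambda i: live.get(i, 0) - eval_set.get(i, 0),
                reverse=True):
    print(f"  {i}: eval {eval_set.get(i, 0):.0%} vs live {live.get(i, 0):.0%}")
```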

The Jagged Frontier: Why AI Fails at Easy Things and What It Means for Your Product

· 10 min read
Tian Pan
Software Engineer

A common assumption in AI product development goes something like this: if a model can handle a hard task, it can definitely handle an easier one nearby. This assumption is wrong, and it's responsible for a category of production failures that no amount of benchmark reading prepares you for.

The research term for the underlying phenomenon is the "jagged frontier" — AI's capability boundary isn't a smooth line with hard tasks outside and easy tasks inside. It's a ragged, unpredictable shape. AI systems can write production-grade database query optimizers and still miscalculate whether two line segments on a diagram intersect. They can pass PhD-level science exams and fail children's riddles that involve spatial relationships. They can synthesize 50-page documents and then confidently hallucinate a summary of a paragraph they just read.