
11 posts tagged with "llm-evaluation"


The Acknowledgment-Action Gap: Your Agent's 'Got It' Is Not a Commitment

· 10 min read
Tian Pan
Software Engineer

An agent tells a customer: "Got it — I've submitted your refund request. You should see it in 5–7 business days." The customer closes the chat. No refund was ever submitted. There is no ticket, no API call, no row in the refunds table. Just a paragraph of polite, confident English, followed by a successful session termination.

This is the acknowledgment-action gap, and it is the single most expensive class of bug in production agent systems. The gap exists because the fluent prose that makes instruction-tuned models feel competent is a different output channel than the structured tool calls that actually change the world — and most teams wire their business logic to the wrong one.
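A minimal sketch of what wiring to the right channel looks like, assuming an OpenAI-style chat response object, a hypothetical `submit_refund` tool, and a hypothetical `refunds_db` system of record. The confirmation prose is never consulted:

```python
# Sketch only: "submit_refund" and refunds_db.find_pending_refund are illustrative
# names, and the response shape assumes an OpenAI-style chat completion object.

def refund_was_submitted(response, refunds_db, customer_id: str) -> bool:
    """Trust the action channel, not the acknowledgment channel."""
    message = response.choices[0].message

    # 1. Did the model actually emit the refund tool call?
    tool_calls = message.tool_calls or []
    called_refund = any(tc.function.name == "submit_refund" for tc in tool_calls)

    # 2. Does the system of record show the side effect?
    row_exists = refunds_db.find_pending_refund(customer_id) is not None

    # The "Got it — I've submitted your refund" text plays no role in this decision.
    return called_refund and row_exists
```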

Everyone who ships an agent eventually learns this the hard way. The model produces a polished confirmation that reads like a commitment, the downstream system interprets it as a commitment, and weeks later a support ticket arrives asking where the refund went. The embarrassing part is not that the model lied. The embarrassing part is that the system was designed to trust what it said.

The benchmark leak: how your eval set quietly joins the training corpus

· 11 min read
Tian Pan
Software Engineer

The benchmark you trust most is the one most likely to be lying to you. Public evals are a closed loop: you publish the test, someone scrapes it, the next generation of models trains on the scrape, and the score on your trusted yardstick rises by ten points without anyone touching the underlying capability. The measurement apparatus stays still while the thing it measures shifts under it, and the gap between "the model is better at this benchmark" and "the model is better at this task" widens every quarter. By the time the divergence is loud enough to argue about, the eval has already shipped six leaderboard updates and three product roadmaps that all assumed the number meant something.

This is not a hypothetical failure mode. The non-public pre-RLHF GPT-4 base model has been shown to reproduce the BIG-Bench canary GUID verbatim, and Claude 3.5 Sonnet has done the same, both indicating that supposedly quarantined task data ended up in training. Roughly 40% of HumanEval examples have been identified as contaminated, and removing the contaminated subset from GSM8K drops measured accuracy by about 13 points. SWE-bench Verified now shows a documented 10.6% data leakage rate, and OpenAI stopped publicly reporting scores on it in late 2025 after their internal audit found every major frontier model could reproduce verbatim gold patches for some tasks. The numbers we use to compare models are increasingly numbers about memorization, not capability.
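One cheap defense is to screen a benchmark for contamination before you trust it. A minimal sketch using a long-n-gram overlap heuristic — the 13-gram window and the corpus iterator here are assumptions for illustration, not the methodology of any audit cited above:

```python
# Sketch: flag eval items whose text shares long n-grams with a sample of the
# training corpus. Thresholds and corpus access are assumed, not prescribed.

def ngrams(text: str, n: int = 13) -> set[str]:
    tokens = text.split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def flag_contaminated(eval_items: list[str], corpus_docs, n: int = 13) -> list[str]:
    corpus_grams: set[str] = set()
    for doc in corpus_docs:
        corpus_grams |= ngrams(doc, n)
    # Any shared long n-gram is a strong hint the item leaked into training data.
    return [item for item in eval_items if ngrams(item, n) & corpus_grams]
```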

Eval-Prod Drift: The Agent Under Test Isn't the Agent in Production

· 11 min read
Tian Pan
Software Engineer

The eval suite is green. The dashboard is green. A week later, support is drowning in the same complaint: "the assistant keeps refusing to book the meeting." You open the eval harness, replay the failing trace, and it works. Perfectly. Every time. The bug is not in your eval, and it is not in your model. The bug is that the agent your eval is measuring and the agent your customer is talking to are no longer the same system, and nobody has admitted it yet.

Eval-prod drift is the slow, unattributed divergence between what your eval harness loads into the agent and what your serving stack actually assembles at request time. Prompts, model pins, tool schemas, guardrail configs, and feature flags each flow into the agent through different deployment paths — code merges, config pushes, prompt-registry webhooks, experimentation platforms, runtime rollouts — and almost no team has a single source of truth that reconciles them. So the eval harness ends up measuring the version of the agent that exists in someone's PR branch, while production is running a union of yesterday's hotfix, last week's flag variant, and whatever the tool team pushed without telling anyone.

This is not a theoretical failure mode. It is the default state of any agent system older than three months whose config lives in more than one repository.
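One way to make the divergence visible is to fingerprint the fully assembled agent in both places. A minimal sketch, assuming you can gather the prompt, model pin, tool schemas, and flag values into one serializable bundle (all field names illustrative):

```python
import hashlib
import json

def agent_fingerprint(prompt: str, model: str, tool_schemas: list, flags: dict) -> str:
    """Hash everything that defines the agent's behavior into one comparable value."""
    blob = json.dumps(
        {"prompt": prompt, "model": model, "tools": tool_schemas, "flags": flags},
        sort_keys=True,
    )
    return hashlib.sha256(blob.encode()).hexdigest()

# Log this fingerprint on every production request and every eval run.
# A green eval only counts when the two fingerprints match.
```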

LLM-as-Judge Drift: When Your Evaluator Upgrades and All Your Numbers Move

· 11 min read
Tian Pan
Software Engineer

A regression suite that flips green-to-red without a single prompt change is usually one of three things: a broken test harness, a flaky retrieval store, or a judge that learned new taste over the weekend. The third one is the most common and the least debugged, because no commit in your repo caused it. The scoring model got a silent quality refresh, and every score you compare against last month's dashboard is now denominated in a different currency.

This is the uncomfortable part of LLM-as-judge: you have two moving models, not one. The candidate model is the thing you ship; the judge model is the thing that tells you how the candidate is doing. When both evolve independently, score deltas stop meaning what they used to, and the dashboard that your PM refreshes every morning quietly lies.
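A minimal sketch of a judge-drift canary: freeze a small anchor set of candidate outputs and the scores they received under the old judge, then re-score with the new judge before comparing anything else. `score_with_judge` stands in for whatever judge call you actually make:

```python
# Sketch: detect a silent judge refresh by re-scoring a frozen anchor set.
# The anchor set never changes, so any score movement is attributable to the judge.

def judge_drift(anchor_set: list, old_scores: list[float], score_with_judge):
    new_scores = [score_with_judge(example) for example in anchor_set]
    deltas = [new - old for new, old in zip(new_scores, old_scores)]
    mean_shift = sum(deltas) / len(deltas)
    # Alert if |mean_shift| exceeds your tolerance; recalibrate dashboards before
    # trusting any candidate-model comparison denominated in the new judge's scores.
    return mean_shift, new_scores
```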

Plan-and-Execute Is Marketing, Not Contract: Plan Adherence as a First-Class SLI

· 9 min read
Tian Pan
Software Engineer

The agent printed a five-step plan. Step three said "fetch the user's billing history from the invoices service." The trace shows step three actually called the orders service, joined a stale customer table, and produced a number that looked right. The output passed the eval. The post-mortem found the regression six weeks later, when finance noticed the dashboard had quietly diverged from source-of-truth by 4%.

Nobody wrote a bug. The planner wrote a contract the executor never signed.

This is the failure mode that plan-and-execute architectures bury under their own elegance. The pattern was sold as a way to give agents long-horizon coherence: a strong model drafts a plan, weaker models execute steps, the plan acts as a scaffold. In practice the plan is a marketing artifact — a plausible-looking story emitted at t=0, then promptly invalidated by every interesting thing that happens at t>0. The trace shows the plan. The trace shows the actions. Almost nobody is measuring the distance between them.
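Measuring that distance does not take much. A minimal sketch of plan adherence as a per-trace number, assuming both the plan and the trace can be reduced to (tool, target) pairs — real traces need fuzzier matching than exact equality:

```python
# Sketch: fraction of planned steps whose (tool, target) pair actually shows up
# in the executed trace. Field names are illustrative.

def plan_adherence(planned_steps: list[dict], executed_calls: list[dict]) -> float:
    executed = {(call["tool"], call["target"]) for call in executed_calls}
    if not planned_steps:
        return 1.0
    matched = sum(
        1 for step in planned_steps if (step["tool"], step["target"]) in executed
    )
    return matched / len(planned_steps)

# The billing-history example above scores step three as a miss:
# planned ("invoices_service", "billing_history") vs. an executed orders-service call.
```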

Annotation-Free Evaluation: Measuring LLM Quality Before You Have Ground Truth

· 12 min read
Tian Pan
Software Engineer

Most teams ship an LLM feature, then spend weeks arguing about whether it's actually good. The evaluation question gets deferred because building a labeled dataset feels like a separate project. By the time you have ground truth, you've also accumulated two months of silent regressions you can never diagnose. This is backwards. You can get a meaningful quality signal in week one — before a single annotation is complete — if you know which techniques to reach for and where each one breaks.

This post is a field guide to annotation-free evaluation: the reference-free methods that work, the conditions they require, and the specific failure modes that will fool you if you're not careful.
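As a taste of the genre, here is a minimal sketch of one reference-free signal, self-consistency: sample the same prompt several times and measure agreement among the normalized answers. It is a proxy that works for short extractive answers and breaks down for open-ended generation:

```python
from collections import Counter

def self_consistency(sampled_answers: list[str]) -> float:
    """Fraction of samples that agree with the modal answer. No labels required."""
    normalized = [a.strip().lower() for a in sampled_answers]
    if not normalized:
        return 0.0
    counts = Counter(normalized)
    return counts.most_common(1)[0][1] / len(normalized)
```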

Who Owns AI Quality? The Cross-Functional Vacuum That Breaks Production Systems

· 10 min read
Tian Pan
Software Engineer

When Air Canada's support chatbot promised customers a discount fare for recently bereaved travelers, the policy it described didn't exist. A court later ordered Air Canada to honor the hallucinated refund anyway. When a Chevrolet dealership chatbot negotiated away a 2024 Tahoe for $1, no mechanism stopped it. In both cases, the immediate question was about model quality. The real question — the one that matters operationally — was simpler: who was supposed to catch that?

The answer, in most organizations, is nobody specific. AI quality sits at the intersection of ML engineering, product management, data teams, and operations. Each function has a partial view. None claims full ownership. The result is a vacuum where things that should be caught aren't, and when something breaks, the postmortem produces a list of teams that each assumed someone else was responsible.

The Debug Tax: Why Debugging AI Systems Takes 10x Longer Than Building Them

· 10 min read
Tian Pan
Software Engineer

Building an LLM feature takes days. Debugging it in production takes weeks. This asymmetry — the debug tax — is the defining cost structure of AI engineering in 2026, and most teams don't account for it until they're already drowning.

A 2025 METR study found that experienced developers using LLM-assisted coding tools were actually 19% less productive, even as they perceived a 20% speedup. The gap between perceived and actual productivity is a microcosm of the larger problem: AI systems feel fast to build because the hard part — debugging probabilistic behavior in production — hasn't started yet.

The debug tax isn't a skill issue. It's a structural property of systems built on probabilistic inference. Traditional software fails with stack traces, error codes, and deterministic reproduction paths. LLM-based systems fail with plausible but wrong answers, intermittent quality degradation, and failures that can't be reproduced because the same input produces different outputs on consecutive runs. Debugging these systems requires fundamentally different methodology, tooling, and mental models.

The Semantic Failure Mode: When Your AI Runs Perfectly and Does the Wrong Thing

· 9 min read
Tian Pan
Software Engineer

Your AI agent completes the task. No errors in the logs. Latency looks normal. The output is well-formatted JSON, grammatically perfect prose, or a valid SQL query that executes without complaint. Every dashboard is green.

And the user stares at the result, sighs, and starts over from scratch.

This is the semantic failure mode — the class of production AI failures where the system runs correctly, the model responds confidently, and the output is delivered on time, but the agent didn't do what the user actually needed. Traditional error monitoring is completely blind to these failures because there is no error. The HTTP status is 200. The model didn't refuse. The output conforms to the schema. By every technical metric, the system succeeded.
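Catching this class requires checks that test the output against the request rather than against the pipeline. A minimal sketch with purely illustrative fields — the point is the shape of the check, not these particular assertions:

```python
# Sketch: semantic assertions layered on top of schema validation. A 200 status
# and a schema-valid payload pass trivially; these checks compare the output to
# what the user actually asked for. Field names are illustrative.

def semantic_checks(user_request: dict, agent_output: dict) -> list[str]:
    failures = []
    if user_request["date_range"] != agent_output.get("date_range"):
        failures.append("output covers a different date range than requested")
    if user_request["entity_id"] != agent_output.get("entity_id"):
        failures.append("output is about the wrong entity")
    return failures  # empty list == semantically plausible, not just delivered
```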

LLM Evals: What Actually Works and What Wastes Your Time

· 10 min read
Tian Pan
Software Engineer

Most teams building LLM applications fall into one of two failure modes. The first is building no evals at all and shipping features on vibes. The second is building elaborate evaluation infrastructure before they understand what they're actually trying to measure. Both are expensive mistakes.

The teams that do evals well share a common approach: they start by looking at data, not by building systems. Error analysis comes before evaluation automation. Human judgment grounds the metrics before any automated judge is trusted. And they treat evaluation not as a milestone to cross but as a continuous discipline that evolves alongside the product.

This is what evals actually look like in practice — the decisions that matter, the patterns that waste effort, and the tradeoffs that aren't obvious until you've been burned.
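The error-analysis-first workflow can be embarrassingly simple. A minimal sketch, assuming reviewed traces carry free-form failure tags assigned by a human reviewer (field names illustrative):

```python
from collections import Counter

def failure_mode_counts(reviewed_traces: list[dict]) -> Counter:
    """Tally the failure tags humans assigned, e.g. ['wrong_tool', 'stale_data']."""
    tags: Counter = Counter()
    for trace in reviewed_traces:
        tags.update(trace.get("failure_tags", []))
    # The top few tags tell you which automated evals are worth building at all.
    return tags
```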

Why Your LLM Evaluators Are Miscalibrated — and the Data-First Fix

· 9 min read
Tian Pan
Software Engineer

Most teams build their LLM evaluators in the wrong order. They write criteria, then look at data. That inversion is the root cause of miscalibrated evals, and it's almost universal in teams shipping their first AI product. The criteria sound reasonable on paper — "the response should be accurate, helpful, and concise" — but when you apply them to real model outputs, you discover the rubric doesn't match what you actually care about. You end up with an evaluator that grades things you're not measuring and misses failures that matter.

The fix isn't a better rubric. It's a different workflow: look at the data first, define criteria second, and then validate your evaluator against human judgment before trusting it to run unsupervised.
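A minimal sketch of that validation step: score a calibration set with both the judge and a human, then look at raw agreement and the direction of the disagreements before letting the judge run unsupervised:

```python
# Sketch: compare binary judge verdicts to human labels on the same outputs.
# Assumes a non-empty calibration set; pass/fail semantics are illustrative.

def judge_vs_human(judge_labels: list[bool], human_labels: list[bool]) -> dict:
    agree = sum(j == h for j, h in zip(judge_labels, human_labels))
    false_pass = sum(j and not h for j, h in zip(judge_labels, human_labels))
    false_fail = sum(h and not j for j, h in zip(judge_labels, human_labels))
    return {
        "agreement": agree / len(judge_labels),
        "judge_passes_human_fails": false_pass,  # the dangerous direction
        "judge_fails_human_passes": false_fail,
    }
```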