
The Annotator Calibration Gap: When Human Raters Quietly Stop Agreeing

Tian Pan · Software Engineer · 10 min read

The dashboard says inter-rater agreement is 0.71. The model team is celebrating because the new prompt scored two points higher than the baseline. Nobody notices that six months ago, that same 0.71 was generated by raters who all read the rubric the same way. Today it is generated by three raters who silently disagree on what "helpful" means, and whose disagreements happen to cancel out on the metric. Your evaluation instrument has splintered into a coalition of implicit rubrics, and the number on the dashboard is the weighted average of their fight.
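
To make the "cancel out on the metric" point concrete, here is a minimal sketch with entirely made-up labels and categories. It uses plain percent agreement rather than whatever chance-corrected statistic your dashboard actually reports, and it slices the score by item category, which is where diverging implicit rubrics tend to show up even when the pooled number doesn't move.

```python
# Hypothetical illustration: two snapshots of the same rater pair with the
# same pooled agreement but very different per-category structure.
from collections import defaultdict

def agreement(pairs):
    """Fraction of items where the two raters gave the same label."""
    return sum(a == b for a, b in pairs) / len(pairs)

def agreement_by_category(items):
    """items: list of (category, rater_a_label, rater_b_label)."""
    buckets = defaultdict(list)
    for cat, a, b in items:
        buckets[cat].append((a, b))
    return {cat: round(agreement(pairs), 2) for cat, pairs in buckets.items()}

# Snapshot 1: both raters read the rubric the same way; the disagreement
# they do have is spread evenly across categories.
then = (
    [("factual", "helpful", "helpful")] * 7 + [("factual", "helpful", "unhelpful")] * 3
    + [("refusal", "helpful", "helpful")] * 7 + [("refusal", "helpful", "unhelpful")] * 3
)

# Snapshot 2: same pooled score, but the raters now apply different implicit
# rubrics to refusals while agreeing almost perfectly everywhere else.
now = (
    [("factual", "helpful", "helpful")] * 10
    + [("refusal", "helpful", "helpful")] * 4 + [("refusal", "helpful", "unhelpful")] * 6
)

for name, items in [("then", then), ("now", now)]:
    pooled = agreement([(a, b) for _, a, b in items])
    print(name, f"pooled={pooled:.2f}", agreement_by_category(items))
# then pooled=0.70 {'factual': 0.7, 'refusal': 0.7}
# now pooled=0.70 {'factual': 1.0, 'refusal': 0.4}
```

The pooled number is identical in both snapshots; only the per-category breakdown reveals that the second pool has stopped agreeing about refusals.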

This is the annotator calibration gap. It is the failure mode where a human evaluation pool, stood up to grade the cases LLM judges cannot reliably handle, slowly stops measuring what the team thought it was measuring. The model didn't get worse. The instrument did. And because the metric still produces a single tidy number, nobody notices until a launch goes sideways and a postmortem reveals that "helpful" meant three different things to three different raters for the last two quarters.