
553 posts tagged with "ai-engineering"


Cross-Team Agent SLAs Don't Compose: The 99% Math Your Org Forgot to Budget

· 11 min read
Tian Pan
Software Engineer

Team A's agent advertises a 99% success rate. Team B's agent advertises 99%. The new joint workflow that calls both lands at 98% on a good day, 96% on a bad one — and the team that owns the joint workflow is now the de facto SRE for two systems they don't own, can't reproduce locally, and didn't write the eval set for. Each upstream team is hitting its SLO. The composite product is missing its SLO. Nobody's pager is ringing on the right side of the boundary.

This is the math of independent failure rates, and it has been hiding in plain sight ever since the org started letting agents call each other. Five components at 99% reliability give you 95% end-to-end. Ten components give you 90%. A 20-step process at 95% per-step succeeds 36% of the time — nearly two-thirds of operations fail before completion. By the time a workflow chains 50 components — not unusual once an enterprise agent starts calling sub-agents that call tool agents — a system where every individual piece is "99% reliable" will fail roughly four out of ten requests.
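The arithmetic behind those numbers is a one-liner: the composite success rate of a chain of independent steps is the per-step rate raised to the number of steps.

```python
# Composite success of a chain of independent steps: p ** n
# for per-step success rate p and step count n.
def composite_success(p: float, n: int) -> float:
    return p ** n

for p, n in [(0.99, 5), (0.99, 10), (0.95, 20), (0.99, 50)]:
    print(f"{n} steps at {p:.0%} per step -> {composite_success(p, n):.1%} end-to-end")
# 5 steps at 99% per step -> 95.1% end-to-end
# 10 steps at 99% per step -> 90.4% end-to-end
# 20 steps at 95% per step -> 35.8% end-to-end
# 50 steps at 99% per step -> 60.5% end-to-end
```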

Researchers analyzing five popular multi-agent frameworks across more than 150 tasks identified failure rates between 41% and 87%, with the top three failures being step repetition, reasoning–action mismatch, and unawareness of termination conditions — and unstructured multi-agent networks have been observed to amplify errors up to 17× compared to single-agent baselines. The math isn't subtle. The problem is that the org's SLO sheets, dashboards, on-call rotations, and PRDs are still scoped one agent at a time.

The Eval Bottleneck: Your Eval Engineer Is Now the Roadmap

· 11 min read
Tian Pan
Software Engineer

The constraint on your AI roadmap isn't GPU capacity, model availability, or prompt-engineering taste. It's the calendar of one or two engineers who actually know how to build an eval that catches a regression. Every PM with a feature is in their queue. Every model upgrade is in their queue. Every cohort drift, every prompt revision, every "is this judge still calibrated" question lands in the same inbox. And the engineer in question said "no, this isn't ready" three times this quarter, got overruled twice, watched the regression compound in production, and is now updating their LinkedIn.

This is the eval bottleneck, and most orgs don't see it until it bites. Through 2025 the visible scaling story was AI engineers — hire AI engineers, ship AI features, iterate on prompts, swap models. By Q1 2026 the throughput problem moved one layer down. The team that doubled its AI headcount discovered that adding more feature engineers didn't make features ship faster, because every feature still needed an eval, and the eval engineer was the same person.

Eval Differential as Branch Protection: Ship Score Diffs, Not Score Floors

· 10 min read
Tian Pan
Software Engineer

A team I worked with had a clean-looking eval gate: every prompt PR had to score above 0.85 on the golden set or the merge button stayed grey. They were proud of it. Six weeks in, average quality had quietly drifted from 0.93 to 0.87 — every PR cleared the bar, every PR landed, and no individual change owned the regression because none of them broke the rule. The bar was set against a snapshot of last quarter's quality, not against last week's.

That's the failure mode of an absolute-threshold eval gate: a PR that drops the score from 0.92 to 0.86 ships green, while a PR that lifts the score from 0.80 to 0.84 fails the same gate. The team learns "ship if it clears the bar" — a quality story. The signal you actually want is "ship if this change is non-regressive on the slices that matter" — a regression-detector story.

Coverage tools figured this out a decade ago. They report the diff against the parent commit and they break it down per file. Eval gates haven't caught up.
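As a sketch of what the diff-style gate looks like (the slice names, the `MARGIN` constant, and the `gate` helper are illustrative, not any particular CI plugin): score the PR and its parent commit on the same eval set, break the comparison down per slice, and block only on regressions beyond a noise margin.

```python
# Hypothetical diff-based eval gate: compare this PR's eval scores against the
# parent commit's scores, per slice, and fail only on regressions beyond a margin.
MARGIN = 0.01  # tolerance for run-to-run noise, tuned per eval

def gate(pr_scores: dict[str, float], parent_scores: dict[str, float]) -> bool:
    regressions = {
        slice_name: (parent_scores[slice_name], score)
        for slice_name, score in pr_scores.items()
        if score < parent_scores.get(slice_name, 0.0) - MARGIN
    }
    for slice_name, (before, after) in regressions.items():
        print(f"REGRESSION {slice_name}: {before:.3f} -> {after:.3f}")
    return not regressions  # merge allowed only if no slice regressed

# A 0.92 -> 0.86 change fails even though 0.86 clears an absolute 0.85 floor;
# a 0.80 -> 0.84 change passes even though it would have failed that floor.
assert not gate({"refunds": 0.86}, {"refunds": 0.92})
assert gate({"refunds": 0.84}, {"refunds": 0.80})
```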

The Eval-Set Poison Pill: When Your Benchmark Becomes a Backdoor

· 10 min read
Tian Pan
Software Engineer

A team I know spent six months chasing a regression that wasn't there. Every release passed the eval. Every release shipped. Every quarter, NPS on the AI-served cohort drifted down a point. Eventually, an intern doing a routine audit of the gold dataset noticed that one labeler — long since rotated off the contract — had graded 11% of the items, and that those items were systematically more lenient on a specific failure mode the team had been racing to fix. The eval said the model was getting better. The model was not getting better. The eval had been quietly tilted by one human's calibration drift, and nobody had been watching the labelers because nobody believed the labelers were a threat surface.

This is the eval-set poison pill. Most teams treat their eval set as a trusted artifact: the labels were graded by humans, the data came from production, and the regression dashboard is the one thing the org agrees to defer to when shipping. But the labeling pipeline is a human supply chain, and human supply chains are gameable. Treating an eval as ground truth without applying supply-chain hygiene to its inputs is trusting a number whose provenance you cannot defend.
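Supply-chain hygiene here can start small. A sketch of what that audit could look like in code, with made-up field names: group gold-set items by labeler and failure-mode tag, and flag any labeler whose pass rate diverges sharply from the rest of the pool on that tag.

```python
# Sketch of a labeler-calibration audit (field names are illustrative): for each
# labeler and failure-mode tag, compare their pass rate against the whole pool.
from collections import defaultdict

def labeler_skew(items: list[dict]) -> list[tuple[str, str, float, float]]:
    by_key = defaultdict(list)   # (labeler, tag) -> [passed flags]
    by_tag = defaultdict(list)   # tag -> [passed flags] across all labelers
    for it in items:             # it = {"labeler": "L07", "tag": "refund_policy", "passed": True}
        by_key[(it["labeler"], it["tag"])].append(it["passed"])
        by_tag[it["tag"]].append(it["passed"])
    flagged = []
    for (labeler, tag), flags in by_key.items():
        rate = sum(flags) / len(flags)
        pool = sum(by_tag[tag]) / len(by_tag[tag])
        if abs(rate - pool) > 0.10:   # flag >10-point gaps for human review
            flagged.append((labeler, tag, rate, pool))
    return flagged
```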

Your Gold Eval Set Has Drifted and Its Pass Rate Is the Reason You Can't See It

· 12 min read
Tian Pan
Software Engineer

The gold eval set passes at 94%. The model has been bumped twice this quarter, the prompt has been edited eleven times, the tool catalog has grown by four, and the dashboard is still green. Then a sales engineer forwards a transcript where the agent confidently routes a customer to a workflow that was sunset two months ago, and the head of support quietly opens a thread asking why the satisfaction scores have been sliding for six weeks while the eval pipeline reports no regressions. The gold set isn't lying. It's measuring last quarter's product against this quarter's traffic, and nobody asked it to do anything else.

This is the failure mode evaluation systems make hardest to see, because the instrument that's supposed to detect quality regressions is itself the source of the false reassurance. Pass rate is computed against the items in the set; the items in the set were curated against a snapshot of usage; usage moved on; the rate stayed clean. The team trusts the green dashboard, ships another model upgrade, and discovers months later that the eval set has been measuring a different distribution than production traffic for longer than anyone wants to admit.

The fix is not to refresh the gold set more often. Refresh cadence is the wrong knob; the right knob is a second instrument calibrated to a different time window, so that disagreement between the two instruments surfaces drift before users do. That second instrument is the shadow eval — a parallel set rebuilt continuously from current production traffic, run alongside the gold set, with the explicit job of disagreeing with it.
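A minimal sketch of the disagreement check, assuming both sets are scored on every release (the threshold is a placeholder): alarm on the gap between the two pass rates, not on either absolute number.

```python
# Sketch of the gold-vs-shadow disagreement check (threshold is illustrative).
# The shadow set is rebuilt from recent production traffic; the gold set is static.
def drift_alarm(gold_pass_rate: float, shadow_pass_rate: float,
                max_gap: float = 0.05) -> str | None:
    gap = gold_pass_rate - shadow_pass_rate
    if gap > max_gap:
        # Gold looks fine but current traffic does not: the gold set is stale,
        # or the product has drifted out from under it.
        return f"drift: gold {gold_pass_rate:.0%} vs shadow {shadow_pass_rate:.0%}"
    return None

print(drift_alarm(0.94, 0.81))  # "drift: gold 94% vs shadow 81%"
```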

The Human Attention Budget Is the Constraint Your HITL System Silently Overspends

· 10 min read
Tian Pan
Software Engineer

The 50th decision your reviewer makes this morning is not the same quality as the first. The architecture diagram does not show this. The capacity model does not show this. The dashboard tracking "approvals per hour" actively hides it. And yet the entire premise of your human-in-the-loop system — that a person catches what the model gets wrong — is silently degrading from the moment the queue begins to fill.

Most HITL designs treat reviewer time as an infinite, fungible resource. The team sets a confidence threshold, routes everything below it to a human queue, and declares the system "safe." Six weeks later, the approval rate has crept up to 96%, the queue is twice as deep as the staffing model assumed, and a sample audit shows that reviewers are clicking "approve" on edge cases they would have flagged on day one. The system has not failed. It has rubber-stamped its way into looking like it is working.
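One way to make that degradation visible is to stop averaging approvals per hour and instead bucket them by how many decisions the reviewer has already made in the shift. A sketch with hypothetical field names:

```python
# Sketch: approval rate bucketed by the reviewer's decision count within a shift.
# A flat "approvals per hour" average hides this curve entirely.
from collections import defaultdict

def approval_by_position(decisions: list[dict], bucket: int = 10) -> dict[int, float]:
    buckets = defaultdict(list)
    for d in decisions:                      # d = {"position": 53, "approved": True}
        buckets[d["position"] // bucket].append(d["approved"])
    return {b * bucket: sum(v) / len(v) for b, v in sorted(buckets.items())}

# If the rate climbs from ~0.80 in the first bucket to ~0.96 after decision 50,
# the queue is overspending the attention budget, not receiving cleaner traffic.
```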

The Idle Agent Tax: What Your AI Session Costs While the User Is in a Meeting

· 11 min read
Tian Pan
Software Engineer

A developer opens their IDE copilot at 9:00, asks it three questions before standup, and then sits in meetings until 11:30. The chat panel is still open. The conversation is still scrollable. The model hasn't generated a token in two and a half hours. And yet that session — sitting there, attended by nobody — has been quietly accruing cost the entire morning. KV cache pinned. Prompt cache being kept warm by a periodic ping. Conversation state held in a hot store. Trace pipeline writing one row per heartbeat. Concurrency slot reserved on the model provider. Multiply by ten thousand seats and the bill is real.

This is the idle agent tax. It is the part of your inference budget that pays for capacity your users are not using, and it is invisible to most engineering dashboards because the dashboards were built for stateless APIs. A request comes in, a response goes out, the box closes. Done. Agentic products broke that model two years ago and most teams have not yet repriced their architecture around it.
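A back-of-the-envelope version of the tax, using the cost components above; every rate below is a placeholder to swap for your own numbers, not a real price sheet.

```python
# Back-of-the-envelope idle-session cost: components that accrue while the user
# generates zero tokens. All rates are illustrative placeholders.
IDLE_RATES_PER_HOUR = {
    "kv_cache_gb_pinned": 2.0 * 0.05,        # GB pinned * $/GB-hour
    "prompt_cache_keepalive": 12 * 0.002,    # pings/hour * $/ping
    "hot_session_store": 0.01,               # $/session-hour
    "trace_heartbeats": 60 * 0.0001,         # rows/hour * $/row
    "reserved_concurrency_slot": 0.08,       # amortized $/slot-hour
}

idle_hours, seats = 2.5, 10_000
per_seat = sum(IDLE_RATES_PER_HOUR.values()) * idle_hours
print(f"per seat: ${per_seat:.2f}, fleet: ${per_seat * seats:,.0f} for one idle morning")
# per seat: $0.55, fleet: $5,500 for one idle morning
```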

Your Inference Chargeback Is Quietly Taxing Eval Discipline

· 12 min read
Tian Pan
Software Engineer

The FinOps team rolled out chargeback for AI a year ago. The dashboard is gorgeous. Every feature team can see, to the cent, what their inference bill was last month, and the platform PM has slides showing line-of-business attribution at the SKU level. The org has more AI features than it had a year ago. It also has worse AI quality. Nobody has connected the two facts yet, but they are the same fact.

Here is the failure mode in one sentence: chargeback prices the inference token and silently fails to price the eval token, so every PM on the org chart faces an incentive structure that rewards model upgrades and punishes evaluation discipline. Twelve months later, eval coverage is shrinking while the bill is growing — the precise opposite of what the FinOps initiative thought it was incentivizing. This is not a bug in the dashboard. It is the chargeback model functioning exactly as designed, in a domain where the design assumptions from cloud-cost FinOps no longer hold.

Inference Cost Forecasting: The Capacity Plan Your Finance Team Wants and You Can't Write

· 12 min read
Tian Pan
Software Engineer

Your finance team will ask for a capacity plan you cannot write. Not because you're inexperienced or because the model is new, but because the two assumptions classical capacity planning rests on — a workload distribution you can measure, and a unit cost stable on a quarter timescale — are both violated by AI workloads. The number you hand them will be wrong on day one, and when the variance hits, the conversation that follows will not be about the bill.

The 2026 State of FinOps report named AI as the fastest-growing new spend category, with a majority of respondents reporting that AI costs exceeded original budget projections — for many enterprises, inference now consumes the bulk of the AI bill. The instinct to manage this with a SaaS-style capacity plan — pick a peak QPS, multiply by a unit cost, add 30% buffer — produces a number with the texture of a forecast and the predictive power of a horoscope. The capacity plan you actually need looks more like a FinOps scenario model than a procurement spreadsheet, and the engineering work to produce it is platform work that competes with feature work until the day finance loses patience.
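What "scenario model" means in practice can be as small as a Monte Carlo over the unstable inputs: request volume, tokens per request, and a unit price that can move under you. The distributions below are made-up placeholders, not a forecast.

```python
# Sketch of a scenario-style inference forecast: sample token volume and unit price
# instead of multiplying point estimates. Every distribution below is a placeholder.
import random

def monthly_cost_sample() -> float:
    requests = random.lognormvariate(16.1, 0.4)            # median ~10M requests, heavy tail
    tokens_per_request = random.lognormvariate(7.6, 0.6)   # median ~2k tokens, heavy tail
    price_per_mtok = random.choice([3.0, 3.0, 3.0, 1.5, 6.0])  # price cuts and forced upgrades
    return requests * tokens_per_request / 1e6 * price_per_mtok

samples = sorted(monthly_cost_sample() for _ in range(10_000))
p50, p95 = samples[5_000], samples[9_500]
print(f"p50 ${p50:,.0f}   p95 ${p95:,.0f}   spread {p95 / p50:.1f}x")
```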

The LLM-Judge Ceiling: Why Your Auto-Eval Stops Correlating With Users at the Score That Matters

· 10 min read
Tian Pan
Software Engineer

LLM-as-judge is the productivity unlock that let evaluation coverage scale 10x without growing the human grading team. The problem is that the unlock is not uniform across the score range. The judge's agreement with humans is highest in the muddy middle of the distribution — the answers nobody is going to escalate either way — and collapses on the long tail of high-stakes outputs that actually decide whether a feature ships, gets rolled back, or pages someone at 2am. The dashboard stays green because its aggregate is dominated by the score range that nobody is ever happy with.

That is the LLM-judge ceiling: a measurement instrument with a non-uniform error profile that the team is reading as a single number. Aggregate agreement of 80% with humans is the headline most vendors put on the page; it is also the number that gets the team to trust the judge most where the judge is least informative.
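The antidote to reading a non-uniform instrument as one number is to report agreement per score bucket. A sketch, with hypothetical field names:

```python
# Sketch: judge-vs-human agreement per judge-score bucket instead of one aggregate.
from collections import defaultdict

def agreement_by_bucket(rows: list[dict], width: float = 0.2) -> dict[float, float]:
    buckets = defaultdict(list)
    for r in rows:  # r = {"judge_score": 0.91, "judge_pass": True, "human_pass": True}
        b = round(int(r["judge_score"] / width) * width, 2)
        buckets[b].append(r["judge_pass"] == r["human_pass"])
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# An 80% aggregate can hide 95% agreement in the middle buckets and 55% at the top,
# which is exactly the range where ship/rollback decisions get made.
```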

Your APM Is Quietly Dropping LLM Telemetry, and the Bug Lives in the Gap

· 11 min read
Tian Pan
Software Engineer

There is a broken prompt in your system right now that affects roughly three percent of traffic, and your dashboards do not know it exists. The p99 latency chart is green. The error rate is flat. The model-call success metric is at four nines. The only place the failure shows up is in a customer support ticket the platform team cannot reproduce, and by the time the ticket reaches a debugging session, the trace has been sampled away.

This is not a monitoring gap. It is a category mistake. The APM you are running was designed for a world in which dimensions are bounded sets — endpoint, status_code, region, service — and the cost of an additional label is at most a few new time series. LLM workloads do not fit that shape at all. The interesting dimensions are the user's prompt, the retrieved context IDs, the tool-call sequence, the model revision, the prompt template version, the tenant, the locale, the eval bucket the request fell into. Every one of those is high-cardinality, and tagging metrics with any of them is enough to detonate the store.
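The detonation is just multiplication: the number of time series a metric can generate is the product of its label cardinalities. A sketch with illustrative, made-up cardinalities:

```python
# Why high-cardinality labels detonate a metrics store: the potential series count
# is the product of the label cardinalities. All counts below are illustrative.
from math import prod

label_cardinality = {
    "prompt_template_version": 40,
    "model_revision": 12,
    "tool_call_sequence": 5_000,   # effectively unbounded in practice
    "tenant": 2_000,
    "locale": 30,
}
print(f"{prod(label_cardinality.values()):,} potential time series per metric")
# 144,000,000,000 potential series: fine as trace attributes, fatal as metric labels.
```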

The Model-Preference Fork: Why Your Prompt Library Has Three Versions and No One Is Tracking the Drift

· 11 min read
Tian Pan
Software Engineer

Open the prompt library of any team that has been shipping LLM features for more than a year and you will find the same thing: three slightly different versions of every prompt. One was tuned by the engineer who likes Sonnet for its instruction-following. One was rewritten by the engineer who switched to Haiku for the latency budget. One belongs to the prototype that only ever worked on Opus and never got migrated. Each version has a slightly different system message, a different way of describing the tool catalog, a different formatting nudge — and nobody is tracking how they drift.

This is not a hygiene problem. It is a coordination tax that compounds at every model upgrade, and it is silently breaking the relationship between your eval suite and your production traffic. The library is supposed to be a shared resource. In practice, every feature ships with whichever variant the author last tested, the eval suite runs against the variant the eval-author preferred, and the routing layer chooses among them based on cost rather than on which variant was actually validated against the live eval.

The team that doesn't notice is the team that's already paying.
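One lightweight way to make the fork explicit, sketched here with illustrative names rather than a real library: a registry keyed by (prompt_id, model) that records which eval run validated each variant, so the routing layer can refuse a pairing that was never validated.

```python
# Sketch of an explicit prompt registry: each variant is keyed by (prompt_id, model)
# and carries the eval run that validated it. Names and fields are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVariant:
    prompt_id: str
    model: str
    template: str
    validated_eval_run: str | None  # None means "never run against the live eval"

REGISTRY: dict[tuple[str, str], PromptVariant] = {}

def resolve(prompt_id: str, model: str) -> PromptVariant:
    variant = REGISTRY[(prompt_id, model)]
    if variant.validated_eval_run is None:
        raise RuntimeError(f"{prompt_id} has no validated variant for {model}")
    return variant
```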