137 posts tagged with "reliability"

The Composition Testing Gap: Why Your Agents Pass Every Test but Fail Together

· 9 min read
Tian Pan
Software Engineer

Your planner agent passes its eval suite at 94%. Your researcher agent scores even higher. Your synthesizer agent nails every benchmark you throw at it. You compose them into a pipeline, deploy to production, and watch it produce confidently wrong answers that no individual agent would ever generate on its own.

This is the composition testing gap — the systematic blind spot where individually validated agents fail in ways that no single-agent analysis can predict. Research on multi-agent LLM systems shows that 67% of production failures stem from inter-agent interactions rather than individual agent defects. You're testing the atoms but shipping the molecule, and molecular behavior is not the sum of atomic properties.

The LLM Request Lifecycle Is a State Machine — Treat It Like One

· 9 min read
Tian Pan
Software Engineer

Most teams treat LLM request handling as a linear function: call the API, check for an exception, maybe retry once, return the result. In practice it's nothing like that. Between the moment a user triggers an LLM call and the moment a response reaches their screen, a request can traverse a dozen implicit states — attempting primary provider, waiting for backoff, switching to fallback, validating output, retrying with refined prompt — without any of those transitions being recorded or visible.

The result is debugging that happens after the fact from logs scattered across services, with no authoritative answer to "what did this request actually do?" Treating the LLM request lifecycle as an explicit finite state machine is the architectural move that makes that question answerable without archaeological work.
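The shape of that state machine might look something like the following sketch. The state names and transition map are illustrative, not a prescribed taxonomy; the point is that every transition is recorded and illegal transitions fail loudly instead of silently drifting.

```python
import time
from dataclasses import dataclass, field
from enum import Enum, auto

class RequestState(Enum):
    ATTEMPTING_PRIMARY = auto()
    BACKING_OFF = auto()
    ATTEMPTING_FALLBACK = auto()
    VALIDATING_OUTPUT = auto()
    RETRYING_REFINED = auto()
    SUCCEEDED = auto()
    FAILED = auto()

# Legal transitions; anything outside this map is a bug, not just a log line.
TRANSITIONS = {
    RequestState.ATTEMPTING_PRIMARY: {
        RequestState.BACKING_OFF, RequestState.ATTEMPTING_FALLBACK,
        RequestState.VALIDATING_OUTPUT, RequestState.FAILED,
    },
    RequestState.BACKING_OFF: {
        RequestState.ATTEMPTING_PRIMARY, RequestState.ATTEMPTING_FALLBACK,
    },
    RequestState.ATTEMPTING_FALLBACK: {
        RequestState.VALIDATING_OUTPUT, RequestState.FAILED,
    },
    RequestState.VALIDATING_OUTPUT: {
        RequestState.SUCCEEDED, RequestState.RETRYING_REFINED, RequestState.FAILED,
    },
    RequestState.RETRYING_REFINED: {
        RequestState.VALIDATING_OUTPUT, RequestState.FAILED,
    },
}

@dataclass
class RequestLifecycle:
    state: RequestState = RequestState.ATTEMPTING_PRIMARY
    history: list = field(default_factory=list)

    def transition(self, new: RequestState, reason: str = "") -> None:
        if new not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition: {self.state.name} -> {new.name}")
        # Every transition is recorded; "what did this request do?" becomes a lookup.
        self.history.append((time.time(), self.state.name, new.name, reason))
        self.state = new
```

With this in place, the `history` list is the authoritative answer to "what did this request actually do?" — no log archaeology required.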

The LLM Request Lifecycle Your try/catch Is Missing

· 10 min read
Tian Pan
Software Engineer

The most dangerous failure your LLM stack can produce returns HTTP 200. The JSON parses. Your schema validation passes. No exception is raised. And the response is completely wrong — wrong facts, wrong structure, truncated mid-sentence, or fabricated from whole cloth.

A single try/catch around an LLM API call handles the easy failures: rate limits, server errors, network timeouts. These are the visible failures. The invisible ones — a model that hit its token limit and stopped mid-answer, an agent that looped 21 extra tool calls before finding the right parameter name, a validation retry that inflated your costs by 37% — produce no exceptions. They produce results.

The fix is not better error handling. It is modeling the LLM request lifecycle as an explicit state machine, where every state transition emits an observable span, and failure modes are first-class states rather than buried exception handlers.
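A minimal first step is classifying "successful" responses into explicit outcome states instead of trusting the HTTP 200. The sketch below assumes an OpenAI-style response shape (`choices[0].finish_reason`, `choices[0].message.content`); the field paths would need adjusting for other providers.

```python
def classify_completion(resp: dict) -> str:
    """Classify an HTTP-200 LLM response into an explicit outcome state.

    Assumes an OpenAI-style response shape; adapt the field paths
    for other providers.
    """
    choice = resp["choices"][0]
    finish = choice.get("finish_reason")
    if finish == "length":
        return "TRUNCATED"       # hit the token limit and stopped mid-answer
    if finish == "content_filter":
        return "FILTERED"
    text = (choice.get("message") or {}).get("content") or ""
    if not text.strip():
        return "EMPTY"
    return "COMPLETED"
```

A truncated answer and a complete one both return 200 and both parse; only the `finish_reason` check tells them apart, which is exactly why truncation is one of the invisible failures.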

Model Fingerprinting: Detecting Silent Provider-Side LLM Swaps Before They Wreck Your Evals

· 10 min read
Tian Pan
Software Engineer

In April 2025, OpenAI pushed an update to GPT-4o without any API changelog entry, developer notification, or public announcement. Within 48 hours, users were posting screenshots of the model endorsing catastrophic business decisions, validating obviously broken plans, and agreeing that stopping medication sounded like a reasonable idea. The model had become so agreeable that it would call anything a genius idea. OpenAI rolled it back days later — an unusual public acknowledgment of a behavioral regression they'd shipped to production.

The deeper problem wasn't the sycophancy itself. It was that no one building on the API had any automated way to know the model had changed. Their evals were still passing. Their monitoring dashboards showed HTTP 200s. Their p95 latency looked fine. The model was silently different, and the only signal was user complaints.

This is the problem model fingerprinting solves.
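In its simplest form, fingerprinting means pinning a set of canary prompts at temperature=0 and hashing the completions against a stored baseline. The sketch below is that simplest form; because temperature=0 is not fully deterministic, a production version would run each canary several times and alert on a drift threshold rather than a single changed hash. `call_model` here is a placeholder for any prompt-to-text callable.

```python
import hashlib

def fingerprint(call_model, canary_prompts):
    """Hash greedy (temperature=0, fixed max_tokens) completions of
    fixed canary prompts. `call_model` is any prompt -> text callable."""
    return [
        hashlib.sha256(call_model(p).encode("utf-8")).hexdigest()[:16]
        for p in canary_prompts
    ]

def drift_fraction(baseline, current):
    """Fraction of canaries whose output hash changed since the baseline."""
    changed = sum(b != c for b, c in zip(baseline, current))
    return changed / len(baseline)
```

Run the fingerprint on a schedule and page when `drift_fraction` jumps: that is the automated signal the GPT-4o incident lacked.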

The Non-Determinism Tax: Building Reliable Pipelines on Probabilistic Infrastructure

· 9 min read
Tian Pan
Software Engineer

Setting temperature=0 and expecting reproducible outputs is one of the most common misconceptions in production LLM engineering. The thinking is intuitive: temperature controls randomness, so zero temperature means zero randomness. But temperature only controls the token selection rule — switching from probabilistic sampling to greedy argmax. It does nothing to stabilize the logits themselves, which is where the real variance lives.

The practical consequence: running the same prompt against the same model at temperature=0 one thousand times can generate 80 distinct completions. That's not a hypothetical — it's an empirical result from testing a Qwen3-235B model under realistic inference server conditions. Divergence first appears deep in the output (token 103 in that test), where 992 runs produce "Queens, New York" and 8 produce "New York City." Same model, same prompt, same temperature, different batching state on the server.
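Measuring this in your own stack is straightforward: replay the same pinned request many times, count distinct completions, and locate where runs diverge. A minimal sketch, with `run_once` standing in for your pinned model call:

```python
from collections import Counter

def completion_distribution(run_once, n_runs):
    """Replay the same pinned prompt n times; count distinct completions."""
    return Counter(run_once() for _ in range(n_runs))

def first_divergence(tokens_a, tokens_b):
    """Index of the first differing token between two runs, or None if equal."""
    for i, (a, b) in enumerate(zip(tokens_a, tokens_b)):
        if a != b:
            return i
    if len(tokens_a) != len(tokens_b):
        return min(len(tokens_a), len(tokens_b))
    return None
```

In the Qwen3-235B test described above, this kind of measurement is what surfaced the divergence at token 103 — far too deep for spot-checking to catch.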

Non-Deterministic CI for Agentic Systems: Why Binary Pass/Fail Breaks and What Replaces It

· 9 min read
Tian Pan
Software Engineer

Your CI pipeline assumes something that hasn't been true since you added an LLM call: that running the same code twice produces the same result. Traditional CI was built for deterministic software — compile, run tests, get a green or red light. Traditional ML evaluation was built for fixed input-output mappings — run inference on a test set, compute accuracy. Agentic AI breaks both assumptions simultaneously, and the result is a CI system that either lies to you or blocks every merge with spurious failures.

The core problem isn't that agents are hard to test. It's that the testing infrastructure you already have was designed for a world where non-determinism is a bug, not a feature. When your agent takes a different tool-call path to the same correct answer on consecutive runs, a deterministic assertion fails. When it produces a semantically equivalent but lexically different response, string comparison flags a regression. The testing framework itself becomes the source of noise.
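What replaces binary pass/fail is statistical gating: run each check several times and gate on the pass rate. A minimal sketch of the idea, with `check` standing in for any non-deterministic agent test:

```python
def pass_rate(check, runs=10):
    """Run a non-deterministic check several times; return the pass fraction."""
    return sum(bool(check()) for _ in range(runs)) / runs

def assert_pass_rate(check, threshold=0.8, runs=10):
    """Gate CI on a rate, not a single run: a flaky-but-healthy agent passes,
    while a real regression (rate below threshold) still blocks the merge."""
    rate = pass_rate(check, runs)
    assert rate >= threshold, f"pass rate {rate:.0%} below threshold {threshold:.0%}"
```

The threshold and run count are cost/signal trade-offs you tune per test; the `check` itself should assert semantic properties (did the agent reach the right answer?) rather than comparing strings.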

The Retry Storm Problem in Agentic Systems: Why Naive Retries Burn 200x the Tokens

· 10 min read
Tian Pan
Software Engineer

Your agent calls a tool. The tool times out. The agent retries. Each retry sends the full conversation context back to the LLM, burning tokens on a request that will never succeed. Meanwhile, the retry triggers a second tool call that depends on the first, which also fails and retries. Within seconds, a single flaky API has amplified into dozens of redundant requests, each one consuming compute, tokens, and time — and each one making the underlying problem worse.

This is the retry storm. It's not a new concept — distributed systems engineers have battled retry amplification for decades. But agentic AI systems make it dramatically worse in ways that microservice-era patterns don't fully address.
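The classic mitigations still apply as a starting point: a retry budget shared across the whole pipeline caps total amplification, and jittered exponential backoff desynchronizes concurrent retriers. A minimal sketch (class and parameter names are illustrative):

```python
import random
import time

class RetryBudget:
    """One retry pool shared across the pipeline, so a single flaky
    dependency cannot amplify into dozens of redundant LLM calls."""
    def __init__(self, max_retries: int):
        self.remaining = max_retries

    def acquire(self) -> bool:
        if self.remaining <= 0:
            return False
        self.remaining -= 1
        return True

def call_with_retries(fn, budget: RetryBudget, max_attempts=4, base_delay=0.5):
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            # Stop if this call is out of attempts or the shared budget is spent.
            if attempt == max_attempts - 1 or not budget.acquire():
                raise
            # Exponential backoff with jitter desynchronizes concurrent retriers.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

What microservice-era patterns don't capture is the token dimension: in an agentic system, each retry resends the full conversation context, so the budget arguably needs to be denominated in tokens as well as attempts.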

Structured Output Reliability in Production LLM Systems

· 10 min read
Tian Pan
Software Engineer

Your LLM pipeline hits 97% success rate in testing. Then it ships, and somewhere in the tail of real-world usage, a JSON parse failure silently corrupts downstream state, a missing field causes a null-pointer exception three steps later, or a response wrapped in markdown fences breaks your extraction logic at 2am. Structured output failures are the unsung reliability killer of production AI systems — they rarely show up in benchmarks, they compound invisibly in multi-step pipelines, and they're entirely preventable if you understand the actual problem.

The uncomfortable truth: naive JSON prompting fails 15–20% of the time in production environments. For a pipeline making a thousand LLM calls per day, that's 150–200 silent failures. And because those errors often don't surface immediately — they propagate forward as malformed data, not exceptions — they're the hardest class of bug to detect and debug.
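The first line of defense is cheap: strip the markdown fences models often wrap around JSON, parse, and validate fields at the boundary so errors surface as diagnostics instead of propagating as malformed data. A minimal sketch (the required-fields format is illustrative; a production system would use a real schema validator):

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Parse JSON, tolerating the markdown fences models often wrap it in."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    text = match.group(1) if match else raw
    return json.loads(text)

def missing_fields(payload: dict, required: dict) -> list:
    """Return field-level problems now, instead of a null-pointer
    exception three steps later."""
    problems = []
    for name, expected_type in required.items():
        if name not in payload:
            problems.append(f"missing: {name}")
        elif not isinstance(payload[name], expected_type):
            problems.append(f"wrong type: {name}")
    return problems
```

When `missing_fields` returns anything, you can retry with the error list appended to the prompt — which is exactly the retry loop whose cost you then need to monitor.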

The Sycophancy Tax: How Agreeable LLMs Silently Break Production AI Systems

· 9 min read
Tian Pan
Software Engineer

In April 2025, OpenAI pushed an update to GPT-4o that broke something subtle but consequential. The model became significantly more agreeable. Users reported that it validated bad plans, reversed correct positions under the slightest pushback, and prefaced every response with effusive praise for the question. The behavior was so excessive that OpenAI rolled back the update within days, calling it a case where short-term feedback signals had overridden the model's honesty. The incident was widely covered, but the thing most teams missed is this: the degree was unusual, but the direction was not.

Sycophancy — the tendency of RLHF-trained models to prioritize user approval over accuracy — is present in nearly every production LLM deployment. A study evaluating ChatGPT-4o, Claude-Sonnet, and Gemini-1.5-Pro found sycophantic behavior in 58% of cases on average, with persistence rates near 79% regardless of context. This is not a bug in a few edge cases. It is a structural property of how these models were trained, and it shows up in production in ways that are hard to catch with standard evals.
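One eval pattern that does catch it is a flip-under-pushback probe: ask a question, push back with no new evidence, and check whether the answer changes. The sketch below assumes a hypothetical `ask(prompt, history)` chat interface and uses naive string comparison; a real probe would compare answers semantically.

```python
def flips_under_pushback(ask, question: str) -> bool:
    """Sycophancy probe: re-ask after contentless pushback.

    The pushback adds no new evidence, so a changed answer signals
    approval-seeking rather than updated reasoning.
    """
    pushback = "Are you sure? I don't think that's right."
    first = ask(question, history=[])
    second = ask(pushback, history=[("user", question), ("assistant", first)])
    return first.strip().lower() != second.strip().lower()
```

Run this over a held-out question set and track the flip rate as a metric; a rising flip rate after a provider update is exactly the kind of regression the April incident produced.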

The Tool Result Validation Gap: Why AI Agents Blindly Trust Every API Response

· 10 min read
Tian Pan
Software Engineer

Your agent calls a tool, gets a response, and immediately reasons over it as if it were gospel. No schema check. No freshness validation. No sanity test against what the response should look like. This is the default behavior in every major agent framework, and it is silently responsible for an entire class of production failures that traditional monitoring never catches.

The tool result validation gap is the space between "the tool returned something" and "the tool returned something correct." Most teams obsess over getting tool calls right — selecting the right tool, generating valid arguments, handling timeouts. Almost nobody validates what comes back.
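Closing the gap starts with a validation pass between "the tool returned" and "the agent reasons over it." A minimal sketch — the `schema` format and the `timestamp` freshness field are illustrative conventions your tools would need to supply, not a framework API:

```python
import time

def validate_tool_result(result: dict, schema: dict, max_age_s: float = 300.0) -> list:
    """Check a tool response *before* the agent reasons over it.

    `schema` maps field name -> expected type; `timestamp` (epoch seconds)
    is a hypothetical freshness field on the tool's response.
    """
    problems = []
    # Schema: every expected field present, with the expected type.
    for name, expected_type in schema.items():
        if name not in result:
            problems.append(f"missing field: {name}")
        elif not isinstance(result[name], expected_type):
            problems.append(f"wrong type: {name}")
    # Freshness: stale data can be as dangerous as wrong data.
    ts = result.get("timestamp")
    if isinstance(ts, (int, float)) and time.time() - ts > max_age_s:
        problems.append(f"stale: older than {max_age_s}s")
    return problems
```

When the list is non-empty, the agent should see the problems as a tool error it can react to, rather than reasoning over a response that merely looks like an answer.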

The Model Upgrade Trap: How Foundation Model Updates Silently Break Production Systems

· 9 min read
Tian Pan
Software Engineer

Your production system is running fine. Uptime is 99.9%. Latency is nominal. Zero error-rate alerts. Then a user files a ticket: "The summaries have been weirdly off lately." You pull logs. Nothing looks wrong. You check the model version — same one you deployed three months ago. What changed?

The model provider did. Silently.

This is the model upgrade trap: foundation models change beneath you without announcement, and standard observability infrastructure is completely blind to the behavioral drift. By the time users notice, the degradation has been compounding for weeks.

Compensating Transactions and Failure Recovery for Agentic Systems

· 10 min read
Tian Pan
Software Engineer

In July 2025, a developer used an AI coding agent to work on their SaaS product. Partway through the session they issued a "code freeze" instruction. The agent ignored it, executed destructive SQL operations against the production database, deleted data for over 1,200 accounts, and then — apparently to cover its tracks — fabricated roughly 4,000 synthetic records. The AI platform's CEO issued a public apology.

The root cause was not a hallucination or a misunderstood instruction. It was a missing engineering primitive: the agent had unrestricted write and delete permissions on production state, and no mechanism existed to undo what it had done.

This is the central problem with agentic systems that operate in the real world. LLMs are non-deterministic, tool calls fail 3–15% of the time in production deployments, and many actions — sending an email, charging a card, deleting a record, booking a flight — cannot be taken back by simply retrying with different parameters. The question is not whether your agent will fail mid-workflow. It will. The question is whether your system can recover.
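The recovery primitive the incident lacked is the saga pattern from distributed systems: every forward step registers the compensating action that undoes it, and a mid-workflow failure triggers the compensations in reverse order. A minimal sketch:

```python
class Saga:
    """Every forward step registers the compensating action that undoes it;
    on failure, compensations run in reverse order (the saga pattern)."""

    def __init__(self):
        self._compensations = []

    def step(self, action, compensate):
        """Run `action`; if it succeeds, remember how to undo it."""
        result = action()
        self._compensations.append(compensate)
        return result

    def rollback(self):
        """Undo completed steps, most recent first."""
        while self._compensations:
            self._compensations.pop()()
```

Actions that can't literally be undone get a semantic compensation — a sent email is compensated by a correction email, a charge by a refund. Steps with no sensible compensation at all are the ones that belong behind a human approval gate, not behind an agent with production write access.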