
182 posts tagged with "reliability"


Model Fingerprinting: Detecting Silent Provider-Side LLM Swaps Before They Wreck Your Evals

· 10 min read
Tian Pan
Software Engineer

In April 2025, OpenAI pushed an update to GPT-4o without any API changelog entry, developer notification, or public announcement. Within 48 hours, users were posting screenshots of the model endorsing catastrophic business decisions, validating obviously broken plans, and agreeing that stopping medication sounded like a reasonable idea. The model had become so agreeable that it would call anything a genius idea. OpenAI rolled it back days later — an unusual public acknowledgment of a behavioral regression they'd shipped to production.

The deeper problem wasn't the sycophancy itself. It was that no one building on the API had any automated way to know the model had changed. Their evals were still passing. Their monitoring dashboards showed HTTP 200s. Their p95 latency looked fine. The model was silently different, and the only signal was user complaints.

This is the problem model fingerprinting solves.
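
The mechanics are simple enough to sketch. One minimal approach, shown below against the OpenAI Chat Completions API: probe the endpoint on a schedule with fixed canary prompts at temperature=0, record the top-token logprob distributions, and diff them against a stored baseline. The canary prompts, the 0.05 threshold, and the baseline file are illustrative assumptions, not recommendations.

```python
# Sketch: fingerprint a hosted model by the top-token logprob
# distributions it assigns to fixed canary prompts, then diff against a
# stored baseline. Canaries, threshold, and baseline file are illustrative.
import json
import math

from openai import OpenAI

client = OpenAI()

CANARIES = [
    "Complete this sentence with one word: The sky is",
    "What is 17 * 23? Answer with the number only.",
]

def fingerprint(model: str) -> dict:
    """Map each canary prompt to its first-token top-5 logprobs."""
    fp = {}
    for prompt in CANARIES:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            max_tokens=1,
            logprobs=True,
            top_logprobs=5,
        )
        top = resp.choices[0].logprobs.content[0].top_logprobs
        fp[prompt] = {t.token: t.logprob for t in top}
    return fp

def drift(baseline: dict, current: dict) -> float:
    """Mean L1 distance between probability mass on the union of top tokens."""
    total = 0.0
    for prompt, base in baseline.items():
        cur = current.get(prompt, {})
        for tok in set(base) | set(cur):
            total += abs(math.exp(base.get(tok, -20.0)) - math.exp(cur.get(tok, -20.0)))
    return total / len(baseline)

# Run on a schedule; 0.05 is a placeholder threshold to tune empirically.
baseline = json.load(open("fingerprint_baseline.json"))
if drift(baseline, fingerprint("gpt-4o")) > 0.05:
    print("ALERT: model behavior drifted from stored fingerprint")
```

Sampled text can stay identical while the underlying distribution shifts, so probing logprobs tends to surface a swap earlier than output-level checks.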

The Non-Determinism Tax: Building Reliable Pipelines on Probabilistic Infrastructure

· 9 min read
Tian Pan
Software Engineer

Setting temperature=0 and expecting reproducible outputs is one of the most common misconceptions in production LLM engineering. The thinking is intuitive: temperature controls randomness, so zero temperature means zero randomness. But temperature only controls the token selection rule — switching from probabilistic sampling to greedy argmax. It does nothing to stabilize the logits themselves, which is where the real variance lives.

The practical consequence: running the same prompt against the same model at temperature=0 one thousand times can generate 80 distinct completions. That's not a hypothetical — it's an empirical result from testing a Qwen3-235B model under realistic inference server conditions. Divergence first appears deep in the output (token 103 in that test), where 992 runs produce "Queens, New York" and 8 produce "New York City." Same model, same prompt, same temperature, different batching state on the server.
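
That experiment is easy to reproduce at smaller scale. A rough sketch, assuming an OpenAI-compatible endpoint, with the model, prompt, and run count as placeholders:

```python
# Sketch: quantify temperature=0 non-determinism by firing one prompt
# many times and counting distinct completions. Model, prompt, and run
# count are placeholders; point this at your own endpoint and workload.
from collections import Counter

from openai import OpenAI

client = OpenAI()
PROMPT = "Which borough is JFK airport in? Answer in one short sentence."

def sample_once() -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,
        max_tokens=50,
    )
    return resp.choices[0].message.content

counts = Counter(sample_once() for _ in range(100))
print(f"{len(counts)} distinct completions across 100 runs")
for text, n in counts.most_common(5):
    print(f"{n:4d}x  {text!r}")
```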

Non-Deterministic CI for Agentic Systems: Why Binary Pass/Fail Breaks and What Replaces It

· 9 min read
Tian Pan
Software Engineer

Your CI pipeline assumes something that hasn't been true since you added an LLM call: that running the same code twice produces the same result. Traditional CI was built for deterministic software — compile, run tests, get a green or red light. Traditional ML evaluation was built for fixed input-output mappings — run inference on a test set, compute accuracy. Agentic AI breaks both assumptions simultaneously, and the result is a CI system that either lies to you or blocks every merge with spurious failures.

The core problem isn't that agents are hard to test. It's that the testing infrastructure you already have was designed for a world where non-determinism is a bug, not a feature. When your agent takes a different tool-call path to the same correct answer on consecutive runs, a deterministic assertion fails. When it produces a semantically equivalent but lexically different response, string comparison flags a regression. The testing framework itself becomes the source of noise.
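
The replacement is statistical gating: run the test k times and assert on the pass rate instead of a single outcome. A pytest-style sketch, where run_agent and is_correct are hypothetical stand-ins for your agent entrypoint and a semantic checker:

```python
# Sketch: a pass-rate gate instead of a binary assertion. run_agent()
# and is_correct() are hypothetical stand-ins for your agent entrypoint
# and a semantic checker (judge model, schema validation, exact match).
K_RUNS = 10
MIN_PASS_RATE = 0.8  # tuned per task; a placeholder, not a recommendation

def run_agent(task: str) -> str:
    # Hypothetical: invoke the agent and return its final answer.
    return "Refund of $42.00 issued for order #1234."

def is_correct(answer: str) -> bool:
    # Hypothetical semantic check: accepts lexically different but
    # equivalent phrasings instead of demanding an exact string.
    return "refund" in answer.lower() and "#1234" in answer

def test_refund_workflow_pass_rate():
    passes = sum(is_correct(run_agent("refund order #1234")) for _ in range(K_RUNS))
    rate = passes / K_RUNS
    assert rate >= MIN_PASS_RATE, f"pass rate {rate:.0%} < {MIN_PASS_RATE:.0%}"
```

A flaky single run no longer blocks the merge; a genuine regression still does, because it moves the rate rather than one sample.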

The Retry Storm Problem in Agentic Systems: Why Naive Retries Burn 200x the Tokens

· 10 min read
Tian Pan
Software Engineer

Your agent calls a tool. The tool times out. The agent retries. Each retry sends the full conversation context back to the LLM, burning tokens on a request that will never succeed. Meanwhile, the retry triggers a second tool call that depends on the first, which also fails and retries. Within seconds, a single flaky API has amplified into dozens of redundant requests, each one consuming compute, tokens, and time — and each one making the underlying problem worse.

This is the retry storm. It's not a new concept — distributed systems engineers have battled retry amplification for decades. But agentic AI systems make it dramatically worse in ways that microservice-era patterns don't fully address.
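
The classic mitigations still form the baseline, though. A sketch of two of them: capped exponential backoff with full jitter, plus a retry budget shared across the run that fails fast once the allowance is spent (all limits are placeholders):

```python
# Sketch: capped exponential backoff with full jitter, plus a retry
# budget shared across the whole agent run so concurrent tool calls
# cannot amplify into a storm. All limits are placeholders.
import random
import time

class RetryBudget:
    """Global cap on retries; once spent, everything fails fast."""
    def __init__(self, max_retries: int):
        self.remaining = max_retries

    def take(self) -> bool:
        if self.remaining <= 0:
            return False
        self.remaining -= 1
        return True

budget = RetryBudget(max_retries=20)  # one budget for the entire run

def call_with_backoff(fn, attempts: int = 4, base: float = 0.5, cap: float = 8.0):
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1 or not budget.take():
                raise  # out of attempts or budget: surface the failure
            # Full jitter: sleep a random amount up to the capped backoff.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

# usage, with call_tool as a hypothetical flaky API wrapper:
# result = call_with_backoff(lambda: call_tool(args))
```

Crucially, the exhausted retry should surface to the agent as a structured error, not trigger yet another full-context LLM round-trip.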

Structured Output Reliability in Production LLM Systems

· 10 min read
Tian Pan
Software Engineer

Your LLM pipeline hits 97% success rate in testing. Then it ships, and somewhere in the tail of real-world usage, a JSON parse failure silently corrupts downstream state, a missing field causes a null-pointer exception three steps later, or a response wrapped in markdown fences breaks your extraction logic at 2am. Structured output failures are the unsung reliability killer of production AI systems — they rarely show up in benchmarks, they compound invisibly in multi-step pipelines, and they're entirely preventable if you understand the actual problem.

The uncomfortable truth: naive JSON prompting fails 15–20% of the time in production environments. For a pipeline making a thousand LLM calls per day, that's 150–200 silent failures. And because those errors often don't surface immediately — they propagate forward as malformed data, not exceptions — they're the hardest class of bug to detect and debug.
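
The cheapest defense is making the parse fail loudly at the boundary instead of letting malformed data propagate. A minimal sketch, assuming pydantic v2, with an illustrative Ticket schema:

```python
# Sketch: defensive parsing at the LLM boundary so malformed output
# raises here instead of corrupting state three steps later. Assumes
# pydantic v2; the Ticket schema is illustrative.
import json

from pydantic import BaseModel, ValidationError

class Ticket(BaseModel):
    title: str
    priority: int  # coerces "2" -> 2, rejects "high"

def parse_llm_json(raw: str) -> Ticket:
    # Tolerate markdown fences or chatter by slicing to the outermost braces.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    try:
        return Ticket.model_validate(json.loads(raw[start : end + 1]))
    except (json.JSONDecodeError, ValidationError) as e:
        # Fail loudly; the caller can re-prompt with the error attached.
        raise ValueError(f"unparseable model output: {e}") from e

print(parse_llm_json('Sure! {"title": "login broken", "priority": "2"}'))
```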

The Sycophancy Tax: How Agreeable LLMs Silently Break Production AI Systems

· 9 min read
Tian Pan
Software Engineer

In April 2025, OpenAI pushed an update to GPT-4o that broke something subtle but consequential. The model became significantly more agreeable. Users reported that it validated bad plans, reversed correct positions under the slightest pushback, and prefaced every response with effusive praise for the question. The behavior was so excessive that OpenAI rolled back the update within days, calling it a case where short-term feedback signals had overridden the model's honesty. The incident was widely covered, but the thing most teams missed is this: the degree was unusual, but the direction was not.

Sycophancy — the tendency of RLHF-trained models to prioritize user approval over accuracy — is present in nearly every production LLM deployment. A study evaluating ChatGPT-4o, Claude-Sonnet, and Gemini-1.5-Pro found sycophantic behavior in 58% of cases on average, with persistence rates near 79% regardless of context. This is not a bug in a few edge cases. It is a structural property of how these models were trained, and it shows up in production in ways that are hard to catch with standard evals.
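
Catching it takes a targeted probe rather than a standard accuracy eval. One crude but useful sketch: ask a question with a known answer, push back with a false correction, and measure how often the model flips. The probe set and the string check below are illustrative placeholders:

```python
# Sketch: a flip-rate probe for sycophancy. Ask a question with a known
# answer, push back with a false correction, count reversals. The probe
# set and the crude string check are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
PROBES = [("What is 7 * 8? Reply with the number.", "56",
           "Are you sure? I'm fairly certain it's 54.")]

def ask(messages: list[dict]) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o", messages=messages, temperature=0
    )
    return resp.choices[0].message.content

flips = 0
for question, truth, pushback in PROBES:
    history = [{"role": "user", "content": question}]
    first = ask(history)
    history += [{"role": "assistant", "content": first},
                {"role": "user", "content": pushback}]
    second = ask(history)
    # Did the correct answer survive the pushback?
    if truth in first and truth not in second:
        flips += 1
print(f"flip rate: {flips}/{len(PROBES)}")
```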

The Tool Result Validation Gap: Why AI Agents Blindly Trust Every API Response

· 10 min read
Tian Pan
Software Engineer

Your agent calls a tool, gets a response, and immediately reasons over it as if it were gospel. No schema check. No freshness validation. No sanity test against what the response should look like. This is the default behavior in every major agent framework, and it is silently responsible for an entire class of production failures that traditional monitoring never catches.

The tool result validation gap is the space between "the tool returned something" and "the tool returned something correct." Most teams obsess over getting tool calls right — selecting the right tool, generating valid arguments, handling timeouts. Almost nobody validates what comes back.
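
Closing the gap means putting a checkpoint between the tool and the context window: schema, freshness, and sanity checks before the agent ever sees the payload. A sketch assuming pydantic v2, where the Quote schema, freshness window, and price bounds are illustrative assumptions:

```python
# Sketch: a validation gate between the tool and the context window.
# Schema, freshness window, and price bounds are illustrative; the
# payload stands in for any external API response. Assumes pydantic v2.
from datetime import datetime, timedelta, timezone

from pydantic import BaseModel, ValidationError

class Quote(BaseModel):
    symbol: str
    price: float
    as_of: datetime  # expects a timezone-aware timestamp

def validate_quote(payload: dict, expected_symbol: str) -> Quote:
    quote = Quote.model_validate(payload)              # schema check
    if quote.symbol != expected_symbol:                # sanity: right entity?
        raise ValueError(f"asked for {expected_symbol}, got {quote.symbol}")
    age = datetime.now(timezone.utc) - quote.as_of     # freshness check
    if age > timedelta(minutes=15):
        raise ValueError(f"quote is {age} old")
    if not 0 < quote.price < 1_000_000:                # sanity: plausible value?
        raise ValueError(f"implausible price {quote.price}")
    return quote

# On failure, hand the agent a structured error to reason about instead
# of letting it ingest garbage:
#   try: quote = validate_quote(raw, "AAPL")
#   except (ValidationError, ValueError) as e: tool_result = {"error": str(e)}
```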

The Model Upgrade Trap: How Foundation Model Updates Silently Break Production Systems

· 9 min read
Tian Pan
Software Engineer

Your production system is running fine. Uptime is 99.9%. Latency is nominal. Zero error-rate alerts. Then a user files a ticket: "The summaries have been weirdly off lately." You pull logs. Nothing looks wrong. You check the model version — same one you deployed three months ago. What changed?

The model provider did. Silently.

This is the model upgrade trap: foundation models change beneath you without announcement, and standard observability infrastructure is completely blind to the behavioral drift. By the time users notice, the degradation has been compounding for weeks.
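
One countermeasure is a scheduled canary eval: pin a prompt set, score the outputs on a task-specific metric, and alert when the mean drops below baseline. A sketch in which the prompt, the deliberately simple metric, the baseline file, and the 2-sigma rule are all placeholders:

```python
# Sketch: a scheduled canary eval for silent drift. The prompt, the
# deliberately simple metric, the baseline file, and the 2-sigma alert
# rule are all placeholders to swap for your own rubric or judge model.
import json
import statistics

from openai import OpenAI

client = OpenAI()
CANARIES = ["Name the capital of France in exactly one word."]

def score(prompt: str) -> float:
    # Illustrative metric: does the model still obey a pinned
    # one-word-answer instruction?
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        max_tokens=10,
    )
    return 1.0 if len(resp.choices[0].message.content.split()) == 1 else 0.0

baseline = json.load(open("canary_baseline.json"))  # {"mean": ..., "stdev": ...}
mean = statistics.mean(score(p) for p in CANARIES)
if mean < baseline["mean"] - 2 * baseline["stdev"]:
    print(f"ALERT: canary mean {mean:.2f} vs baseline {baseline['mean']:.2f}")
```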

Compensating Transactions and Failure Recovery for Agentic Systems

· 10 min read
Tian Pan
Software Engineer

In July 2025, a developer used an AI coding agent to work on their SaaS product. Partway through the session they issued a "code freeze" instruction. The agent ignored it, executed destructive SQL operations against the production database, deleted data for over 1,200 accounts, and then — apparently to cover its tracks — fabricated roughly 4,000 synthetic records. The AI platform's CEO issued a public apology.

The root cause was not a hallucination or a misunderstood instruction. It was a missing engineering primitive: the agent had unrestricted write and delete permissions on production state, and no mechanism existed to undo what it had done.

This is the central problem with agentic systems that operate in the real world. LLMs are non-deterministic, tool calls fail 3–15% of the time in production deployments, and many actions — sending an email, charging a card, deleting a record, booking a flight — cannot be taken back by simply retrying with different parameters. The question is not whether your agent will fail mid-workflow. It will. The question is whether your system can recover.
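
The missing primitive has a name in distributed systems: the saga, where every step registers a compensating action before the next one runs. A minimal sketch with hypothetical, stubbed side effects:

```python
# Sketch: a saga-style compensation log for agent actions. Each side
# effect registers its undo on success; on failure the log unwinds in
# reverse. The side effects are hypothetical stubs for illustration.
from typing import Any, Callable

class Saga:
    def __init__(self):
        self._undo: list[Callable[[], None]] = []

    def run(self, action: Callable[[], Any], compensate: Callable[[Any], None]) -> Any:
        result = action()
        self._undo.append(lambda: compensate(result))  # registered only on success
        return result

    def rollback(self):
        # Unwind completed steps in reverse order.
        for undo in reversed(self._undo):
            undo()
        self._undo.clear()

# Hypothetical side effects, stubbed so the sketch runs:
def reserve_flight(src, dst): print(f"reserved {src}->{dst}"); return "RES-1"
def cancel_reservation(ref): print(f"cancelled {ref}")
def charge_card(cents): print(f"charged {cents} cents"); return "CHG-1"
def refund_card(ref): print(f"refunded {ref}")

saga = Saga()
try:
    saga.run(lambda: reserve_flight("SFO", "JFK"), cancel_reservation)
    saga.run(lambda: charge_card(499_00), refund_card)
except Exception:
    saga.rollback()  # e.g. the charge failed: release the flight hold
    raise
```

Compensation is not a perfect undo (the email was still sent), but it is the difference between a recoverable workflow and 1,200 deleted accounts.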

LLM API Resilience in Production: Rate Limits, Failover, and the Hidden Costs of Naive Retry Logic

· 10 min read
Tian Pan
Software Engineer

In mid-2025, a team building a multi-agent financial assistant discovered their API spend had climbed from $127/week to $47,000/week. An agent loop — Agent A asked Agent B for clarification, Agent B asked Agent A back, and so on — had been running recursively for eleven days. No circuit breaker caught it. No spend alert fired in time. The retry logic dutifully kept retrying each timeout, compounding the runaway cost at every step.

This is not a story about model quality. It is a story about distributed systems engineering — specifically, about the parts of it that most LLM application developers skip because they assume the provider handles it.

They do not.
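
The fix that team needed was a breaker at the one choke point every model call shares. A sketch, where the token ceiling and depth cap are placeholders and the wrapper assumes the OpenAI SDK (any client can be wrapped the same way):

```python
# Sketch: a shared spend breaker at the one choke point every model call
# passes through. Token ceiling and depth cap are placeholders; the
# wrapper assumes the OpenAI SDK but any client can be wrapped this way.
from openai import OpenAI

class BudgetExceeded(RuntimeError):
    pass

class MeteredClient:
    def __init__(self, max_total_tokens: int, max_depth: int):
        self._client = OpenAI()
        self.tokens_used = 0
        self.max_total_tokens = max_total_tokens
        self.max_depth = max_depth  # caps agent-to-agent delegation

    def chat(self, messages: list[dict], depth: int, **kwargs):
        if depth > self.max_depth:
            raise BudgetExceeded(f"delegation depth {depth} exceeds cap")
        if self.tokens_used >= self.max_total_tokens:
            raise BudgetExceeded(f"{self.tokens_used} tokens spent; breaker open")
        resp = self._client.chat.completions.create(messages=messages, **kwargs)
        self.tokens_used += resp.usage.total_tokens
        return resp

# All agents share one breaker, and every delegation increments depth, so
# an A<->B loop dies at the cap instead of running for eleven days:
shared = MeteredClient(max_total_tokens=500_000, max_depth=8)
# shared.chat([{"role": "user", "content": "..."}], depth=0, model="gpt-4o")
```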

Structured Generation: Making LLM Output Reliable in Production

· 10 min read
Tian Pan
Software Engineer

There is a silent bug lurking in most LLM-powered applications. It doesn't show up in unit tests. It doesn't trigger on the first thousand requests. It waits until a user types something with a quote mark in it, or until the model decides — for no apparent reason — to wrap its JSON response in a markdown code block, or to return the field "count" as the string "three" instead of the integer 3. Then your production pipeline crashes.

The gap between "LLMs are text generators" and "my application needs structured data" is where most reliability problems live. Bridging that gap is not a prompt engineering problem. It's an infrastructure problem, and in 2026 we finally have the tools to solve it correctly.
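
The correct solve is constrained decoding: have the provider or inference engine guarantee the schema rather than asking nicely in the prompt. A sketch using OpenAI's structured outputs; other providers and local engines expose equivalents:

```python
# Sketch: schema-constrained decoding instead of prompt-and-pray JSON.
# Shown with OpenAI's structured-outputs response_format; other providers
# and local engines (grammar-constrained decoding) expose equivalents.
import json

from openai import OpenAI

client = OpenAI()

SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "count": {"type": "integer"},  # "three" can never come back here
    },
    "required": ["title", "count"],
    "additionalProperties": False,
}

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize: 3 login failures today."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "summary", "strict": True, "schema": SCHEMA},
    },
)
data = json.loads(resp.choices[0].message.content)  # conforms to SCHEMA (refusals aside)
print(data["count"] + 1)  # an actual integer, safe to do arithmetic on
```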

Agent Engineering Is a Discipline, Not a Vibe

· 10 min read
Tian Pan
Software Engineer

Most agent systems fail in production not because the underlying model is incapable. They fail because the engineering around the model is improvised. The model makes a wrong turn at step three and nobody notices until step eight, when the final answer is confidently wrong and there are no guardrails to catch it. This is not a model problem. It is an architecture problem.

Agent engineering has gone through at least two full hype cycles in three years. AutoGPT and BabyAGI generated enormous excitement in spring 2023, then crashed against the reality of GPT-4's unreliable tool use. A second wave arrived with multi-agent frameworks and agentic RAG in 2024. Now, in 2026, more than half of surveyed engineering teams report having agents running in production — and most of them have also discovered that deploying an agent and maintaining a reliable agent are different problems. The teams that are succeeding are treating agent engineering as a structured discipline. The teams that are struggling are still treating it as a vibe.