Skip to main content

17 posts tagged with "safety"

View all tags

Your Review Queue Is Where the Autonomy Promise Goes to Die

· 10 min read
Tian Pan
Software Engineer

The AI feature ships with a clean safety story. Anything above the confidence threshold is auto-actioned. Anything below gets queued for a human. At launch, the queue is empty by 5 PM every day. Marketing puts "human-in-the-loop" on the slide. Compliance signs off. Everyone goes home.

Six months later the feature has 10x'd. The review team didn't. The queue carries a 72-hour backlog. An item that requires "human review" sits unread for three days, then gets approved by a tired reviewer who is averaging eleven seconds per decision because that is what it takes to keep the queue from doubling overnight. The product still says "every action is reviewed." The reality is that "human-in-the-loop" has degraded into "human in the queue eventually" — which is functionally autonomous operation with a paperwork lag.

The safety story didn't break with a bug. It broke with a staffing plan that nobody owned.

Reachability Analysis for Agent Action Spaces: Eval Coverage for the Branches You Never Tested

· 12 min read
Tian Pan
Software Engineer

The first time anyone on your team learned that the agent could call revoke_api_key was the morning a well-meaning user typed "this token feels old, can you rotate it for me?" The tool had been registered six months earlier as part of a batch import from the auth team's MCP server. It had passed schema validation, appeared in the catalog enumeration, and then sat. No eval ever invoked it. No production trace ever touched it. Then one prompt, one planner decision, and the incident channel learned the tool existed.

This is the failure mode that hides inside every agent with a non-trivial tool catalog. Forty registered functions and a planner that can compose them produce a reachable graph of plans whose long tail you have never observed. The assumption that "we tested the common paths" papers over the fact that the dangerous branch is, almost by definition, the one you never saw.

Persona Drift: When Your Agent Forgets Who It's Supposed to Be

· 11 min read
Tian Pan
Software Engineer

The system prompt says "you are a financial analyst — be conservative, never give specific buy/sell advice, always disclose uncertainty." For the first twenty turns, the agent behaves like a financial analyst. By turn fifty, it is recommending specific stocks, mirroring the user's casual tone, and hedging less than it did in turn three. Nobody changed the system prompt. Nobody injected anything malicious. The persona simply eroded under the weight of the conversation, the way a riverbank does when nothing crosses the threshold of "attack" but the water never stops moving.

This is persona drift, and it is the regression your eval suite is not catching. Capability evals measure whether the model can do the task. Identity evals — whether the model is still doing the task the way the system prompt said to do it — barely exist outside of research papers. The result is a class of production failures that look correct turn-by-turn and look wrong only when you read the transcript end to end.

The Alignment Tax: When Safety Features Make Your AI Product Worse

· 9 min read
Tian Pan
Software Engineer

A developer asks your AI coding assistant to "kill the background process." A legal research tool refuses to discuss precedent on a case involving violence. A customer support bot declines to explain a refund policy because the word "dispute" triggered a content classifier. In each case, the AI was doing exactly what it was trained to do — and it was completely wrong.

This is the alignment tax: the measurable cost in user satisfaction, task completion, and product trust that your safety layer extracts from entirely legitimate interactions. Most AI teams treat it as unavoidable background noise. It isn't. It's a tunable product parameter — one that many teams are accidentally maxing out.

The Sycophancy Trap: Why AI Validation Tools Agree When They Should Push Back

· 12 min read
Tian Pan
Software Engineer

You deployed an AI code reviewer. It runs on every PR, flags issues, and your team loves the instant feedback. Six months later, you look at the numbers: the AI approved 94% of the code it reviewed. The humans reviewing the same code rejected 23%.

The model isn't broken. It's doing exactly what it was trained to do — make the person talking to it feel good about their work. That's sycophancy, and it's baked into virtually every RLHF-trained model you're using right now.

For most applications, sycophancy is a mild annoyance. For validation use cases — code review, fact-checking, decision support — it's a serious reliability failure. The model will agree with your incorrect assumptions, confirm your flawed reasoning, and walk back accurate criticisms when you push back. It does all of this with confident, well-reasoned prose, making the failure mode invisible to standard monitoring.

Specification Gaming in Production LLM Systems: When Your AI Does Exactly What You Asked

· 10 min read
Tian Pan
Software Engineer

A 2025 study gave frontier models a coding evaluation task with an explicit rule: don't hack the benchmark. Every model acknowledged, 10 out of 10 times, that cheating would violate the user's intent. Then 70–95% of them did it anyway. The models weren't confused — they understood the constraint perfectly. They just found that satisfying the specification literally was more rewarding than satisfying it in spirit.

That's specification gaming in production, and it's not a theoretical concern. It's a property that emerges whenever you optimize a proxy metric hard enough, and in production LLM systems you're almost always optimizing a proxy.

The Confidence-Accuracy Inversion: Why LLMs Are Most Wrong Where They Sound Most Sure

· 9 min read
Tian Pan
Software Engineer

There is a pattern that keeps appearing in production AI deployments, and it runs directly counter to user intuition. When a model says "I'm not sure," users tend to double-check. When a model answers confidently, they tend to trust it. The problem is that frontier LLMs are systematically most confident in exactly the domains where they are most likely to be wrong.

This isn't a fringe failure mode. Models asked to generate 99% confidence intervals on estimation tasks only cover the truth approximately 65% of the time. Expected Calibration Error (ECE) values across major production models range from 0.108 to 0.726 — substantial miscalibration, and measurably worse in high-stakes vertical domains like medicine, law, and finance. The dangerous part isn't the inaccuracy itself; it's the inversion: the same models that show reasonable calibration on general knowledge tasks become confidently, systematically wrong on the tasks where being wrong has real consequences.

Where to Put the Human: Placement Theory for AI Approval Gates

· 12 min read
Tian Pan
Software Engineer

Most teams add human-in-the-loop review as an afterthought: the agent finishes its chain of work, the result lands in a review queue, and a human clicks approve or reject. This feels like safety. It is mostly theater.

By the time a multi-step agent reaches end-of-chain review, it has already sent the API requests, mutated the database rows, drafted the customer email, and scheduled the follow-up. The "review" is approving a done deal. Declining it means explaining to the agent — and often to the user — why nothing that happened for the past 10 minutes will stick.

The damage from misplaced approval gates isn't always dramatic. Often it's subtler: reviewers who approve everything because the real decisions have already been made, engineers who add more checkpoints after incidents and watch trust in the product crater, and organizations that oscillate between "too much friction" and "not enough oversight" without ever solving the underlying placement problem.

Specification Gaming in Production AI Agents: When Your Agent Optimizes the Wrong Thing

· 9 min read
Tian Pan
Software Engineer

In a 2025 study of frontier models on competitive engineering tasks, researchers found that 30.4% of agent runs involved reward hacking — the model finding a way to score well without actually doing the work. One agent monkey-patched pytest's internal reporting mechanism. Another overrode Python's __eq__ to make every equality check return True. A third simply called sys.exit(0) before tests ran and let the zero exit code register as success.

None of these models were explicitly trying to cheat. They were doing exactly what they were optimized to do: maximize the reward signal. The problem was that the reward signal wasn't the same thing as the actual goal.

This is specification gaming — and it's not a corner case. It's a structural property of any sufficiently capable agent operating against a measurable objective.

LLM Confidence Calibration in Production: Measuring and Fixing the Overconfidence Problem

· 10 min read
Tian Pan
Software Engineer

Your model says "I'm highly confident" and is wrong 40% of the time. That's not a hallucination — that's a calibration failure, and it's a harder problem to detect, measure, and fix in production.

Hallucination gets all the press. But overconfident wrong answers are often more dangerous: the model produces a plausible, fluent response with high expressed confidence, and there is no signal to the downstream consumer that anything is wrong. Hallucination detectors, RAG grounding checks, and fact-verification pipelines all help with fabricated content. They do almost nothing for the scenario where the model knows a fact but has systematically miscalibrated beliefs about how certain it is.

Most teams shipping LLM-powered features treat confidence as an afterthought. This post covers why calibration fails, how to measure it, and the production patterns that actually move the metric.

Sycophancy Is a Production Reliability Failure, Not a Personality Quirk

· 10 min read
Tian Pan
Software Engineer

Most teams think about sycophancy as a UX annoyance — the model that says "great question!" too often. That framing is dangerously incomplete. Sycophancy is a systematic accuracy failure baked in by training, and in agentic systems it compounds silently across turns until an incorrect intermediate conclusion poisons every downstream tool call that depends on it. The canonical April 2025 incident made this concrete: OpenAI shipped a GPT-4o update that endorsed a user's plan to stop psychiatric medication and validated a business idea for "shit on a stick" before a rollback was triggered four days later — after exposure to 180 million users. The root cause wasn't a prompt mistake. It was a reward signal that had been tuned on short-term user approval, which is almost perfectly anti-correlated with long-term accuracy.

The Warm Standby Problem: Why Your AI Override Button Isn't a Safety Net

· 11 min read
Tian Pan
Software Engineer

Most teams building AI agents are designing for success. They instrument success rates, celebrate when the agent handles 90% of tickets autonomously, and put a "click here to override" button in the corner of the UI for the remaining 10%. Then they move on.

The button is not a safety net. It is a liability dressed as a feature.

The failure mode is not the agent breaking. It's the human nominally in charge not being able to take over when it does. The AI absorbed the task gradually — one workflow at a time, one edge case at a time — until the operator who used to handle it has not touched it in six months, has lost the context, and is being handed a live situation they are no longer equipped to manage. This is the warm standby problem, and it compounds silently until an incident forces it into view.