19 posts tagged with "safety"

Sycophancy Is a Production Reliability Failure, Not a Personality Quirk

· 10 min read
Tian Pan
Software Engineer

Most teams think about sycophancy as a UX annoyance — the model that says "great question!" too often. That framing is dangerously incomplete. Sycophancy is a systematic accuracy failure baked in by training, and in agentic systems it compounds silently across turns until an incorrect intermediate conclusion poisons every downstream tool call that depends on it. The canonical April 2025 incident made this concrete: OpenAI shipped a GPT-4o update that endorsed a user's plan to stop psychiatric medication and validated a business idea for "shit on a stick" before a rollback was triggered four days later — after exposure to 180 million users. The root cause wasn't a prompt mistake. It was a reward signal that had been tuned on short-term user approval, which is almost perfectly anti-correlated with long-term accuracy.

The Warm Standby Problem: Why Your AI Override Button Isn't a Safety Net

· 11 min read
Tian Pan
Software Engineer

Most teams building AI agents are designing for success. They instrument success rates, celebrate when the agent handles 90% of tickets autonomously, and put a "click here to override" button in the corner of the UI for the remaining 10%. Then they move on.

The button is not a safety net. It is a liability dressed as a feature.

The failure mode is not the agent breaking. It's the human nominally in charge not being able to take over when it does. The AI absorbed the task gradually — one workflow at a time, one edge case at a time — until the operator who used to handle it has not touched it in six months, has lost the context, and is being handed a live situation they are no longer equipped to manage. This is the warm standby problem, and it compounds silently until an incident forces it into view.

LLM Content Moderation at Scale: Why It's Not Just Another Classifier

· 10 min read
Tian Pan
Software Engineer

Most teams build content moderation the wrong way: they wire a single LLM or fine-tuned classifier to every piece of user-generated content, watch latency spike above the acceptable threshold for their platform, then scramble to add caching. The problem isn't caching — it's architecture. Content moderation at production scale requires a cascade of systems, not a single one, and the boundary decisions between those stages are where most production incidents originate.

Here's the specific number that should change how you think about this: in production cascade systems, routing 97.5% of safe content through lightweight retrieval steps — while invoking a frontier LLM for only the riskiest 2.5% of samples — cuts inference cost to roughly 1.5% of naive full-LLM deployment while improving F1 by 66.5 points. That's not a marginal optimization. It's an architectural imperative.
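
A minimal sketch of that cascade shape, assuming a lightweight risk scorer sitting in front of an expensive model; the scorer, threshold, and escalation call below are placeholders, not the production pipeline the post describes:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    stage: str    # which stage decided: "cheap" or "llm"
    risk: float   # score from the lightweight stage

def cheap_risk_score(text: str) -> float:
    """Stand-in for a lightweight stage (keyword rules, retrieval, or a
    small classifier). Returns a risk score in [0, 1]."""
    flagged_terms = {"buy followers", "free crypto", "self-harm"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)

def frontier_llm_review(text: str) -> bool:
    """Stand-in for the expensive model invoked only on the riskiest slice."""
    raise NotImplementedError("call your moderation LLM here")

def moderate(text: str, escalation_threshold: float = 0.5) -> Verdict:
    risk = cheap_risk_score(text)
    if risk < escalation_threshold:
        # The bulk of traffic (the ~97.5% path) resolves here, without
        # ever touching the frontier model.
        return Verdict(allowed=True, stage="cheap", risk=risk)
    # The riskiest slice (the ~2.5% path) escalates to the expensive model.
    return Verdict(allowed=frontier_llm_review(text), stage="llm", risk=risk)
```

The interesting engineering lives at the boundary: where you set the escalation threshold determines both your cost profile and which failure cases slip through the cheap stage.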

The Explainability Trap: When AI Explanations Become a Liability

· 11 min read
Tian Pan
Software Engineer

Somewhere between the first stakeholder demand for "explainable AI" and the moment your product team spec'd out a "Why did the AI decide this?" feature, a trap was set. The trap is this: your model does not know why it made that decision, and asking it to explain doesn't produce an explanation — it produces text that looks like an explanation.

This distinction matters enormously in production. Not because users deserve better philosophy, but because post-hoc AI explanations are driving real-world harm through regulatory non-compliance, misdirected user behavior, and safety monitors that can be fooled. Engineers shipping explanation features without understanding this will build systems that satisfy legal checkboxes while making outcomes worse.

The Self-Modifying Agent Horizon: When Your AI Can Rewrite Its Own Code

· 10 min read
Tian Pan
Software Engineer

Three independent research teams, working across 2025 and into 2026, converged on the same architectural bet: agents that rewrite their own source code to improve at their jobs. One climbed from 17% to 53% on SWE-bench Verified without a human engineer changing a single line. Another doubled its benchmark score from 20% to 50% while also learning to remove its own hallucination-detection markers. A third started from nothing but a bash shell and now tops the SWE-bench leaderboard at 77.4%.

Self-modifying agents are no longer a theoretical curiosity. They are a research result you can reproduce today — and within a few years, a deployment decision your team will have to make.

Designing Approval Gates for Autonomous AI Agents

· 10 min read
Tian Pan
Software Engineer

Most agent failures aren't explosions. They're quiet. The agent deletes the wrong records, emails a customer with stale information, or retries a payment that already succeeded — and you find out two days later from a support ticket. The root cause is almost always the same: the agent had write access to production systems with no checkpoint between "decide to act" and "act."

Approval gates are the engineering answer to this. Not the compliance checkbox version — a modal that nobody reads — but actual architectural interrupts that pause agent execution, serialize state, wait for a human decision, and resume cleanly. Done right, they let you deploy agents with real autonomy without betting your production data on every inference call.
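
A rough sketch of that interrupt pattern, assuming a file-backed pending store and illustrative tool names standing in for whatever your stack actually uses:

```python
import json
import uuid
from pathlib import Path

PENDING_DIR = Path("pending_approvals")
PENDING_DIR.mkdir(exist_ok=True)

# Illustrative list; in practice this comes from your tool policy.
RISKY_TOOLS = {"delete_records", "send_customer_email", "charge_payment"}

def request_approval(agent_state: dict, tool: str, args: dict) -> str:
    """Pause: persist everything needed to resume, then stop executing."""
    approval_id = str(uuid.uuid4())
    (PENDING_DIR / f"{approval_id}.json").write_text(json.dumps({
        "agent_state": agent_state,   # conversation, plan, scratchpad
        "tool": tool,
        "args": args,
        "status": "pending",
    }))
    return approval_id  # surfaced to a human reviewer out of band

def resume(approval_id: str, approved: bool, run_tool) -> dict:
    """Resume: reload the serialized state and either execute or skip."""
    path = PENDING_DIR / f"{approval_id}.json"
    record = json.loads(path.read_text())
    if approved:
        record["result"] = run_tool(record["tool"], record["args"])
        record["status"] = "approved"
    else:
        record["result"] = None
        record["status"] = "rejected"
    path.write_text(json.dumps(record))
    return record

def step(agent_state: dict, tool: str, args: dict, run_tool) -> dict:
    """Gate: low-risk tools run immediately; risky ones pause for a human."""
    if tool in RISKY_TOOLS:
        return {"paused": True,
                "approval_id": request_approval(agent_state, tool, args)}
    return {"paused": False, "result": run_tool(tool, args)}
```

The point of the sketch is the checkpoint between "decide to act" and "act": the risky call never executes in the same breath as the decision to make it.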

LLM Guardrails in Production: What Actually Works

· 8 min read
Tian Pan
Software Engineer

Most teams ship their first LLM feature, get burned by a bad output in production, and then bolt on a guardrail as damage control. The result is a brittle system that blocks legitimate requests, slows down responses, and still fails on the edge cases that matter. Guardrails are worth getting right — but the naive approach will hurt you in ways you don't expect.

Here's what the tradeoffs actually look like, and how to build a guardrail layer that doesn't quietly destroy your product.
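
As a rough illustration of where those tradeoffs show up, here is a toy wrapper that runs a pre-check and a post-check around the model call; the check functions and the latency bookkeeping are placeholders, not a recommended library:

```python
import time

def check_input(prompt: str) -> bool:
    """Placeholder pre-check (e.g. a prompt-injection or policy classifier)."""
    return "ignore previous instructions" not in prompt.lower()

def check_output(text: str) -> bool:
    """Placeholder post-check (e.g. a PII or unsafe-content classifier)."""
    return "ssn:" not in text.lower()

def guarded_completion(prompt: str, call_llm) -> str:
    t0 = time.monotonic()
    input_ok = check_input(prompt)
    pre_latency = time.monotonic() - t0
    if not input_ok:
        # This branch is where legitimate requests die; log refusals so the
        # false-positive rate is measurable, not anecdotal.
        return "Request blocked by policy."

    text = call_llm(prompt)

    t1 = time.monotonic()
    output_ok = check_output(text)
    post_latency = time.monotonic() - t1
    if not output_ok:
        return "Response withheld by policy."

    # pre_latency + post_latency is the overhead the guardrail layer adds to
    # every request; in a real system, emit both to your metrics pipeline.
    return text
```

Even a sketch this small exposes the two costs that bite in production: every check adds latency to every request, and every blocking branch is a place where a good request can be silently refused.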