
9 posts tagged with "human-in-the-loop"


Defining Escalation Criteria That Actually Work in Human-AI Teams

· 10 min read
Tian Pan
Software Engineer

Most AI teams can tell you their containment rate — the percentage of interactions the AI handled without routing to a human. Far fewer can tell you whether that number is the right one.

An escalation criteria document is the single most important design artifact in an AI-augmented team, and most teams don't have one. They have a threshold buried in a YAML file and an implicit assumption that the AI knows when it's stuck. That assumption is wrong in both directions: too high a threshold and humans spend their days redoing AI work; too low and users absorb AI errors without recourse. Both failures are invisible until they compound.
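
To make the contrast concrete, here is a minimal sketch of what explicit, reviewable escalation criteria might look like as code rather than a lone number in a config file. The field names, default values, and intent labels are hypothetical illustrations, not drawn from the post.

```python
from dataclasses import dataclass, field


@dataclass
class EscalationCriteria:
    """Explicit, reviewable escalation rules instead of one buried threshold."""
    confidence_floor: float = 0.75       # below this, route to a human
    max_failed_tool_calls: int = 2       # repeated tool failures signal "stuck"
    sensitive_intents: set[str] = field(
        default_factory=lambda: {"refund_over_limit", "account_closure"}
    )

    def should_escalate(self, confidence: float, failed_tool_calls: int,
                        intent: str) -> tuple[bool, str]:
        # Each rule returns a named reason, so escalations are auditable.
        if intent in self.sensitive_intents:
            return True, f"sensitive intent: {intent}"
        if failed_tool_calls >= self.max_failed_tool_calls:
            return True, "agent appears stuck (repeated tool failures)"
        if confidence < self.confidence_floor:
            return True, f"confidence {confidence:.2f} below {self.confidence_floor}"
        return False, "contained"
```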

Where to Put the Human: Placement Theory for AI Approval Gates

· 12 min read
Tian Pan
Software Engineer

Most teams add human-in-the-loop review as an afterthought: the agent finishes its chain of work, the result lands in a review queue, and a human clicks approve or reject. This feels like safety. It is mostly theater.

By the time a multi-step agent reaches end-of-chain review, it has already sent the API requests, mutated the database rows, drafted the customer email, and scheduled the follow-up. The "review" is approving a done deal. Declining it means explaining to the agent — and often to the user — why nothing that happened for the past 10 minutes will stick.

The damage from misplaced approval gates isn't always dramatic. Often it's subtler: reviewers who approve everything because the real decisions have already been made, engineers who add more checkpoints after incidents and watch trust in the product crater, and organizations that oscillate between "too much friction" and "not enough oversight" without ever solving the underlying placement problem.
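
A rough sketch of the placement idea, assuming a simple step-based plan where each step is flagged as reversible or not; the `irreversible` flag and the function names are illustrative only, not the post's design.

```python
from typing import Callable

# Place the approval gate *before* irreversible steps, not after the whole
# chain has already run. Reversible work (reading, drafting, computing)
# proceeds freely; the pause happens where a "no" still changes the outcome.
def run_with_gate(plan: list[dict], execute: Callable[[dict], None],
                  request_approval: Callable[[dict], bool]) -> None:
    for step in plan:
        if step.get("irreversible"):
            if not request_approval(step):
                return  # nothing to unwind: the side effect never happened
        execute(step)
```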

The Warm Handoff Pattern: Designing Fluid Control Transfer Between Agents and Humans

· 12 min read
Tian Pan
Software Engineer

Most agent escalation flows are cold transfers dressed up with good intentions. The agent decides it cannot proceed, drops an "I'm connecting you to a human" message, and routes the session to an operator who has no idea what the agent tried, what failed, or what the user actually needs. The human starts from scratch. The user repeats themselves. Trust erodes — not because the AI was wrong, but because nobody designed the boundary.

The warm handoff pattern is an architectural discipline for the exact moment an agent yields control. It treats that boundary as a first-class system concern rather than an afterthought. Done well, the receiving party — human or agent — steps into a briefed, structured situation. Done poorly, that boundary is where user trust goes to die.
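
One way to picture the boundary as a first-class concern is a transfer that briefs the receiver and waits for an acknowledgment before the agent lets go. The `Briefing` fields and the `receiver` interface below are assumptions for illustration, not the post's API.

```python
from dataclasses import dataclass


@dataclass
class Briefing:
    user_goal: str          # what the user is actually trying to accomplish
    attempted: list[str]    # what the agent already tried
    failure: str            # what specifically stopped it


def warm_transfer(session_id: str, briefing: Briefing, receiver) -> None:
    """Yield control only after the receiver has the briefing in hand."""
    receiver.notify(session_id, briefing)   # brief first
    receiver.wait_for_ack(timeout_s=30)     # confirm someone actually picked it up
    receiver.take_ownership(session_id)     # only then does the agent release the session
```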

The HITL Rubber Stamp Problem: Why Human-in-the-Loop Often Means Neither

· 9 min read
Tian Pan
Software Engineer

There's a paradox sitting at the center of responsible AI deployment: the more you try to involve humans in reviewing AI decisions, the less meaningful that review becomes.

A 2024 Harvard Business School study gave 228 evaluators AI recommendations with clear explanations of the AI's reasoning. Human reviewers were 19 percentage points more likely to align with AI recommendations than the control group. When the AI also provided narrative rationales — when it explained why it made a decision — deference increased by another 5 points. Better explainability produced worse oversight. The human in the loop had become a rubber stamp on a form.

The Warm Standby Problem: Why Your AI Override Button Isn't a Safety Net

· 11 min read
Tian Pan
Software Engineer

Most teams building AI agents are designing for success. They instrument success rates, celebrate when the agent handles 90% of tickets autonomously, and put a "click here to override" button in the corner of the UI for the remaining 10%. Then they move on.

The button is not a safety net. It is a liability dressed as a feature.

The failure mode is not the agent breaking. It's the human nominally in charge not being able to take over when it does. The AI absorbed the task gradually — one workflow at a time, one edge case at a time — until the operator who used to handle it has not touched it in six months, has lost the context, and is being handed a live situation they are no longer equipped to manage. This is the warm standby problem, and it compounds silently until an incident forces it into view.

The Autonomy Dial: Five Levels for Shipping AI Features Without Betting the Company

· 11 min read
Tian Pan
Software Engineer

Most teams shipping AI features make the same mistake: they jump straight from "prototype that impressed the VP" to "fully autonomous in production." Then something goes wrong — a bad recommendation, an incorrect auto-response, a financial transaction that should never have been approved — and the entire feature gets pulled. Not dialed back. Pulled.

The problem is not that AI autonomy is dangerous. The problem is that most teams treat autonomy as a binary switch — off or on — when it should be a dial with distinct, instrumented positions between those two extremes.
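
As an illustration of the dial metaphor, here is a sketch with five hypothetical positions; the post's own level names and definitions may differ, and each position would need its own instrumentation so a failure means dialing back rather than pulling the feature.

```python
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """Hypothetical dial positions, ordered from least to most autonomous."""
    SUGGEST = 1         # AI drafts, human acts
    APPROVE_EACH = 2    # AI acts only after per-action approval
    APPROVE_RISKY = 3   # AI acts freely except on flagged actions
    ACT_AND_REPORT = 4  # AI acts, human reviews asynchronously
    AUTONOMOUS = 5      # AI acts with monitoring only
```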

The Escalation Protocol: Building Agent-to-Human Handoffs That Don't Lose State

· 11 min read
Tian Pan
Software Engineer

When a support agent receives an AI-to-human handoff with a raw chat transcript, the average time to prepare for resolution is 15 minutes. The agent has to find the customer in the CRM, look up the relevant order, calculate purchase dates, and reconstruct what the AI already determined. When the same handoff arrives as a structured payload — action history, retrieved data, the exact ambiguity that triggered escalation — that prep time drops to 30 seconds.

That 97% reduction in manual work isn't an edge case. It's the difference between escalation protocols that actually support human oversight and ones that just dump context onto whoever happens to be on shift.
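
A minimal sketch of what such a structured payload might contain, using the elements the post names (action history, retrieved data, the exact ambiguity that triggered escalation) plus hypothetical identifier fields for the CRM and order lookups.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class EscalationPayload:
    """Structured handoff instead of a raw transcript; field names are illustrative."""
    customer_id: str                # no CRM spelunking for the human
    order_id: str | None            # the order the AI already looked up
    action_history: list[str]       # what the agent did, in order
    retrieved_data: dict[str, Any]  # facts the agent already pulled
    escalation_reason: str          # the exact ambiguity that triggered the handoff
    transcript_url: str = ""        # raw transcript kept as a reference, not the interface
```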

Designing Approval Gates for Autonomous AI Agents

· 10 min read
Tian Pan
Software Engineer

Most agent failures aren't explosions. They're quiet. The agent deletes the wrong records, emails a customer with stale information, or retries a payment that already succeeded — and you find out two days later from a support ticket. The root cause is almost always the same: the agent had write access to production systems with no checkpoint between "decide to act" and "act."

Approval gates are the engineering answer to this. Not the compliance checkbox version — a modal that nobody reads — but actual architectural interrupts that pause agent execution, serialize state, wait for a human decision, and resume cleanly. Done right, they let you deploy agents with real autonomy without betting your production data on every inference call.
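
A stripped-down sketch of that pause/serialize/resume cycle, assuming a file-backed checkpoint store; the function names and storage format are illustrative, not any particular framework's API.

```python
import json
from pathlib import Path
from typing import Callable


def request_approval(run_id: str, pending_action: dict, state: dict, store: Path) -> None:
    """Pause: persist everything needed to resume before any side effect runs."""
    store.joinpath(f"{run_id}.json").write_text(json.dumps({
        "pending_action": pending_action,  # what the agent wants to do
        "agent_state": state,              # everything needed to pick up where it left off
        "status": "awaiting_approval",
    }))


def resume(run_id: str, approved: bool, store: Path,
           execute: Callable[[dict, dict], None]) -> None:
    """Resume: reload the checkpoint and either perform the action or skip it cleanly."""
    checkpoint = json.loads(store.joinpath(f"{run_id}.json").read_text())
    if approved:
        execute(checkpoint["pending_action"], checkpoint["agent_state"])
    # Either way, execution continues from serialized state, not from scratch.
```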

Measuring AI Agent Autonomy in Production: What the Data Actually Shows

· 7 min read
Tian Pan
Software Engineer

Most teams building AI agents spend weeks on pre-deployment evals and almost nothing on measuring what their agents actually do in production. That's backwards. The metrics that matter — how long agents run unsupervised, how often they ask for help, how much risk they take on — only emerge at runtime, across thousands of real sessions. Without measuring these, you're flying blind.

A large-scale study of production agent behavior across thousands of deployments and software engineering sessions has surfaced some genuinely counterintuitive findings. The picture that emerges is not the one most builders expect.
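
As a rough sketch of how two of those runtime metrics might be computed from session logs, here is a minimal example assuming a hypothetical record shape (a list of events, each with a `type` field); the shape and metric names are assumptions for illustration.

```python
from statistics import mean


def autonomy_metrics(sessions: list[dict]) -> dict[str, float]:
    """Compute unsupervised run length and help-request rate from session logs."""
    unsupervised_runs: list[int] = []  # steps between human touchpoints
    help_requests = 0
    total_steps = 0
    for session in sessions:
        streak = 0
        for event in session["events"]:
            total_steps += 1
            if event["type"] == "ask_human":
                help_requests += 1
                unsupervised_runs.append(streak)
                streak = 0
            else:
                streak += 1
        unsupervised_runs.append(streak)  # close out the final run
    return {
        "mean_unsupervised_run": mean(unsupervised_runs) if unsupervised_runs else 0.0,
        "help_request_rate": help_requests / total_steps if total_steps else 0.0,
    }
```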