5 posts tagged with "agentic-ai"

The Co-Pilot Trap: Why Full Autopilot Ships Faster but Fails Harder

· 9 min read
Tian Pan
Software Engineer

There's a pattern in how AI features die in production: they start as co-pilots and get promoted to autopilots. The promotion happens for obvious reasons—cost reduction, scale, reduced headcount—and the reasoning sounds solid at demo time. Then the edge cases accumulate. A user-facing recommendation becomes a user-facing decision. A suggestion becomes an action. And when the first systematic failure lands, the engineering team discovers that the error tolerance assumptions baked into the original design were never re-evaluated.

This is the co-pilot trap: building an AI feature for one tier of the automation spectrum, then promoting it to a higher tier without rebuilding the failure model that tier requires.

Trust Ceilings: The Autonomy Variable Your Product Team Can't See

· 10 min read
Tian Pan
Software Engineer

Every agentic feature has a maximum autonomy level above which users start checking work, intervening, or abandoning the feature entirely. That maximum is not a property of your model. It is a property of your users, your domain, and the cost of being wrong, and it does not move because a launch deck says it should. Most teams discover their ceiling the hard way: a feature ships designed for full autonomy, adoption stalls at "agent suggests, human approves," the metrics blame the model, and the next quarter is spent tuning a knob that was never the bottleneck.

The shape of the ceiling is consistent enough across products that it deserves a name. Anthropic's own usage data on Claude Code shows new users running in full auto-approve mode about 20% of the time, climbing past 40% only after roughly 750 sessions. PwC's 2025 survey of 300 senior executives found that 79% of companies are using AI agents, but most production deployments operate at "collaborator" or "consultant" levels — the model proposes, the human disposes — not at the fully autonomous tier the marketing implied. The story underneath those numbers is not that users are timid. It is that trust is calibrated to the cost of a recoverable mistake, and your product almost certainly does not let users see, undo, or bound that cost the way they need to.

Decision Provenance in Agentic Systems: Audit Trails That Actually Work

· 13 min read
Tian Pan
Software Engineer

An agent running in your production system deletes 10,000 database records. The deletion matches valid business logic — the records were flagged correctly. But three months later, a regulator asks a simple question: who authorized this, and on what basis did the agent decide? You open your logs. You find the SQL statement. You find the timestamp. You find nothing else.

This is the decision provenance problem. You can prove that your agent acted; you cannot prove why, or whether that action was ever sanctioned by a human who understood what they were approving. With autonomous agents now executing workflows that span hours, dozens of tool calls, and decisions with real-world consequences, the gap between "we have logs" and "we have accountability" has become operationally dangerous.
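To make the gap concrete, compare what a conventional log captures with what a provenance record would need to capture. Here is a minimal sketch of such a record; the `DecisionRecord` type and its field names are illustrative, not a schema from the post:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Illustrative provenance record: not just what the agent did,
    but why, and under whose authority. All fields are hypothetical."""
    action: str                 # e.g. the SQL statement that ran
    timestamp: datetime
    agent_id: str               # which agent instance acted
    goal: str                   # the task the agent was pursuing
    inputs: list[str]           # tool-call outputs the decision relied on
    rationale: str              # the model's stated reason for acting
    policy_id: str              # the rule that made the action permissible
    approved_by: str | None     # human approver, if a gate required one
    approval_scope: str | None  # what that human believed they approved

# A traditional log stops at the first two fields. The regulator's
# question — who authorized this, and on what basis? — is answered
# by the rest.
record = DecisionRecord(
    action="DELETE FROM accounts WHERE flagged = true",
    timestamp=datetime.now(timezone.utc),
    agent_id="cleanup-agent-7",
    goal="purge accounts flagged by the retention policy",
    inputs=["flag_report_q1"],
    rationale="10,000 rows matched retention rule R-12",
    policy_id="R-12",
    approved_by="jdoe",
    approval_scope="bulk deletion of retention-flagged accounts",
)
```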

Designing Approval Gates for Autonomous AI Agents

· 10 min read
Tian Pan
Software Engineer

Most agent failures aren't explosions. They're quiet. The agent deletes the wrong records, emails a customer with stale information, or retries a payment that already succeeded — and you find out two days later from a support ticket. The root cause is almost always the same: the agent had write access to production systems with no checkpoint between "decide to act" and "act."

Approval gates are the engineering answer to this. Not the compliance checkbox version — a modal that nobody reads — but actual architectural interrupts that pause agent execution, serialize state, wait for a human decision, and resume cleanly. Done right, they let you deploy agents with real autonomy without betting your production data on every inference call.
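As a sketch of that "architectural interrupt" shape — pause, serialize, wait, resume — consider the following. The class, method names, and in-memory store are illustrative assumptions, not an API from the post:

```python
import json
import uuid

class ApprovalGate:
    """Minimal sketch of an approval gate. Storage and notification
    backends are stubbed with a plain dict; names are hypothetical."""

    def __init__(self, store: dict):
        self.store = store  # stand-in for any durable key-value store

    def request(self, action: str, payload: dict) -> str:
        """Pause point: persist the proposed action, return a ticket ID.
        The agent suspends here instead of acting directly."""
        ticket = str(uuid.uuid4())
        self.store[ticket] = json.dumps(
            {"action": action, "payload": payload, "status": "pending"}
        )
        return ticket

    def decide(self, ticket: str, approved: bool, reviewer: str) -> None:
        """Human decision: recorded durably, with the reviewer's identity."""
        record = json.loads(self.store[ticket])
        record["status"] = "approved" if approved else "rejected"
        record["reviewer"] = reviewer
        self.store[ticket] = json.dumps(record)

    def resume(self, ticket: str) -> dict | None:
        """Clean resume: hand back the payload only if a human approved."""
        record = json.loads(self.store[ticket])
        return record["payload"] if record["status"] == "approved" else None

# Usage: the agent proposes and suspends; a separate process resumes it.
gate = ApprovalGate(store={})
t = gate.request("delete_records", {"table": "accounts", "where": "flagged"})
gate.decide(t, approved=True, reviewer="jdoe")
work = gate.resume(t)  # the payload if approved, None if rejected
```

The design point is that the gate is a first-class suspension of execution, not UI chrome: state survives a process restart, and the reviewer's identity travels with the decision.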

Governing Agentic AI Systems: What Changes When Your AI Can Act

· 9 min read
Tian Pan
Software Engineer

For most of AI's history, the governance problem was fundamentally about outputs: a model says something wrong, offensive, or confidential. That's bad, but it's contained. The blast radius is limited to whoever reads the output.

Agentic AI breaks this assumption entirely. When an agent can call APIs, write to databases, send emails, and spawn sub-agents — the question is no longer just "what did it say?" but "what did it do, to what systems, on whose behalf, and can we undo it?" Nearly 70% of enterprises already run agents in production, but most of those agents operate outside traditional identity and access management controls, making them invisible, overprivileged, and unaudited.
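One way to read that last claim: the fix starts with treating each agent as a first-class IAM principal with an explicit, deny-by-default permission scope, rather than a process borrowing a human's credentials. A hedged sketch under that assumption (the types and grants here are hypothetical, not from the post):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Illustrative: an agent with its own identity, an accountable
    human owner, and an enumerated set of permitted actions."""
    agent_id: str
    owner: str
    allowed_actions: frozenset[str]

def authorize(identity: AgentIdentity, action: str) -> bool:
    """Deny by default: only explicitly granted actions pass."""
    return action in identity.allowed_actions

mailer = AgentIdentity(
    agent_id="digest-agent-3",
    owner="jdoe",
    allowed_actions=frozenset({"send_email"}),
)
assert authorize(mailer, "send_email")
assert not authorize(mailer, "delete_records")  # overprivilege caught here
```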