173 posts tagged with "ai-agents"

Ambient AI Design: When the Chat Interface Is the Wrong Abstraction

· 8 min read
Tian Pan
Software Engineer

Most engineering teams default to building AI features as chat interfaces. A user types something; the model responds. The pattern feels natural because it maps to human conversation, and the tooling makes it easy. But when you watch those chat-based AI features in production, you often see the same dysfunction: the UI sits idle, waiting for a user who is too busy, too distracted, or simply unaware that they should be asking something.

Chat is a pull model. The user initiates. The AI reacts. For a meaningful subset of the valuable AI work in any product—monitoring, anomaly detection, workflow automation, proactive notification—pull is the wrong shape. The work needs to happen whether or not the user remembered to open the chat window.
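
To make the push shape concrete, here is a minimal sketch of an event-driven loop. Everything in it is illustrative: `run_agent`, `notify`, and the trigger definitions are stand-ins for whatever model call, alerting channel, and event sources your product actually has.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-ins for your model call and notification channel.
def run_agent(prompt: str) -> str:
    return f"analysis: {prompt}"

def notify(channel: str, message: str) -> None:
    print(f"[{channel}] {message}")

@dataclass
class AmbientTrigger:
    """An event source the agent watches, instead of a user it waits for."""
    name: str
    fired: Callable[[], bool]   # e.g. "error rate crossed a threshold"
    prompt: str
    channel: str

def poll_once(triggers: list[AmbientTrigger]) -> None:
    """One pass of the push loop: the agent initiates, the user is informed."""
    for t in triggers:
        if t.fired():
            notify(t.channel, run_agent(t.prompt))

triggers = [AmbientTrigger(
    name="checkout-errors",
    fired=lambda: True,   # pretend the anomaly detector tripped
    prompt="summarize the spike in checkout errors since 14:00",
    channel="#oncall",
)]
poll_once(triggers)   # in production, drive this from a scheduler or event bus
```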

Silent Async Agent Failures: Why Your AI Jobs Die Without Anyone Noticing

· 8 min read
Tian Pan
Software Engineer

Async AI jobs have a problem that traditional background workers don't: they fail silently and confidently. A document processing agent returns HTTP 200, logs a well-formatted result, and moves on — while the actual output is subtly wrong, partially complete, or based on a hallucinated fact three steps back. Your dashboards stay green. Your on-call engineer sleeps through it. Your customers eventually notice.

This is not an edge case. It's the default behavior of async AI systems that haven't been deliberately designed for observability. The tools that keep background job queues reliable in conventional distributed systems — dead letter queues, idempotency keys, saga logs — also work for AI agents. But the failure modes are different enough that they require some translation.
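
A minimal sketch of that translation, combining idempotency keys and a dead letter queue with the AI-specific twist: the DLQ is fed by a semantic output check, not just by raised exceptions. The `validate_output` body here is a hypothetical placeholder for whatever verification your domain needs.

```python
import hashlib
import json
from collections import deque
from typing import Callable

dead_letters: deque = deque()   # semantic failures land here for review
seen_keys: set[str] = set()     # idempotency: duplicate deliveries are no-ops

def idempotency_key(payload: dict) -> str:
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def validate_output(payload: dict, output: str) -> bool:
    # Hypothetical semantic check, the piece classic job queues never needed:
    # schema validation, citation lookups, or a second-model verification pass.
    return len(output.strip()) > 0 and "unknown" not in output.lower()

def process(payload: dict, agent_call: Callable[[dict], str]) -> str | None:
    key = idempotency_key(payload)
    if key in seen_keys:
        return None   # already handled; safe under at-least-once delivery
    output = agent_call(payload)   # may "succeed" while being subtly wrong
    if not validate_output(payload, output):
        # Dead-letter on *semantic* failure, not only on exceptions.
        dead_letters.append({"key": key, "payload": payload, "output": output})
        return None
    seen_keys.add(key)
    return output
```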

The CAP Theorem for AI Agents: Why Your Agent Fails Completely When It Should Degrade Gracefully

· 9 min read
Tian Pan
Software Engineer

Your AI agent works perfectly until it doesn't. One tool goes down — maybe the search API is rate-limited, maybe the database is slow, maybe the code execution sandbox times out — and the entire agent collapses. Not a partial answer, not a degraded response. A complete failure. A blank screen or a hallucinated mess.

This is not a bug. It is a design choice, and almost nobody made it deliberately. The agent architectures we are building today implicitly choose "fail completely" because nobody designed the partial-availability path. If you have built distributed systems before, this pattern should feel painfully familiar. It is the CAP theorem, showing up in a new disguise.
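
What a deliberately designed partial-availability path might look like, in sketch form. The `search_api` failure is simulated, and the degraded-answer wording is illustrative; the point is that degradation is a first-class, labeled outcome rather than a crash or a silent guess.

```python
from dataclasses import dataclass

@dataclass
class AgentAnswer:
    text: str
    degraded: bool        # surfaced to the caller, never hidden
    missing: list         # which capabilities were unavailable

def search_api(query: str) -> str:
    raise TimeoutError("rate limited")   # simulate the Tuesday outage

def answer(query: str) -> AgentAnswer:
    missing = []
    try:
        evidence = search_api(query)
    except Exception:
        evidence = None
        missing.append("search")
    if evidence is None:
        # The degraded path almost nobody designs: answer from what the
        # model already knows, and say so explicitly.
        return AgentAnswer(
            text=f"(search unavailable; answering from prior knowledge) {query}",
            degraded=True,
            missing=missing,
        )
    return AgentAnswer(text=f"{query}: {evidence}", degraded=False, missing=[])

print(answer("latest pricing for competitor X"))
```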

Cascading Context Corruption: Why One Wrong Fact Derails Your Entire Agent Run

· 8 min read
Tian Pan
Software Engineer

Your agent completes a 25-step research task. The final report looks polished, citations check out, and the reasoning chain appears coherent. Except the agent hallucinated a company's founding year in step 3, and every subsequent inference — market timing analysis, competitive positioning, growth trajectory — built on that wrong date. The output is confidently, systematically wrong, and nothing in your pipeline caught it.

This is cascading context corruption: a single incorrect intermediate conclusion that propagates through subsequent reasoning steps and tool calls, compounding into system-wide failure. It is the most dangerous failure mode in long-running agents — because it looks like success.
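
One way to interrupt the cascade is a verification checkpoint between steps, sketched below. The `verify` body is a hypothetical stand-in for an independent check such as a fresh retrieval, a database lookup, or a cheaper model asked only whether the claim is supported.

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    claim: str
    step: int            # where in the run this conclusion was produced
    sources: list[str] = field(default_factory=list)

def verify(fact: Fact) -> bool:
    # Hypothetical independent check; here, "has at least one source".
    return len(fact.sources) > 0

def checkpoint(facts: list[Fact]) -> tuple[list[Fact], list[Fact]]:
    """Gate between steps: quarantine unverified conclusions so step N+1
    never builds on a step-3 hallucination."""
    trusted = [f for f in facts if verify(f)]
    quarantined = [f for f in facts if not verify(f)]
    return trusted, quarantined

facts = [
    Fact("Acme was founded in 2009", step=3, sources=[]),               # hallucinated
    Fact("Acme raised a Series B in 2021", step=5, sources=["filing"]),
]
trusted, quarantined = checkpoint(facts)
print([f.claim for f in quarantined])   # surfaced for retry or review, not dropped
```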

The Institutional Knowledge Drain: How AI Agents Absorb Decisions Without Transferring Understanding

· 10 min read
Tian Pan
Software Engineer

Three months after a fintech team rolled out an AI coding agent to handle their routine backend tasks, a senior engineer left for another company. When the team tried to reconstruct why certain authentication decisions had been made six weeks earlier, nobody could. The PR descriptions said "implemented as discussed." The commit messages said "per requirements." The AI agent had made the choices, the code worked, and the reasoning had evaporated.

This is not a documentation failure. It is what happens when the channel through which understanding normally flows — the back-and-forth between engineers, the friction of explanation, the pressure of justifying a decision to another human — is replaced by a system that optimizes for output rather than comprehension.
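
One mitigation is to make rationale a required artifact rather than a hoped-for habit: the agent must emit a structured decision record before a change lands, and empty placeholders are rejected. The sketch below is illustrative; the field names, thresholds, and the example record are all hypothetical.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Lightweight decision record the agent must fill in before merging."""
    change_ref: str            # the PR or commit the record explains
    decision: str              # what was chosen
    alternatives: list[str]    # what was considered and rejected
    rationale: str             # why, in terms a future engineer can audit
    timestamp: str = ""

def require_record(record: DecisionRecord) -> str:
    # Reject exactly the empty placeholders described above.
    for name in ("decision", "rationale"):
        value = getattr(record, name).strip()
        if len(value) < 20 or value.lower() in {"as discussed", "per requirements"}:
            raise ValueError(f"'{name}' is not an auditable explanation")
    record.timestamp = datetime.now(timezone.utc).isoformat()
    return json.dumps(asdict(record), indent=2)   # commit this next to the code

record = DecisionRecord(
    change_ref="PR #482",   # illustrative reference
    decision="JWT access tokens with 15-minute expiry, rotated refresh tokens",
    alternatives=["opaque tokens in Redis", "long-lived JWTs"],
    rationale="Rotation limits the blast radius of a leaked token; a Redis "
              "lookup per request was rejected for latency on the auth path.",
)
print(require_record(record))
```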

MCP Is the New Microservices: The AI Tool Ecosystem Is Repeating Distributed Systems Mistakes

· 8 min read
Tian Pan
Software Engineer

If you lived through the microservices explosion of 2015–2018, the current state of MCP should feel uncomfortably familiar. A genuinely useful protocol appears. It's easy to spin up. Every team spins one up. Nobody tracks what's running, who owns it, or how it's secured. Within eighteen months, you're staring at a dependency graph that engineers privately call "the Death Star."

The Model Context Protocol is following the same trajectory, at roughly three times the speed. Unofficial registries already index over 16,000 MCP servers. GitHub hosts north of 20,000 public repositories implementing them. And Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027 — not because the technology doesn't work, but because organizations are automating broken processes. MCP sprawl is a symptom of exactly that problem.

Phantom Tool Calls: When AI Agents Invoke Tools That Don't Exist

· 8 min read
Tian Pan
Software Engineer

Your agent passes every unit test, handles the happy path beautifully, and then one Tuesday afternoon it tries to call get_user_preferences_v2 — a function that has never existed in your codebase. The call looks syntactically perfect. The parameters are reasonable. The only problem: your agent fabricated the entire thing.

This is the phantom tool call — a hallucination that doesn't manifest as wrong text but as a wrong action. Unlike a hallucinated fact that a human might catch during review, a phantom tool call hits your runtime, throws a cryptic ToolNotFoundError, and derails a multi-step workflow that was otherwise running fine.
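
A cheap defense is a validation gate in front of dispatch: check the requested name against the tool registry, and on a miss, return a corrective message to the model instead of letting the exception kill the run. The registry contents below are hypothetical; the suggestion logic uses the standard library's `difflib`.

```python
import difflib

# Hypothetical tool registry; in practice, whatever your framework exposes.
TOOL_REGISTRY = {
    "get_user_preferences": lambda user_id: {"theme": "dark"},
    "update_profile": lambda user_id, **fields: True,
}

def dispatch(tool_name: str, **kwargs) -> dict:
    fn = TOOL_REGISTRY.get(tool_name)
    if fn is None:
        # Don't crash the workflow: tell the model what actually exists,
        # nudging it toward the nearest real tool.
        close = difflib.get_close_matches(tool_name, TOOL_REGISTRY, n=1)
        hint = f" Did you mean '{close[0]}'?" if close else ""
        return {"error": f"Unknown tool '{tool_name}'. "
                         f"Available: {sorted(TOOL_REGISTRY)}.{hint}"}
    return {"result": fn(**kwargs)}

# The phantom call from above gets a recoverable answer, not a crash:
print(dispatch("get_user_preferences_v2", user_id="u123"))
```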

When Your Database Migration Breaks Your AI Agent's World Model

· 9 min read
Tian Pan
Software Engineer

Your team ships a routine database migration on Tuesday — renaming last_login_date to last_activity_ts and expanding its semantics to include API calls. No service breaks. Tests pass. Dashboards update. But your AI agent, the one answering customer questions about user engagement, silently starts generating wrong answers. No error, no alert, no stack trace. It just confidently reasons over a world that no longer exists.

This is the schema migration problem that almost nobody in AI engineering has mapped. Your agent builds an implicit model of your data from tool descriptions, few-shot examples, and retrieval context. When the underlying schema changes, that model becomes a lie — and the agent has no mechanism to detect the contradiction.
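
One way to give the agent that mechanism is a contract test: fingerprint the schema that the tool descriptions and examples were authored against, and fail loudly when the live database diverges. The table and column names below are the hypothetical ones from the scenario above.

```python
import hashlib

# Schema snapshot the agent's tool descriptions were written against.
DESCRIBED_COLUMNS = {"users": ["id", "email", "last_login_date"]}

def fingerprint(schema: dict[str, list[str]]) -> str:
    canonical = repr(sorted((t, tuple(sorted(c))) for t, c in schema.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

EXPECTED = fingerprint(DESCRIBED_COLUMNS)

def check_world_model(live_schema: dict[str, list[str]]) -> None:
    """Run at deploy time and on a schedule: the agent's prompt and the
    database must agree, or the agent reasons over a world that is gone."""
    if fingerprint(live_schema) != EXPECTED:
        raise RuntimeError(
            "Schema drift detected: regenerate tool descriptions and "
            "few-shot examples before serving answers."
        )

# Tuesday's migration renamed the column, so the check fires:
try:
    check_world_model({"users": ["id", "email", "last_activity_ts"]})
except RuntimeError as e:
    print(e)
```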

The Anthropomorphism Tax: Why Treating Your Agent Like a Colleague Breaks Production Systems

· 10 min read
Tian Pan
Software Engineer

An engineering team builds an agent to process customer requests. It works beautifully in demos. They deploy it. Three weeks later, it has quietly been telling users incorrect information with full confidence, skipping steps when context gets long, and occasionally looping forever on ambiguous inputs. The postmortem reveals the team never built retry logic, never validated outputs, and never defined what the agent should do when it was uncertain. When asked why, the answer is revealing: "We figured it would handle those edge cases."

That phrase — "we figured it would handle those edge cases" — is the anthropomorphism tax made explicit. The team designed the system the way you'd manage a junior developer: brief them, trust their judgment, correct when they raise a hand. LLM agents don't raise a hand. They generate the next token.
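
Paying the tax down means building the three things the team skipped: output validation, bounded retries, and an explicit behavior for uncertainty. A minimal sketch, where `call_agent`, `is_valid`, and the escalation path are all hypothetical placeholders for your model call, your grounding check, and your human queue.

```python
def call_agent(request: str) -> str:
    # Hypothetical model call; returns hedged text to exercise the fallback.
    return "I think the refund policy might be 30 days"

def is_valid(output: str) -> bool:
    # Hypothetical schema/grounding check; a model's confident tone is not one.
    return "might" not in output.lower()

def escalate_to_human(request: str) -> str:
    return f"queued for human review: {request!r}"

def handle(request: str, max_retries: int = 2) -> str:
    for attempt in range(max_retries + 1):
        output = call_agent(request)
        if is_valid(output):
            return output
    # The explicit "uncertain" behavior a junior developer would improvise
    # and an LLM never will: escalate instead of guessing.
    return escalate_to_human(request)

print(handle("What is the refund window?"))
```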

The Context Window Cliff: What Actually Happens When Your Agent Hits the Limit Mid-Task

· 9 min read
Tian Pan
Software Engineer

Your agent completes steps one through six flawlessly. Step seven contradicts step two. Step eight hallucinates a tool that doesn't exist. Step nine confidently submits garbage. Nothing crashed. No error was thrown. The agent simply forgot what it was doing — and kept going anyway.

This is the context window cliff: the moment an AI agent's accumulated context exceeds its effective reasoning capacity. It doesn't fail gracefully. It doesn't ask for help. It makes confidently wrong decisions based on partial information, and you won't know until the damage is done.
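
The standard countermeasure is a budget guard that compacts history well before the limit, instead of letting the model degrade at it. A sketch under stated assumptions: the window size is illustrative, the chars-divided-by-four estimate is a crude stand-in for your provider's real tokenizer, and `summarize` would usually be a cheaper model call.

```python
CONTEXT_LIMIT = 8_000    # model's window, in tokens (illustrative)
SAFETY_MARGIN = 0.75     # act well before the cliff, not at it

def estimate_tokens(text: str) -> int:
    return len(text) // 4   # rough heuristic; use the real tokenizer in prod

def summarize(messages: list[str]) -> str:
    # Hypothetical compaction step, typically a cheaper model call.
    return f"[summary of {len(messages)} earlier steps]"

def next_step(history: list[str], new_message: str) -> list[str]:
    """Gate every step: compact *before* the window overflows, so the agent
    never silently forgets step two while executing step seven."""
    projected = sum(estimate_tokens(m) for m in history) + estimate_tokens(new_message)
    if projected > CONTEXT_LIMIT * SAFETY_MARGIN:
        head, tail = history[:-3], history[-3:]   # keep recent turns verbatim
        history = [summarize(head)] + tail
    return history + [new_message]
```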

The Enterprise API Impedance Mismatch: Why Your AI Agent Wastes 60% of Its Tokens Before Doing Anything Useful

· 8 min read
Tian Pan
Software Engineer

Your AI agent is brilliant at reasoning, planning, and generating natural language. Then you point it at your enterprise SAP endpoint and it spends 4,000 tokens trying to understand a SOAP envelope. Welcome to the impedance mismatch — the quiet tax that turns every enterprise AI integration into a token bonfire.

The mismatch isn't just about XML versus JSON. It's a fundamental collision between how LLMs think — natural language, flat key-value structures, concise context — and how enterprise systems communicate: deeply nested schemas, implementation-specific naming, pagination cursors, and decades of accumulated protocol conventions. Unlike a human developer who reads WSDL documentation once and moves on, your agent re-parses that complexity on every single invocation.
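
The fix is an adapter layer that absorbs that complexity once, outside the prompt. A minimal sketch: flatten the nested payload (already parsed from XML to a dict; the envelope shape below is hypothetical) into the flat key-value form models handle cheaply, so the agent never re-parses the envelope.

```python
def flatten(obj, prefix: str = "", out: dict | None = None) -> dict:
    """Collapse nested enterprise payloads into flat key-value pairs
    before they ever reach the prompt."""
    out = {} if out is None else out
    if isinstance(obj, dict):
        for k, v in obj.items():
            flatten(v, f"{prefix}{k}.", out)
        return out
    if isinstance(obj, list):
        for i, v in enumerate(obj):
            flatten(v, f"{prefix}{i}.", out)
        return out
    out[prefix.rstrip(".")] = obj
    return out

# A hypothetical slice of a SOAP-style envelope, already parsed to a dict:
envelope = {"Envelope": {"Body": {"GetUserResponse": {
    "User": {"Profile": {"LastLoginTimestamp": "2025-01-07T10:00:00Z"}},
    "PagingCursor": "AAAB==",
}}}}

print(flatten(envelope))
# {'Envelope.Body.GetUserResponse.User.Profile.LastLoginTimestamp': '...',
#  'Envelope.Body.GetUserResponse.PagingCursor': 'AAAB=='}
```

In practice you would also rename keys into the vocabulary your prompts use, but even plain flattening removes most of the per-invocation parsing tax.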

The Warm Standby Problem: Why Your AI Override Button Isn't a Safety Net

· 11 min read
Tian Pan
Software Engineer

Most teams building AI agents are designing for success. They instrument success rates, celebrate when the agent handles 90% of tickets autonomously, and put a "click here to override" button in the corner of the UI for the remaining 10%. Then they move on.

The button is not a safety net. It is a liability dressed as a feature.

The failure mode is not the agent breaking. It's the human nominally in charge not being able to take over when it does. The AI absorbed the task gradually — one workflow at a time, one edge case at a time — until the operator who used to handle it has not touched it in six months, has lost the context, and is being handed a live situation they are no longer equipped to manage. This is the warm standby problem, and it compounds silently until an incident forces it into view.
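
One counter-pattern is to keep the standby warm on purpose: route a small, fixed slice of ordinary traffic to the human alongside the hard cases, so the override path is exercised before an incident demands it. A sketch; the 5% share and the ticket fields are illustrative tunables, not recommendations.

```python
import random

HUMAN_SHARE = 0.05       # deliberate-practice budget: a design choice, not waste
rng = random.Random(42)  # seeded so the sketch is reproducible

def route(ticket: dict) -> str:
    """Send a fixed slice of routine traffic to the human on purpose."""
    if ticket.get("high_risk") or ticket.get("novel"):
        return "human"                  # the hard cases still go to people...
    if rng.random() < HUMAN_SHARE:
        return "human"                  # ...and so does a sample of the easy ones
    return "agent"

counts = {"human": 0, "agent": 0}
for i in range(1000):
    counts[route({"id": i, "novel": i % 50 == 0})] += 1
print(counts)   # roughly 5% of routine work keeps the operator's context fresh
```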