45 posts tagged with "distributed-systems"

Write Amplification in Agentic Systems: Why One Tool Call Hits Six Databases

· 10 min read
Tian Pan
Software Engineer

When an agent decides to remember something — "the user prefers email over Slack" — it looks like a single write. In practice, it is six writes: a new embedding in the vector store, a row in the relational database, an entry in the session cache, a record in the event log, an entry in the audit trail, and an update to the context store. Each one happens because a different part of the system has a legitimate need for the data, and each one introduces a new failure surface.

This is write amplification at the infrastructure layer, and it's one of the quieter operational crises in production agent deployments. It does not cause dramatic failures. It causes partial failures: the user's preference is searchable semantically but the relational query returns stale data; the audit log shows an action that never fully completed; the cache is warm but the context store wasn't updated, so the next session starts without the learned pattern.
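
To make both problems concrete, here is a minimal, hypothetical sketch of the fan-out. The in-memory dicts and lists stand in for the six real stores; none of the names come from any particular framework.

```python
# Hypothetical sketch: one logical "remember" becomes six physical writes.
# In-memory dicts and lists stand in for the real stores.

vector_store: dict = {}
relational_db: dict = {}
session_cache: dict = {}
event_log: list = []
audit_trail: list = []
context_store: dict = {}

def remember_preference(user_id: str, fact: str, embedding: list) -> None:
    """One logical write, six physical writes, six failure surfaces."""
    vector_store[user_id] = embedding                                  # semantic search
    relational_db[(user_id, "preference")] = fact                      # canonical record
    session_cache[f"pref:{user_id}"] = fact                            # warm reads this session
    event_log.append({"type": "preference_set", "user": user_id})      # replay / analytics
    audit_trail.append({"actor": "agent", "action": "write_preference", "user": user_id})  # compliance
    context_store[user_id] = {"preferred_channel": fact}               # seeds the next session
    # A crash between any two lines above leaves the stores disagreeing,
    # and nothing in this function will ever notice.

remember_preference("u-42", "prefers email over Slack", [0.12, 0.98, 0.33])
```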

Understanding why this happens — and what to do about it — requires borrowing from database internals rather than the agent framework documentation.

Silent Async Agent Failures: Why Your AI Jobs Die Without Anyone Noticing

· 9 min read
Tian Pan
Software Engineer

Async AI jobs have a problem that traditional background workers don't: they fail silently and confidently. A document processing agent returns HTTP 200, logs a well-formatted result, and moves on — while the actual output is subtly wrong, partially complete, or based on a hallucinated fact three steps back. Your dashboards stay green. Your on-call engineer sleeps through it. Your customers eventually notice.

This is not an edge case. It's the default behavior of async AI systems that haven't been deliberately designed for observability. The tools that keep background job queues reliable in conventional distributed systems — dead letter queues, idempotency keys, saga logs — also work for AI agents. But the failure modes are different enough that they require some translation.
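
As a rough illustration of that translation (the `run_agent` and `validate` callables below are placeholders, not any specific library's API), an idempotency key plus a dead letter queue for an async AI job might look like this:

```python
import hashlib
import json

processed: dict = {}     # idempotency store (a table or Redis in a real system)
dead_letter: list = []   # dead letter queue stand-in

def idempotency_key(job: dict) -> str:
    """Stable key derived from the payload, so a retried job is not processed twice."""
    return hashlib.sha256(json.dumps(job, sort_keys=True).encode()).hexdigest()

def handle(job: dict, run_agent, validate) -> dict:
    key = idempotency_key(job)
    if key in processed:                 # a retry of a job we already finished
        return processed[key]

    result = run_agent(job)              # the agent / LLM workflow
    if not validate(result):             # the step async AI pipelines usually skip
        dead_letter.append({"job": job, "result": result})   # park it for a human
        raise ValueError("output failed validation; routed to DLQ")

    processed[key] = result              # success is recorded under the key
    return result
```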

Zero-Downtime AI Deployments: It's a Distributed Systems Problem

· 10 min read
Tian Pan
Software Engineer

In April 2025, OpenAI shipped a system prompt update to GPT-4o. Within hours, ChatGPT's roughly 180 million users were talking to a model that had turned obsequiously flattering. The failure wasn't caught by monitoring. It was caught by Twitter. Rollback took three days.

That incident revealed something the AI industry had been quietly avoiding: prompt changes are production deployments. And most teams treat them like config file edits.

The core problem with AI deployments is that you're not deploying one thing — you're deploying four: model weights, prompt text, tool schemas, and the context structure they all assume. Each can drift independently. Each can be partially rolled out. And unlike a broken API endpoint, AI failures are often probabilistic, gradual, and invisible until they've already affected a large fraction of your traffic.

This is the distributed systems consistency problem, wearing an AI hat.
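
A minimal sketch of what "deploy the four artifacts as one unit" could look like; the field names and the pinned model string are illustrative, not prescriptive:

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class AgentRelease:
    """The four artifacts that drift independently, pinned as a single deployable unit."""
    model: str               # an exact model snapshot, never a floating alias
    prompt_sha256: str       # content hash of the system prompt text
    tool_schema_sha256: str  # content hash of the tool / function schemas
    context_version: str     # version of the context structure they all assume

    def release_id(self) -> str:
        """One ID to canary, roll out, and roll back atomically."""
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

release = AgentRelease(
    model="gpt-4o-2024-08-06",                       # illustrative pinned snapshot
    prompt_sha256=hashlib.sha256(b"system prompt v7").hexdigest(),
    tool_schema_sha256=hashlib.sha256(b"tool schemas v3").hexdigest(),
    context_version="ctx-v7",
)
print(release.release_id())
```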

The CAP Theorem for AI Agents: Why Your Agent Fails Completely When It Should Degrade Gracefully

· 9 min read
Tian Pan
Software Engineer

Your AI agent works perfectly until it doesn't. One tool goes down — maybe the search API is rate-limited, maybe the database is slow, maybe the code execution sandbox times out — and the entire agent collapses. Not a partial answer, not a degraded response. A complete failure. A blank screen or a hallucinated mess.

This is not a bug. It is a design choice, and almost nobody made it deliberately. The agent architectures we are building today implicitly choose "fail completely" because nobody designed the partial-availability path. If you have built distributed systems before, this pattern should feel painfully familiar. It is the CAP theorem, showing up in a new disguise.
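
One way to sketch the missing partial-availability path, assuming nothing about any specific framework: give each tool call a hard timeout and return a degraded answer instead of nothing.

```python
import concurrent.futures

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def call_tool(tool, timeout_s: float = 3.0, **kwargs):
    """Run one tool with a hard timeout; return None instead of taking the whole run down."""
    future = _pool.submit(tool, **kwargs)
    try:
        return future.result(timeout=timeout_s)
    except Exception:                     # rate limit, timeout, 5xx: degrade, don't die
        return None

def answer(question: str, search_tool, db_tool) -> dict:
    search = call_tool(search_tool, query=question)
    rows = call_tool(db_tool, query=question)

    if search is None and rows is None:   # everything is down: only now do we fail
        return {"status": "unavailable", "answer": None}
    return {
        "status": "degraded" if (search is None or rows is None) else "ok",
        "answer": {"search": search, "db": rows},    # partial context beats a blank screen
    }

# Usage: a search tool that blows up no longer blocks the answer from the database path.
print(answer("open tickets for acct 17",
             search_tool=lambda query: 1 / 0,
             db_tool=lambda query: ["t-101"]))
```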

Cascading Context Corruption: Why One Wrong Fact Derails Your Entire Agent Run

· 8 min read
Tian Pan
Software Engineer

Your agent completes a 25-step research task. The final report looks polished, citations check out, and the reasoning chain appears coherent. Except the agent hallucinated a company's founding year in step 3, and every subsequent inference — market timing analysis, competitive positioning, growth trajectory — built on that wrong date. The output is confidently, systematically wrong, and nothing in your pipeline caught it.

This is cascading context corruption: a single incorrect intermediate conclusion that propagates through subsequent reasoning steps and tool calls, compounding into system-wide failure. It is the most dangerous failure mode in long-running agents — because it looks like success.
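
One mitigation, sketched here purely as an illustration rather than taken from the post, is to track provenance for every intermediate conclusion so that model-asserted facts can be re-verified before downstream steps build on them:

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    claim: str
    source: str | None   # tool output or document it came from; None if model-asserted
    step: int

@dataclass
class RunContext:
    facts: list = field(default_factory=list)

    def add(self, claim: str, source, step: int) -> None:
        self.facts.append(Fact(claim, source, step))

    def unverified(self) -> list:
        """Unsourced facts are the ones most likely to seed a cascade."""
        return [f for f in self.facts if f.source is None]

ctx = RunContext()
ctx.add("Acme was founded in 2009", source=None, step=3)       # model-asserted, never checked
ctx.add("Q3 revenue was $12M", source="10-Q filing", step=7)   # grounded in a tool result
print([f.claim for f in ctx.unverified()])   # re-verify these before the analysis steps run
```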

MCP Is the New Microservices: The AI Tool Ecosystem Is Repeating Distributed Systems Mistakes

· 8 min read
Tian Pan
Software Engineer

If you lived through the microservices explosion of 2015–2018, the current state of MCP should feel uncomfortably familiar. A genuinely useful protocol appears. It's easy to spin up. Every team spins one up. Nobody tracks what's running, who owns it, or how it's secured. Within eighteen months, you're staring at a dependency graph that engineers privately call "the Death Star."

The Model Context Protocol is following the same trajectory, at roughly three times the speed. Unofficial registries already index over 16,000 MCP servers. GitHub hosts north of 20,000 public repositories implementing them. And Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027 — not because the technology doesn't work, but because organizations are automating broken processes. MCP sprawl is a symptom of exactly that problem.

Treating Your LLM Provider as an Unreliable Upstream: The Distributed Systems Playbook for AI

· 11 min read
Tian Pan
Software Engineer

Your monitoring dashboard is green. Response times look fine. Error rates are near zero. And yet your users are filing tickets about garbage answers, your agent is making confidently wrong decisions, and your support queue is filling up with complaints that don't correlate with any infrastructure alert you have.

Welcome to the unique hell of depending on an LLM API in production. It's an upstream service that can fail you while returning a perfectly healthy 200 OK.
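
A rough sketch of what "don't trust the 200" can mean in practice; the field names and checks here are illustrative, and a real pipeline would add task-specific validators:

```python
import json

def validate_llm_response(raw_text: str, required_keys: set, source_doc: str):
    """The status code says nothing; check the content before trusting it."""
    try:
        payload = json.loads(raw_text)            # structural check: is it even valid JSON?
    except json.JSONDecodeError:
        return False, "malformed JSON"

    missing = required_keys - payload.keys()       # schema check: expected fields present?
    if missing:
        return False, f"missing fields: {sorted(missing)}"

    quote = payload.get("supporting_quote", "")    # cheap grounding check: does the quote
    if quote and quote not in source_doc:          # the model cites actually appear in the source?
        return False, "citation not found in source document"

    return True, "ok"

source = "Refunds are processed within 5 business days."
print(validate_llm_response('{"answer": "2 days"}', {"answer", "supporting_quote"}, source))
# (False, "missing fields: ['supporting_quote']")
```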

Debug Your AI Agent Like a Distributed System, Not a Program

· 9 min read
Tian Pan
Software Engineer

Your agent worked perfectly in development. It answered test queries, called the right tools, and produced clean outputs. Then it hit production, and something went wrong on step seven of a twelve-step workflow. Your logs show the final output was garbage, but you have no idea why.

You add print statements. You scatter logger.debug() calls through your orchestration code. You stare at thousands of lines of output and realize you're debugging a distributed system with single-process tools. That's the fundamental mistake most teams make with AI agents — they treat them like programs when they behave like distributed systems.
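
As an illustration of the distributed-systems alternative (plain Python, no tracing library assumed): give every run a correlation ID and emit one structured record per step, so you can replay the run instead of rereading debug output.

```python
import json
import time
import uuid

def traced_run(steps, run_id=None):
    """Run agent steps while emitting one structured trace record per step."""
    run_id = run_id or uuid.uuid4().hex
    trace, state = [], {}
    for i, (name, fn) in enumerate(steps, start=1):
        started = time.time()
        record = {"run_id": run_id, "step": i, "name": name, "input": dict(state)}
        try:
            state = fn(state)
            record.update(status="ok", output=state)
        except Exception as exc:                       # failures are recorded, not swallowed
            record.update(status="error", error=repr(exc))
            trace.append(record)
            break
        record["duration_s"] = round(time.time() - started, 3)
        trace.append(record)
    return trace

# Usage: replay the trace to find the first step whose output went wrong,
# instead of staring at the garbage that came out of step twelve.
steps = [
    ("plan", lambda state: {**state, "plan": "look up founding year"}),
    ("act", lambda state: {**state, "result": "2009"}),
]
print(json.dumps(traced_run(steps), indent=2))
```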

The Agentic Deadlock: When AI Agents Wait for Each Other Forever

· 9 min read
Tian Pan
Software Engineer

Here is an uncomfortable fact about multi-agent AI systems: when you let two or more LLM-powered agents share resources and make decisions concurrently, they deadlock at rates between 25% and 95%. Not occasionally. Not under edge-case load. Under normal operating conditions with standard prompting, the moment agents must coordinate simultaneously, the system seizes up.

This is not a theoretical concern. Coordination breakdowns account for roughly 37% of multi-agent system failures in production, and systems without formal orchestration experience failure rates between 41% and 87%. The classic distributed systems failure modes — deadlock, livelock, priority inversion — are back, and they are wearing new clothes.
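
The classic remedies translate directly. Here is a hedged sketch of two of them, global lock ordering plus acquisition timeouts, applied to agents that share the same hypothetical `crm` and `billing` resources:

```python
import threading

locks = {"billing": threading.Lock(), "crm": threading.Lock()}

def with_resources(names, work, timeout_s: float = 5.0):
    """Acquire shared resources in one global order, with a timeout instead of waiting forever."""
    acquired = []
    try:
        for name in sorted(names):                     # every agent acquires in the same order
            if not locks[name].acquire(timeout=timeout_s):
                return {"status": "degraded", "reason": f"could not acquire {name}"}
            acquired.append(locks[name])
        return {"status": "ok", "result": work()}
    finally:
        for lock in reversed(acquired):
            lock.release()

# Two agents that both need crm + billing can no longer deadlock on acquisition order,
# and a stuck lock holder costs a bounded wait rather than an infinite one.
print(with_resources(["crm", "billing"], lambda: "account updated"))
```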

Backpressure in Agent Pipelines: When AI Generates Work Faster Than It Can Execute

· 9 min read
Tian Pan
Software Engineer

A multi-agent research tool built on a popular open-source stack slipped into a recursive loop and ran for 11 days before anyone noticed. The bill: $47,000. Two agents had been talking to each other non-stop, burning tokens while the team assumed the system was working normally. This is what happens when an agent pipeline has no backpressure.

The problem is structural. When an orchestrator agent decomposes a task into sub-tasks and spawns sub-agents to handle each one, and those sub-agents can themselves spawn further sub-agents or fan out across multiple tool calls, you get exponential work generation. The pipeline produces work faster than it can execute, finish, or even account for. This is the same problem that reactive systems, streaming architectures, and network protocols solved decades ago — and the same solutions apply.
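
Those solutions are mostly small amounts of code. Here is a minimal sketch, with illustrative limits rather than recommended ones, of two backpressure controls: bounded concurrency and a hard spawn budget with a recursion-depth guard.

```python
import threading

class WorkGovernor:
    """Backpressure for agent fan-out: bounded concurrency plus a hard spawn budget."""

    def __init__(self, max_concurrent: int = 8, max_total_tasks: int = 200, max_depth: int = 3):
        self._slots = threading.BoundedSemaphore(max_concurrent)
        self._budget = max_total_tasks
        self._max_depth = max_depth
        self._lock = threading.Lock()

    def try_spawn(self, depth: int) -> bool:
        """Call before creating a sub-agent; refuse work instead of queueing it without bound."""
        if depth > self._max_depth:
            return False                          # recursion guard: no endless agent-to-agent loops
        with self._lock:
            if self._budget <= 0:
                return False                      # global budget spent: stop generating work
            if not self._slots.acquire(blocking=False):
                return False                      # all slots busy: caller must wait or degrade
            self._budget -= 1
            return True

    def release(self) -> None:                    # call when a sub-agent finishes
        self._slots.release()

governor = WorkGovernor(max_concurrent=2, max_total_tasks=5)
print([governor.try_spawn(depth=1) for _ in range(4)])   # [True, True, False, False]
```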

Consensus Protocols for Multi-Agent Decisions: What Happens When Your Agents Disagree

· 9 min read
Tian Pan
Software Engineer

You have three agents analyzing a customer support ticket. Two say "refund immediately," one says "escalate to fraud review." You pick the majority answer and ship the refund. Three days later, the fraud team asks why you auto-refunded a known chargeback pattern.

This is the consensus problem in multi-agent systems, and it turns out that distributed systems engineers solved important pieces of it decades ago. But naively transplanting those solutions — or worse, defaulting to majority vote — creates failure modes that are uniquely dangerous when your "nodes" are language models with opinions.
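
The refund example above is exactly such a failure mode. A small sketch of an alternative decision rule (illustrative thresholds, not a recommendation): risk-flagging votes act as a veto, and weak majorities escalate instead of auto-acting.

```python
from collections import Counter

ESCALATION_VERDICTS = {"escalate_to_fraud_review", "needs_human"}

def decide(votes):
    """Majority vote, except any risk-flagging vote vetoes the automatic action."""
    if any(v in ESCALATION_VERDICTS for v in votes):
        return "escalate"                         # one cautious agent outweighs two eager ones
    verdict, count = Counter(votes).most_common(1)[0]
    if count / len(votes) < 0.67:                 # weak majority: don't auto-act on a coin flip
        return "escalate"
    return verdict

print(decide(["refund", "refund", "escalate_to_fraud_review"]))   # escalate
print(decide(["refund", "refund", "refund"]))                     # refund
```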

Race Conditions in Concurrent Agent Systems: The Bugs That Look Like Hallucinations

· 13 min read
Tian Pan
Software Engineer

Three agents processed a customer account update concurrently. All three logged success. The final database state was wrong in three different ways simultaneously, and no error was ever thrown. The team spent two weeks blaming the model.

It wasn't the model. It was a race condition.

This is the failure mode that gets misdiagnosed more than any other in production multi-agent systems: data corruption caused by concurrent state access, mistaken for hallucination because the downstream agents confidently reason over corrupted inputs. The model isn't making things up. It's faithfully processing garbage.
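
A standard fix from the database world, sketched here in-memory and not drawn from the post itself, is optimistic concurrency: every write must name the version it read, and stale writes are rejected instead of silently overwriting each other.

```python
import threading

class VersionedRecord:
    """Compare-and-swap on a version number: stale writes are rejected, not silently merged."""

    def __init__(self, data: dict):
        self._data, self._version = dict(data), 0
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return dict(self._data), self._version

    def write(self, updates: dict, expected_version: int) -> bool:
        with self._lock:
            if expected_version != self._version:     # another agent wrote since this one read
                return False                          # caller must re-read and retry, not clobber
            self._data.update(updates)
            self._version += 1
            return True

account = VersionedRecord({"email": "old@example.com", "plan": "basic"})
snapshot, version = account.read()
print(account.write({"plan": "pro"}, expected_version=version))        # True: first writer wins
print(account.write({"email": "a@b.co"}, expected_version=version))    # False: stale, must retry
```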