
39 posts tagged with "distributed-systems"


Retry Amplification: How a 2% Tool Error Rate Becomes a 20% Agent Failure

· 13 min read
Tian Pan
Software Engineer

The spreadsheet in the on-call doc said the search tool had a 2% error rate. The incident review said the agent platform had a 20% failure rate during the three-hour window. Nobody disagreed with either number. The search team was not at fault. The platform team did not ship a bug. The gap between the two numbers is the whole story, and it is a story about arithmetic, not engineering incompetence.

Retry logic is one of the most borrowed and least adapted patterns in agent systems. Teams copy tenacity decorators from their REST client, stack them at the SDK, the gateway, and the agent loop, and ship. Each layer is individually reasonable. The composition is a siege weapon pointed at the flakiest dependency in the fleet, and it fires hardest at the exact moment that dependency needs the load to drop.

This post is about how that math works, why agent loops amplify it harder than request-response systems, and the retry discipline that keeps transient blips from becoming correlated outages with your own logo on them.
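
To make the arithmetic concrete, here is a minimal sketch. The numbers are illustrative, not taken from the incident: an agent run with roughly eleven sequential tool calls and three retry layers of three attempts each.

```python
# Illustrative arithmetic only; the specific N and retry depths are assumptions.

# An agent loop that makes N sequential tool calls fails if any one of them fails:
# P(agent fails) = 1 - (1 - p)^N.
p = 0.02            # per-call tool error rate
N = 11              # sequential tool calls in one agent run
agent_failure = 1 - (1 - p) ** N
print(f"{agent_failure:.1%}")   # ~19.9%: a 2% tool error rate becomes ~20% agent failure

# Stacked retries amplify load instead of containing it. If the SDK, the gateway,
# and the agent loop each make 3 attempts, one logical call can fan out into
# 3 * 3 * 3 = 27 requests while the dependency is already struggling.
attempts_per_layer = [3, 3, 3]
amplification = 1
for attempts in attempts_per_layer:
    amplification *= attempts
print(amplification)            # 27 requests per logical call at full retry depth
```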

Agent Fleet Concurrency: Coordinating Dozens of Agents Without Deadlock or the Thundering Herd

· 12 min read
Tian Pan
Software Engineer

Eleven agents started at the same second. Three died before the first tool call returned. That 27% fatality rate was not a model problem, a prompt problem, or a tool problem. It was a scheduling problem — the same kind of problem an operating system solves when fifty processes wake up at once and fight over a single CPU. The difference is that the OS has forty years of accumulated wisdom and the agent runtime has about two.

Anyone who has wired up more than a handful of concurrent LLM workers has seen some version of this. You kick off a scheduled job at 02:00, thirty agents spin up, they all hit the same provider within 200 ms of each other, and most of them fail with a mix of 429s, 502s, and connection resets. The survivors get half the rate budget they were promised because the provider's fair-share logic has already started throttling your API key. By 02:05 the surviving agents finish and your dashboard shows a completion rate that would embarrass a first-year CS student writing their first producer-consumer. Your on-call rotation debates whether to add retries, add a queue, or just run fewer of them.

None of those are the right answer by themselves. The right answer is that a fleet of agents is a small distributed system and needs to be designed like one.
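
A minimal sketch of two of the cheapest fixes that distributed-systems folklore suggests: jitter the fleet's start time and cap in-flight provider calls with a semaphore. The concurrency numbers and the sleep standing in for the agent loop are placeholders.

```python
import asyncio
import random

MAX_IN_FLIGHT = 5      # cap on concurrent provider calls per API key (illustrative)
START_JITTER_S = 30    # spread the fleet's start over 30 seconds instead of 200 ms

async def run_agent(agent_id: int, provider_slots: asyncio.Semaphore) -> None:
    # Jittered start: the fleet no longer hits the provider in the same instant.
    await asyncio.sleep(random.uniform(0, START_JITTER_S))
    async with provider_slots:
        await asyncio.sleep(1.0)   # placeholder for the real agent loop / LLM call

async def main() -> None:
    provider_slots = asyncio.Semaphore(MAX_IN_FLIGHT)
    await asyncio.gather(*(run_agent(i, provider_slots) for i in range(30)))

asyncio.run(main())
```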

The Data Rollback Problem: Undoing What Your AI Agent Wrote to Production

· 10 min read
Tian Pan
Software Engineer

During a live executive demo, an AI coding agent deleted an entire production database. The fix wasn't a clever rollback script — it was a four-hour database restore from backups. The company had given an AI agent unrestricted SQL execution in production, and when the agent "panicked" (its word, not a metaphor), it executed DROP TABLE with no confirmation gate. Over 1,200 executives and 1,190 companies lost data. That incident wasn't an edge case. It was a preview.

As AI agents take on more write-heavy operations — updating records, processing transactions, modifying customer state — the question of how to undo their mistakes becomes load-bearing infrastructure. The problem is that "rollback" as engineers understand it from relational databases does not translate cleanly to agentic systems. The standard tools break in three specific ways that are worth understanding before your first agent incident.
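
For a sense of how small the missing confirmation gate is, here is a deliberately crude sketch. The statement classification is illustrative, and the approval flow is whatever your runtime already has for human-in-the-loop review.

```python
import re

# Crude, illustrative classifier for statements an agent should never run unreviewed.
DESTRUCTIVE = re.compile(r"^\s*(DROP|TRUNCATE|DELETE|ALTER)\b", re.IGNORECASE)

def execute_agent_sql(statement: str, cursor, approved: bool = False) -> None:
    """Run agent-issued SQL, but refuse destructive statements without approval."""
    if DESTRUCTIVE.match(statement) and not approved:
        # Surface the statement for human review instead of executing it.
        raise PermissionError(f"destructive statement requires approval: {statement!r}")
    cursor.execute(statement)
```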

The CAP Theorem for AI Agents: Choosing Consistency or Availability When Your LLM Is the Bottleneck

· 10 min read
Tian Pan
Software Engineer

Every engineer who has shipped a distributed system has stared at the CAP theorem and made a choice: when the network partitions, do you keep serving stale data (availability) or do you refuse to serve until you have a consistent answer (consistency)? The theorem tells you that you cannot have both.

AI agents face an identical tradeoff, and almost nobody is making it explicitly. When your LLM call times out, when a tool returns garbage, when a downstream API is unavailable — what does your agent do? In most production systems, the answer is: it guesses. Quietly. Confidently. And often wrong.

The failure mode isn't dramatic. There's no exception in the logs. The agent "answered" the user. You only find out two weeks later when someone asks why the system booked the wrong flight, extracted the wrong entity, or confidently told a customer a price that no longer exists.
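
Making the choice explicit can be as small as a policy flag that decides what happens on failure: refuse, or answer from stale data and say so. A minimal sketch; `FailurePolicy` and the degraded-answer shape are assumptions for illustration, not a prescribed API.

```python
from dataclasses import dataclass
from enum import Enum

class FailurePolicy(Enum):
    CONSISTENCY = "refuse"     # refuse to answer without fresh, verified data
    AVAILABILITY = "degrade"   # answer from stale or cached data, but label it

@dataclass
class AgentAnswer:
    text: str
    degraded: bool             # surfaced to the caller, never hidden

def answer_with_policy(fetch_live, fetch_cached, policy: FailurePolicy) -> AgentAnswer:
    try:
        return AgentAnswer(text=fetch_live(), degraded=False)
    except TimeoutError:
        if policy is FailurePolicy.CONSISTENCY:
            raise              # fail loudly; do not let the model guess
        return AgentAnswer(text=fetch_cached(), degraded=True)
```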

Idempotency Is Not Optional in LLM Pipelines

· 10 min read
Tian Pan
Software Engineer

A batch inference job finishes after six minutes. The network hiccups on the response. Your retry logic kicks in. Two minutes later the job finishes again — and your invoice doubles. This is the tamest version of what happens when you apply traditional idempotency thinking to LLM pipelines without adapting it to stochastic systems.

Most production teams discover the problem the hard way: a retry that was supposed to recover from a transient error triggers a second payment, sends a duplicate email, or writes a contradictory record to the database. The fix is not better retry logic — it is a different mental model of what idempotency even means when your core component is probabilistic.
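
One piece of that adapted mental model, sketched minimally: derive an idempotency key from the request content, not the attempt, and check a durable store before resubmitting. The dict and `submit_batch` are stand-ins for your real store and inference call.

```python
import hashlib
import json

completed: dict[str, str] = {}   # stand-in for a durable idempotency store

def idempotency_key(batch: list[dict]) -> str:
    # Key on the request content, so a retry maps back to the same logical job.
    payload = json.dumps(batch, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def run_batch_once(batch: list[dict], submit_batch) -> str:
    key = idempotency_key(batch)
    if key in completed:
        return completed[key]          # the retry returns the prior result; no second invoice
    job_id = submit_batch(batch)       # placeholder for the real batch inference call
    completed[key] = job_id
    return job_id
```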

The Silent Corruption Problem in Parallel Agent Systems

· 12 min read
Tian Pan
Software Engineer

When a multi-agent system starts behaving strangely — giving inconsistent answers, losing track of tasks, making decisions that contradict earlier reasoning — the instinct is to blame the model. Tweak the prompt. Switch to a stronger model. Add more context.

The actual cause is often more mundane and more dangerous: shared state corruption from concurrent writes. Two agents read the same memory, both compute updates, and one silently overwrites the other. The resulting state is technically valid — no exceptions thrown, no schema violations — but semantically wrong. Every agent that reads it afterward reasons correctly over incorrect information.

This failure mode is invisible at the individual operation level, hard to reproduce in test environments, and nearly impossible to distinguish from model error by looking at outputs alone. O'Reilly's 2025 research on multi-agent memory engineering found that 36.9% of multi-agent system failures stem from interagent misalignment — agents operating on inconsistent views of shared information. It's not a theoretical concern.
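
The classic cure for the lost update is optimistic concurrency: version the shared memory and reject stale writes instead of silently overwriting. A minimal in-memory sketch; the store stands in for whatever actually backs your agents' shared state.

```python
class VersionConflict(Exception):
    pass

class SharedMemory:
    """In-memory stand-in for shared agent memory with optimistic locking."""
    def __init__(self) -> None:
        self._value: dict = {}
        self._version = 0

    def read(self) -> tuple[dict, int]:
        return dict(self._value), self._version

    def compare_and_set(self, new_value: dict, expected_version: int) -> None:
        if expected_version != self._version:
            raise VersionConflict("another agent wrote first; re-read and re-apply")
        self._value = new_value
        self._version += 1

# Usage: read, compute, write; on conflict, re-read instead of silently overwriting.
memory = SharedMemory()
state, version = memory.read()
state["task_owner"] = "agent-7"
memory.compare_and_set(state, expected_version=version)
```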

The Share-Nothing Agent: Designing AI Agents for Horizontal Scalability

· 11 min read
Tian Pan
Software Engineer

Your load balancer assigns an incoming agent request to replica 3. But the user's conversation history lives in memory on replica 7. Replica 3 has no idea what has happened in the last six turns, so it starts over, confuses the user, and your on-call engineer gets paged at 2 AM. You add sticky sessions. Now all requests for that user route to replica 7 forever. You've traded a correctness bug for a scalability ceiling.

This is the moment teams realize that "horizontal scaling" for AI agents is not the same problem as horizontal scaling for web servers. The fixes are different, and the naive paths fail in predictable ways.
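
The share-nothing alternative in miniature: every replica loads the turn's state from an external store and writes it back, so routing stops mattering. `StateStore` is a stand-in for Redis, Postgres, or whatever you already run; `llm` is a placeholder for the model call.

```python
from typing import Protocol

class StateStore(Protocol):
    def load(self, conversation_id: str) -> list[dict]: ...
    def save(self, conversation_id: str, history: list[dict]) -> None: ...

def handle_turn(conversation_id: str, user_message: str, store: StateStore, llm) -> str:
    # Any replica can serve this request: state lives in the store, not in process memory.
    history = store.load(conversation_id)
    history.append({"role": "user", "content": user_message})
    reply = llm(history)                        # placeholder for the real model call
    history.append({"role": "assistant", "content": reply})
    store.save(conversation_id, history)
    return reply
```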

The Cascade Problem: Why Agent Side Effects Explode at Scale

· 12 min read
Tian Pan
Software Engineer

A team ships a document-processing agent. It works flawlessly in development: reads files, extracts data, writes results to a database, sends a confirmation webhook. They run 50 test cases. All pass.

Two weeks after deployment, with a hundred concurrent agent instances running, the database has 40,000 duplicate records, three downstream services have received thousands of spurious webhooks, and a shared configuration file has been half-overwritten by two agents that ran simultaneously.

The agent didn't break. The system broke because no individual agent test ever had to share the world with another agent.
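
The half-overwritten configuration file in particular has a boring, well-worn fix: write to a temporary file and rename it into place, so concurrent agents can race but never interleave. A minimal sketch.

```python
import json
import os
import tempfile

def write_config_atomically(path: str, config: dict) -> None:
    """Last writer wins, but no reader ever sees a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(config, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, path)   # atomic on POSIX: readers see old or new, never a mix
    except BaseException:
        os.unlink(tmp_path)
        raise
```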

Distributed Tracing Across Agent Service Boundaries: The Context Propagation Gap

· 11 min read
Tian Pan
Software Engineer

Most distributed tracing setups work fine until you add agents. The moment your system has Agent A spawning Agent B across a microservice boundary—Agent B calling a tool server, that tool server fetching from a vector database—the coherent end-to-end view shatters into disconnected fragments. Your tracing backend shows individual operations, but you've lost the causal chain that tells you why something happened, which user request triggered it, and where in the pipeline 800 milliseconds went.

This isn't a monitoring configuration problem. It's a context propagation architecture problem, and it has a specific technical shape that most teams discover the hard way.
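
In OpenTelemetry terms, the fix is to carry the trace context across the spawn boundary yourself. A minimal sketch, assuming the spawn payload is a dict you control and tracing is already configured; the payload shape is an assumption, not a framework convention.

```python
from opentelemetry import trace
from opentelemetry.propagate import inject, extract

tracer = trace.get_tracer("agent-runtime")

def spawn_sub_agent(task: dict) -> dict:
    # Parent side: serialize the current trace context into the task payload.
    carrier: dict[str, str] = {}
    inject(carrier)                      # writes W3C traceparent/tracestate entries
    return {**task, "trace_context": carrier}

def run_sub_agent(task: dict) -> None:
    # Child side, possibly another service: restore the parent context before working.
    ctx = extract(task.get("trace_context", {}))
    with tracer.start_as_current_span("sub_agent.run", context=ctx):
        ...                              # tool calls, vector DB fetches, LLM calls
```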

The Idempotency Problem in Agentic Tool Calling

· 11 min read
Tian Pan
Software Engineer

The scenario plays out the same way every time. Your agent is booking a hotel room, and a network timeout occurs right after the payment API call returns 200 but before the confirmation is stored. The agent framework retries. The payment runs again. The customer is charged twice, support escalates, and someone senior says the AI "hallucinated a double charge" — which is wrong but feels right because nobody wants to say their retry logic was broken from the start.

This isn't an AI problem. It's a distributed systems problem that the AI layer imported wholesale, without the decades of hard-won patterns that distributed systems engineers developed to handle it. Standard agent retry logic assumes operations are idempotent. Most tool calls are not.
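
A minimal sketch of the missing discipline: mint the idempotency key before the first attempt and reuse it on every retry, so the provider can deduplicate an ambiguous timeout. `charge` stands in for a payment API that accepts such a key, as many real ones do.

```python
import uuid

def charge_with_retries(amount_cents: int, charge, max_attempts: int = 3) -> dict:
    # One key per logical payment, minted before the first attempt and reused on retries.
    key = str(uuid.uuid4())
    last_error = None
    for _ in range(max_attempts):
        try:
            return charge(amount_cents=amount_cents, idempotency_key=key)
        except TimeoutError as exc:
            last_error = exc   # ambiguous: the charge may or may not have landed
    raise RuntimeError("payment outcome unknown; reconcile before retrying") from last_error
```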

The Idempotency Crisis: LLM Agents as Event Stream Consumers

· 11 min read
Tian Pan
Software Engineer

Every event streaming system eventually delivers the same message twice. Network hiccups, broker restarts, offset commit failures — at-least-once delivery is not a bug; it's the contract. Traditional consumers handle this gracefully because they're deterministic: process the same event twice, get the same result, write the same record. The second write is a no-op.

LLMs are not deterministic processors. The same prompt with the same input produces different outputs on each run. Even with temperature=0, floating-point arithmetic, batch composition effects, and hardware scheduling variations introduce variance. Research measuring "deterministic" LLM settings found accuracy differences up to 15% across naturally occurring runs, with best-to-worst performance gaps reaching 70%. At-least-once delivery plus a non-deterministic processor does not give you at-most-once behavior. It gives you unpredictable behavior — and that's a crisis waiting to happen in production.
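
The usual escape hatch is to restore determinism at the boundary: key the model's output by the event id, so a redelivery replays the stored result instead of re-running the model. A minimal sketch, with the dict standing in for a durable result store.

```python
results: dict[str, str] = {}   # stand-in for a durable, keyed result store

def consume(event_id: str, payload: str, call_llm) -> str:
    # First delivery: run the model and record its output under the event id.
    # Redelivery: return the recorded output; the model never processes the event twice.
    if event_id in results:
        return results[event_id]
    output = call_llm(payload)         # non-deterministic, so run it at most once per event
    results[event_id] = output
    return output
```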

Deadline Propagation in Agent Chains: What Happens to Your p95 SLO at Hop Three

· 10 min read
Tian Pan
Software Engineer

Most engineers building multi-step agent pipelines discover the same problem about two weeks after their first production incident: they set a 5-second timeout on their API gateway, their agent pipeline has four hops, and the system behaves as though there is no timeout at all. The agent at hop three doesn't know the upstream caller gave up three seconds ago. It keeps running, keeps calling tools, keeps generating tokens—and the user is already gone.

This isn't a configuration mistake. It's a structural problem. Latency constraints don't propagate across agent boundaries by default, and none of the major orchestration frameworks make deadline propagation easy. The result is a class of failures that looks like latency problems but is actually a context propagation problem.
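
A minimal sketch of explicit deadline propagation: carry one absolute deadline through every hop, check the remaining budget before doing work, and hand the remainder to the downstream call. The hop and tool-call functions are placeholders.

```python
import time

class DeadlineExceeded(Exception):
    pass

def remaining(deadline: float) -> float:
    return deadline - time.monotonic()

def run_hop(task: dict, deadline: float, call_tool) -> dict:
    # Refuse to start work the upstream caller will never see.
    budget = remaining(deadline)
    if budget <= 0:
        raise DeadlineExceeded("upstream caller has already given up")
    # Pass the remaining budget downstream instead of a fixed per-hop timeout.
    return call_tool(task, timeout=budget)

# Usage at the edge: one absolute deadline for the whole chain, e.g. the gateway's 5 seconds.
deadline = time.monotonic() + 5.0
```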