41 posts tagged with "distributed-systems"

The Idempotency Crisis: LLM Agents as Event Stream Consumers

· 11 min read
Tian Pan
Software Engineer

Every event streaming system eventually delivers the same message twice. Network hiccups, broker restarts, offset commit failures — at-least-once delivery is not a bug; it's the contract. Traditional consumers handle this gracefully because they're deterministic: process the same event twice, get the same result, write the same record. The second write is a no-op.

LLMs are not deterministic processors. The same prompt with the same input produces different outputs on each run. Even with temperature=0, floating-point arithmetic, batch composition effects, and hardware scheduling variations introduce variance. Research measuring "deterministic" LLM settings found accuracy differences up to 15% across naturally occurring runs, with best-to-worst performance gaps reaching 70%. At-least-once delivery plus a non-deterministic processor does not give you effectively-once processing. It gives you unpredictable behavior — and that's a crisis waiting to happen in production.
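
A minimal sketch of the contrast, with a hypothetical event id and handler (not any specific broker's API): the deterministic consumer dedupes before writing, so redelivery is harmless. Swap the transform for an LLM call and the same redelivery produces a second, different result unless the check happens first.

```python
# Minimal sketch of why a deterministic consumer tolerates redelivery.
# The event id, store, and handler below are hypothetical stand-ins.

store: dict[str, str] = {}       # downstream table, keyed by event id
processed: set[str] = set()      # idempotency ledger

def handle(event_id: str, payload: str) -> None:
    if event_id in processed:    # duplicate delivery: skip entirely
        return
    store[event_id] = payload.upper()   # deterministic: same input, same row
    processed.add(event_id)

# Redelivering the same event is a no-op; the stored row never changes.
handle("evt-42", "ship order")
handle("evt-42", "ship order")
assert store["evt-42"] == "SHIP ORDER"

# If the transform were an LLM call, the second delivery could write a
# different row unless the idempotency check runs *before* the call.
```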

Deadline Propagation in Agent Chains: What Happens to Your p95 SLO at Hop Three

· 10 min read
Tian Pan
Software Engineer

Most engineers building multi-step agent pipelines discover the same problem about two weeks after their first production incident: they set a 5-second timeout on their API gateway, their agent pipeline has four hops, and the system behaves as though there is no timeout at all. The agent at hop three doesn't know the upstream caller gave up three seconds ago. It keeps running, keeps calling tools, keeps generating tokens—and the user is already gone.

This isn't a configuration mistake. It's a structural problem. Latency constraints don't propagate across agent boundaries by default, and none of the major orchestration frameworks make deadline propagation easy. The result is a class of failures that looks like latency problems but is actually a context propagation problem.
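
One way to see the fix is to pass an absolute deadline along with the work and have each hop check its remaining budget before starting. The hop names and costs below are illustrative, not a framework API; it is a sketch of the idea, not a drop-in solution.

```python
# Hedged sketch: propagate an absolute deadline through agent hops so a hop
# that cannot finish in time refuses to start instead of running unattended.
import time

def call_hop(name: str, est_cost: float, deadline: float) -> str:
    remaining = deadline - time.monotonic()
    if remaining < est_cost:
        raise TimeoutError(f"{name}: {remaining:.1f}s left, needs {est_cost}s; aborting")
    time.sleep(est_cost)          # stand-in for the actual LLM/tool work
    return f"{name} done"

def pipeline(deadline: float) -> None:
    for name, cost in [("retrieve", 1.0), ("draft", 1.0), ("review", 1.0)]:
        print(call_hop(name, cost, deadline))

try:
    pipeline(deadline=time.monotonic() + 2.5)   # the gateway's budget, passed down
except TimeoutError as e:
    print(e)   # hop three never starts after the caller has already given up
```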

LLM Rate Limits Are a Distributed Systems Problem

· 11 min read
Tian Pan
Software Engineer

Your AI product has two surfaces: a user-facing chat feature and a background report generation job. Both call the same LLM API under the same key. One afternoon, a support ticket arrives: "Chat responses are getting cut off halfway." No alerts fired. No 429s in the logs. The API was returning HTTP 200 the entire time.

What happened: the report generation job gradually consumed most of your shared token quota. Chat requests started completing, but only up to your max_tokens limit — semantically truncated, syntactically valid, silently wrong. Your standard monitoring never noticed because there was nothing to notice at the HTTP layer.

This is not an edge case. It is what happens when engineers treat LLM rate limits as a simple throttle problem instead of recognizing the class of distributed systems failure they actually are.
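
A small illustration of the missing check, assuming an OpenAI-style response shape where finish_reason == "length" marks output cut off at max_tokens; the response dict here is fabricated for the example, and other providers report this differently.

```python
# Sketch: surfacing the silent truncation that HTTP 200 hides.

def check_completion(response: dict) -> str:
    choice = response["choices"][0]
    if choice["finish_reason"] == "length":
        # The call "succeeded", but the answer is cut off. Treat it as a
        # soft failure and emit a metric instead of returning it silently.
        raise RuntimeError("truncated completion: hit max_tokens under quota pressure")
    return choice["message"]["content"]

resp = {"choices": [{"finish_reason": "length",
                     "message": {"content": "The refund policy states that"}}]}
try:
    check_completion(resp)
except RuntimeError as e:
    print(e)   # the alert that never fired in the incident above
```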

Multi-Region LLM Serving: The Cache Locality Problem Nobody Warns You About

· 10 min read
Tian Pan
Software Engineer

When you run a stateless HTTP API across multiple regions, the routing problem is essentially solved. Put a global load balancer in front, distribute requests by geography, and the worst thing that happens is a slightly stale cache entry. Any replica can serve any request with identical results.

LLM inference breaks every one of these assumptions. The moment you add prompt caching — which you will, because the cost difference between a cache hit and a cache miss is roughly 10x — your service becomes stateful in ways that most infrastructure teams don't anticipate until they're staring at degraded latency numbers in their second region.
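
The usual mitigation is affinity routing: key the routing decision on the shared, cacheable prefix so requests that reuse it land on the replica that already cached it. The replica names and keying choice below are illustrative assumptions, not a production recipe.

```python
# Hedged sketch: prefix-affinity routing to keep prompt-cache hits likely.
import hashlib

REPLICAS = ["us-east-1a", "us-east-1b", "eu-west-1a"]

def route(cache_key: str) -> str:
    # Key on the shared prefix (system prompt, tools, few-shot examples),
    # not the whole prompt, so all requests sharing it co-locate.
    digest = int(hashlib.sha256(cache_key.encode()).hexdigest(), 16)
    return REPLICAS[digest % len(REPLICAS)]

system_prompt = "You are a support agent for Acme. Policies: ..."
print(route(system_prompt))   # every request with this prefix hits one replica
```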

Multi-User Shared AI Sessions: The Concurrency Problem Nobody Has Solved

· 12 min read
Tian Pan
Software Engineer

Most AI products are built for a single user with a single intent, a single conversation thread, and a single identity. This works well enough when the product is a personal productivity tool—a writing assistant, a code completion engine, a summarizer. But something happens when teams start using AI collaboratively: the product silently breaks in ways that are hard to diagnose and harder to fix. Two users prompt the AI simultaneously, and one of their inputs disappears. A context window shared across five engineers fills up with duplicated history. The AI responds to user A's question using user B's permissions. Nobody designed for any of this, because shipping multi-user shared context means confronting one of the hardest distributed systems problems in modern AI infrastructure.

This post is about what actually makes simultaneous multi-user AI sessions hard, what production teams have tried, and what the emerging architectural patterns are. If you are building a collaborative AI feature and wondering why it feels impossibly complex, this is why.

When Your AI Agent Consumes from Kafka: The Design Assumptions That Break

· 11 min read
Tian Pan
Software Engineer

The standard mental model for AI agents assumes HTTP: a client sends a request, the agent processes it, returns a response. Clean, synchronous, easy to reason about. When an LLM-powered function fails, you get an error code. When it succeeds, you move on.

Once you swap that HTTP interface for a Kafka topic or SQS queue, every one of those assumptions starts to crack. The queue guarantees at-least-once delivery. Your agent is stochastic. That combination produces failure modes that don't exist in deterministic systems—and the fixes aren't the same ones that work for traditional microservices.

This post covers what actually changes when AI agents consume from message queues: idempotency, ordering, backpressure, dead-letter handling, and the specific failure mode where a replayed message triggers different agent behavior the second time around.
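
A sketch of the core idempotency move, with an in-memory result store and a toy agent standing in for real infrastructure: record the first outcome keyed by message id, and let a redelivery replay that recorded outcome rather than re-running the stochastic agent.

```python
# Sketch: making a stochastic consumer safe under at-least-once delivery.
import random

results: dict[str, str] = {}   # in production: a durable store, not a dict

def agent(text: str) -> str:
    return f"summary#{random.randint(0, 9999)} of {text!r}"   # non-deterministic

def consume(msg_id: str, text: str) -> str:
    if msg_id in results:        # redelivery: return the recorded outcome
        return results[msg_id]
    out = agent(text)            # first delivery: run the agent once
    results[msg_id] = out        # record before acking the message
    return out

first = consume("orders-17", "refund request")
second = consume("orders-17", "refund request")   # broker redelivered it
assert first == second           # same outcome, despite a stochastic agent
```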

Multi-User Shared Agent State: The Concurrency Primitives You Actually Need

· 11 min read
Tian Pan
Software Engineer

Every agent tutorial starts with a single user, a single session, and a single context window. The agent reads state, reasons, acts, writes back. Clean. Deterministic. Completely wrong for anything teams actually use.

Real collaborative products—shared planning boards, multi-user support queues, document co-pilots, team project assistants—require multiple users to interact with the same agent simultaneously. When two people give the agent contradictory instructions within the same second, one of their changes disappears. The agent doesn't tell them. It doesn't even know it happened.

This is the multi-user shared agent state problem, and it's a distributed systems problem dressed in an AI costume.
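
A minimal sketch of one primitive that helps, using an in-memory dict and illustrative field names: compare-and-set on a version number, so the second writer's stale instruction fails loudly instead of vanishing.

```python
# Sketch: optimistic concurrency on shared agent state.

class StaleWrite(Exception):
    pass

state = {"version": 1, "plan": "ship on Friday"}

def update_plan(new_plan: str, expected_version: int) -> None:
    if state["version"] != expected_version:
        raise StaleWrite("state changed since you read it; re-read and retry")
    state["plan"] = new_plan
    state["version"] += 1

v = state["version"]
update_plan("ship on Monday", expected_version=v)        # user A wins
try:
    update_plan("ship on Thursday", expected_version=v)  # user B, same read
except StaleWrite as e:
    print(e)   # B's instruction surfaces for a merge instead of being dropped
```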

Proactive Agents: Event-Driven and Scheduled Automation for Background AI

· 11 min read
Tian Pan
Software Engineer

Almost every tutorial on building AI agents starts the same way: user types a message, agent reasons, agent responds. That model works fine for chatbots and copilots. It fails to describe the majority of production AI work that organizations are now deploying.

The agents that quietly matter most in enterprise environments don't wait for a message. They wake up when a database row changes, when a queue crosses a depth threshold, when a scheduled cron fires at 3 AM, or when monitoring detects that a metric drifted outside bounds. They act without a user present. When they fail, nobody notices until the damage has compounded.

Building these proactive agents requires a substantially different design vocabulary than building reactive assistants. The session-scoped mental model that works for conversational AI breaks down when your agent runs in a loop, retries in the background, and has no human to catch its mistakes.
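
As a rough illustration of the shape, here is a trigger that wakes an agent on a condition rather than a message; the queue metric, threshold, and agent call are all made up for the sketch.

```python
# Hedged sketch: a proactive trigger with no user in the loop.

def queue_depth() -> int:
    return 137          # stand-in for a metrics or broker query

def triage_agent(reason: str) -> None:
    print(f"agent woke up: {reason}")   # stand-in for the actual agent run

def tick(threshold: int = 100) -> None:
    depth = queue_depth()
    if depth > threshold:
        # No session and no human to catch mistakes: log the trigger,
        # bound the blast radius, and keep the action idempotent.
        triage_agent(f"queue depth {depth} exceeded {threshold}")

tick()   # in production this runs from a scheduler or an event consumer
```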

Write Amplification in Agentic Systems: Why One Tool Call Hits Six Databases

· 10 min read
Tian Pan
Software Engineer

When an agent decides to remember something — "the user prefers email over Slack" — it looks like a single write. In practice, it is six writes: a new embedding in the vector store, a row in the relational database, an entry in the session cache, a record in the event log, an entry in the audit trail, and an update to the context store. Each one happens because a different part of the system has a legitimate need for the data, and each one introduces a new failure surface.

This is write amplification at the infrastructure layer, and it's one of the quieter operational crises in production agent deployments. It does not cause dramatic failures. It causes partial failures: the user's preference is searchable semantically but the relational query returns stale data; the audit log shows an action that never fully completed; the cache is warm but the context store wasn't updated, so the next session starts without the learned pattern.

Understanding why this happens — and what to do about it — requires borrowing from database internals rather than the agent framework documentation.
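
One such borrowing, sketched with in-memory structures and only three of the six stores for brevity: record the intent once in an append-only log (an outbox, the application-level cousin of a WAL), and let each downstream store apply it from its own cursor, so a crash yields lag rather than divergence.

```python
# Sketch: one durable write on the hot path, per-store catch-up afterwards.

outbox: list[dict] = []                                  # the single write
applied = {"vector": 0, "relational": 0, "cache": 0}     # per-store cursor
stores = {"vector": [], "relational": [], "cache": []}

def remember(fact: str) -> None:
    outbox.append({"fact": fact})        # one append, then ack the agent

def sync(store: str) -> None:
    # Replays are safe: each store resumes from its own cursor.
    for entry in outbox[applied[store]:]:
        stores[store].append(entry["fact"])
        applied[store] += 1

remember("user prefers email over Slack")
sync("vector"); sync("relational")       # suppose the cache sync crashed here
sync("cache")                            # a later retry catches it up
assert stores["vector"] == stores["cache"]
```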

Silent Async Agent Failures: Why Your AI Jobs Die Without Anyone Noticing

· 8 min read
Tian Pan
Software Engineer

Async AI jobs have a problem that traditional background workers don't: they fail silently and confidently. A document processing agent returns HTTP 200, logs a well-formatted result, and moves on — while the actual output is subtly wrong, partially complete, or based on a hallucinated fact three steps back. Your dashboards stay green. Your on-call engineer sleeps through it. Your customers eventually notice.

This is not an edge case. It's the default behavior of async AI systems that haven't been deliberately designed for observability. The tools that keep background job queues reliable in conventional distributed systems — dead letter queues, idempotency keys, saga logs — also work for AI agents. But the failure modes are different enough that they require some translation.
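
A sketch of what that translation can look like: a semantic check before the ack, so a confident but wrong result goes to a dead-letter queue instead of counting as success. The validator, document, and in-memory queues are illustrative only.

```python
# Sketch: dead-lettering on failed output validation, not just on exceptions.

dead_letters: list[dict] = []

def validate(doc: str, extraction: dict) -> bool:
    # Cheap structural and grounding checks; a judge model could slot in here.
    return bool(extraction.get("invoice_id")) and extraction["invoice_id"] in doc

def handle(job_id: str, doc: str, extraction: dict) -> str:
    if not validate(doc, extraction):
        dead_letters.append({"job": job_id, "doc": doc, "got": extraction})
        return "dead-lettered"    # a visible failure, not a green dashboard
    return "acked"

print(handle("job-1", "Invoice INV-77 total $40", {"invoice_id": "INV-77"}))
print(handle("job-2", "Invoice INV-78 total $12", {"invoice_id": "INV-99"}))  # hallucinated id
```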

Zero-Downtime AI Deployments: It's a Distributed Systems Problem

· 10 min read
Tian Pan
Software Engineer

In April 2025, OpenAI shipped a system prompt update to GPT-4o. Within hours, 180 million users noticed ChatGPT had become obsequiously flattering. The failure wasn't caught by monitoring. It was caught by Twitter. Rollback took three days.

That incident revealed something the AI industry had been quietly avoiding: prompt changes are production deployments. And most teams treat them like config file edits.

The core problem with AI deployments is that you're not deploying one thing — you're deploying four: model weights, prompt text, tool schemas, and the context structure they all assume. Each can drift independently. Each can be partially rolled out. And unlike a broken API endpoint, AI failures are often probabilistic, gradual, and invisible until they've already affected a large fraction of your traffic.

This is the distributed systems consistency problem, wearing an AI hat.
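
One way to keep the four from drifting is to version them as a single unit and refuse to serve from replicas that disagree with the manifest. The field names and values below are illustrative assumptions, not a prescribed schema.

```python
# Sketch: a pinned release manifest covering all four deployable artifacts.

MANIFEST = {
    "release": "assistant-2025-06-02",
    "model": "gpt-4o-2024-11-20",     # pinned snapshot, not a floating alias
    "prompt_sha": "b41c9e",           # hash of the exact system prompt text
    "tool_schema_version": 7,
    "context_format": "v3",
}

def compatible(running: dict, manifest: dict = MANIFEST) -> bool:
    # A replica whose components disagree with the manifest is a partial
    # rollout; route its traffic back to the previous release instead.
    return all(running.get(k) == v for k, v in manifest.items() if k != "release")

replica = {"model": "gpt-4o-2024-11-20", "prompt_sha": "a917d0",
           "tool_schema_version": 7, "context_format": "v3"}
print(compatible(replica))   # False: a new prompt reached this replica early
```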

The CAP Theorem for AI Agents: Why Your Agent Fails Completely When It Should Degrade Gracefully

· 9 min read
Tian Pan
Software Engineer

Your AI agent works perfectly until it doesn't. One tool goes down — maybe the search API is rate-limited, maybe the database is slow, maybe the code execution sandbox times out — and the entire agent collapses. Not a partial answer, not a degraded response. A complete failure. A blank screen or a hallucinated mess.

This is not a bug. It is a design choice, and almost nobody made it deliberately. The agent architectures we are building today implicitly choose "fail completely" because nobody designed the partial-availability path. If you have built distributed systems before, this pattern should feel painfully familiar. It is the CAP theorem, showing up in a new disguise.
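
A tiny sketch of the partial-availability path, with the tool outage simulated: when the dependency is down, return an honest, labeled degraded answer instead of collapsing the whole request.

```python
# Sketch: degrade gracefully when one tool fails.

def search_tool(query: str) -> str:
    raise TimeoutError("search API rate-limited")   # the dependency is down

def answer(query: str) -> str:
    try:
        evidence = search_tool(query)
        return f"Based on current results: {evidence}"
    except Exception:
        # Choose availability over freshness, and say so in the answer.
        return ("Search is unavailable right now; based on what I already "
                f"know about {query!r}: ...")

print(answer("latest Kafka release"))
```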