3 posts tagged with "multi-agent-systems"

The Agentic Deadlock: When AI Agents Wait for Each Other Forever

· 9 min read
Tian Pan
Software Engineer

Here is an uncomfortable fact about multi-agent AI systems: when you let two or more LLM-powered agents share resources and make decisions concurrently, they deadlock at rates between 25% and 95%. Not occasionally. Not under edge-case load. Under normal operating conditions with standard prompting, the moment agents must coordinate simultaneously, the system seizes up.

This is not a theoretical concern. Coordination breakdowns account for roughly 37% of multi-agent system failures in production, and systems without formal orchestration experience failure rates between 41% and 87%. The classic distributed systems failure modes — deadlock, livelock, priority inversion — are back, and they are wearing new clothes.
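The failure mode is usually the textbook circular wait: agent A holds resource X while waiting for Y, and agent B holds Y while waiting for X. The classic mitigation carries over directly from distributed systems: impose a single global ordering on shared resources so a circular wait can never form. A minimal sketch (resource names are illustrative, not from any specific framework):

```python
import threading

# Two shared resources both agents need, e.g. a vector store and a
# rate-limited API client. Names here are purely illustrative.
vector_store = threading.Lock()
api_client = threading.Lock()

def agent(name, results, first, second):
    # Every agent acquires locks in the SAME global order, which
    # rules out the circular wait that produces deadlock.
    with first:
        with second:
            results.append(name)

def run_agents():
    results = []
    # Fixed global ordering: vector_store always before api_client.
    order = (vector_store, api_client)
    threads = [
        threading.Thread(target=agent, args=(n, results) + order)
        for n in ("agent-a", "agent-b")
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join(timeout=5)
    return results
```

If the two agents instead acquired the locks in opposite orders, the same code could seize up exactly as described above; the ordering discipline is what makes the concurrent run safe.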

Conway's Law for AI Systems: Your Org Chart Is Already Your Agent Architecture

· 9 min read
Tian Pan
Software Engineer

Every company shipping multi-agent systems eventually discovers the same uncomfortable truth: their agents don't reflect their architecture diagrams. They reflect their org charts.

The agent that handles customer onboarding doesn't coordinate well with the agent that manages billing — not because of a technical limitation, but because the teams that built them don't talk to each other either.

Conway's Law — the observation that systems mirror the communication structures of the organizations that build them — is more than fifty years old and has never been more relevant. In the era of agentic AI, the law doesn't just apply. It intensifies.

When your "system" is a network of autonomous agents making decisions, every organizational seam becomes a potential failure point where context is lost, handoffs break, and agents optimize for local metrics that conflict with each other.

Deep Research Agents: Why Most Implementations Loop Forever or Stop Too Early

· 10 min read
Tian Pan
Software Engineer

Standard LLMs without iterative retrieval score below 10% on multi-step web research benchmarks. Deep research agents — systems that search, read, synthesize, and re-query in a loop — score above 50%. That five-fold improvement explains why every serious AI product team is building one. What it doesn't explain is why most of those implementations either run up a $15 bill chasing irrelevant tangents or declare victory after two shallow searches.

The core problem isn't building the loop. It's knowing when the loop should stop. And that turns out to be a surprisingly deep systems design challenge that touches convergence detection, cost economics, source reliability, and multi-agent coordination.