2 posts tagged with "agent-orchestration"

The Composition Testing Gap: Why Your Agents Pass Every Test but Fail Together

· 9 min read
Tian Pan
Software Engineer

Your planner agent passes its eval suite at 94%. Your researcher agent scores even higher. Your synthesizer agent nails every benchmark you throw at it. You compose them into a pipeline, deploy to production, and watch it produce confidently wrong answers that no individual agent would ever generate on its own.

This is the composition testing gap — the systematic blind spot where individually validated agents fail in ways that no single-agent analysis can predict. Research on multi-agent LLM systems shows that 67% of production failures stem from inter-agent interactions rather than individual agent defects. You're testing the atoms but shipping the molecule, and molecular behavior is not the sum of atomic properties.
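The gap can be made concrete with a toy pipeline. In this sketch (all names are illustrative, not from any framework), two stub "agents" each pass their own unit-level checks, yet the composed pipeline produces a confidently wrong answer because one agent's output format silently violates the other's input assumption:

```python
# Toy illustration of the composition testing gap. The individual checks
# pass; only a pipeline-level check would catch the interface mismatch.

def researcher(question: str) -> dict:
    # Looks healthy in isolation: returns a structured finding.
    return {"finding": f"answer to {question}", "confidence": 0.9}

def synthesizer(findings: list[str]) -> str:
    # Looks healthy in isolation: joins plain-text findings.
    return "; ".join(findings)

# Unit-level evals: both agents pass on their own.
assert researcher("q")["confidence"] > 0.5
assert synthesizer(["a", "b"]) == "a; b"

def pipeline(question: str) -> str:
    result = researcher(question)
    # The defect exists only here: synthesizer expects list[str] but
    # receives a dict, so it joins the dict's KEYS, not the finding text.
    return synthesizer(result)

print(pipeline("q"))  # "finding; confidence" — confidently wrong output
```

Neither agent is defective by its own tests; the failure lives entirely in the seam between them, which is exactly what single-agent evals never exercise.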

DAG-First Agent Orchestration: Why Linear Chains Break at Scale

· 10 min read
Tian Pan
Software Engineer

Most multi-agent systems start as a chain. Agent A calls Agent B, B calls C, C calls D. It works fine in demos, and it works fine with five agents on toy tasks. Then you add a sixth agent, a seventh, and the pipeline that once ran in eight seconds starts taking forty. You add a retry on step three, and now failures on step three silently cascade into corrupted state at step six. You try to add a parallel branch and discover your framework was never designed for that.

The problem is not the number of agents. The problem is the execution model. Linear chains serialize inherently parallel work, propagate failures in only one direction, and make partial recovery structurally impossible. The fix is not adding more infrastructure on top — it is rebuilding the execution model around a directed acyclic graph from the start.
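A minimal sketch shows what the DAG execution model buys you. In this illustrative executor (the `Node` and `run_dag` names are assumptions, not any particular framework's API), every node whose dependencies are satisfied runs concurrently, and failures surface per node instead of cascading silently down a chain:

```python
# Minimal DAG executor sketch: nodes declare dependencies; all ready
# nodes run in parallel, where a linear chain would run one at a time.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    name: str
    fn: Callable[[dict], object]          # receives its dependencies' results
    deps: list[str] = field(default_factory=list)

def run_dag(nodes: list[Node]) -> dict:
    by_name = {n.name: n for n in nodes}
    results: dict[str, object] = {}
    remaining = set(by_name)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # Every node whose dependencies are complete is ready now.
            ready = [n for n in remaining
                     if all(d in results for d in by_name[n].deps)]
            if not ready:
                raise ValueError("cycle or missing dependency")
            futures = {
                n: pool.submit(by_name[n].fn,
                               {d: results[d] for d in by_name[n].deps})
                for n in ready
            }
            for n, fut in futures.items():
                results[n] = fut.result()  # failures surface per node
            remaining -= set(ready)
    return results

# Two research steps fan out from the plan in parallel, then a merge joins them.
out = run_dag([
    Node("plan", lambda _: "query"),
    Node("search_a", lambda r: r["plan"] + ":a", deps=["plan"]),
    Node("search_b", lambda r: r["plan"] + ":b", deps=["plan"]),
    Node("merge", lambda r: sorted([r["search_a"], r["search_b"]]),
         deps=["search_a", "search_b"]),
])
print(out["merge"])  # ['query:a', 'query:b']
```

Because dependencies are explicit, the executor knows that `search_a` and `search_b` can run concurrently, that a failure in one does not corrupt the other, and that recovery can restart from a single node rather than from the top of the chain.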