Context Poisoning in Long-Running AI Agents
Your agent completes step three of a twelve-step workflow and confidently reports that the target API returned a 200 status. It didn't — that result was from step one, still sitting in the context window. By step nine, the agent has made four downstream calls based on a fact that was never true. The workflow "succeeds." No error is logged.
This is context poisoning: not a security attack, but a reliability failure mode where the agent's own accumulated context becomes a source of wrong information. As agents run longer, interact with more tools, and manage more state, the probability of this failure climbs sharply. And unlike crashes or exceptions, context poisoning is invisible to standard monitoring.
Why Context Degrades in Long-Running Sessions
The transformer attention mechanism has a dirty secret: it doesn't treat all tokens equally over long sequences. Research on frontier models shows a consistent U-shaped pattern — models attend well to tokens at the start and end of a context window, and poorly to content buried in the middle. Once a context exceeds roughly 50% of its window capacity, performance starts to degrade measurably, and this happens well before the stated maximum context length.
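A cheap operational guard is to treat that research threshold as a soft budget rather than waiting for the hard limit. A minimal sketch, assuming tiktoken for token counting; the 128k window and 50% cutoff are illustrative numbers, not a recommendation:

```python
import tiktoken

# Illustrative numbers: a 128k-token window with a 50% soft budget,
# reflecting the observation that quality degrades well before the max.
CONTEXT_WINDOW = 128_000
SOFT_BUDGET = 0.5

enc = tiktoken.get_encoding("cl100k_base")

def context_fill(messages: list[str]) -> float:
    """Fraction of the context window currently consumed."""
    used = sum(len(enc.encode(m)) for m in messages)
    return used / CONTEXT_WINDOW

def should_compact(messages: list[str]) -> bool:
    # Past the soft budget, summarize or prune before continuing,
    # rather than running up against the hard limit.
    return context_fill(messages) > SOFT_BUDGET
```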
For agents, four compounding mechanisms turn this quirk into a reliability crisis:
Context poisoning occurs when a hallucinated fact gets embedded in the context and subsequently treated as ground truth. The agent asked a tool whether a file existed, misread the response, and wrote "file confirmed present" into its running notes. Every downstream step that touches that file now reasons on false premises.
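One structural defense is to forbid free-floating notes: every derived claim gets stored next to the raw tool output it came from, so later steps can re-check the paraphrase against the evidence. A minimal sketch; the ledger shape and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    step: int          # workflow step that produced the claim
    statement: str     # what the agent believes, e.g. "file confirmed present"
    tool: str          # which tool the belief was derived from
    raw_output: str    # the verbatim tool response, kept for re-verification

ledger: list[Claim] = []

def record(step: int, statement: str, tool: str, raw_output: str) -> None:
    ledger.append(Claim(step, statement, tool, raw_output))

def audit(step: int) -> Claim | None:
    """Return the claim a given step relied on, so a reviewer (human or
    model) can compare the paraphrase against the raw tool output."""
    for claim in reversed(ledger):
        if claim.step == step:
            return claim
    return None
```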
Context distraction appears at high token counts. As the session history grows, models begin to pattern-match against past steps rather than synthesize new plans. An agent on its hundredth tool call starts to imitate its previous fifty, reinforcing historical patterns instead of responding to current state.
Context confusion emerges from tool definition sprawl. Each tool definition added to the system prompt consumes tokens and introduces decision overhead. Studies show accuracy degrades measurably as the number of available tools grows, with smaller models failing first and the degradation compounding non-linearly once the tool count rises past thirty.
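The mitigation this implies is per-task tool selection instead of shipping the full catalog on every call. A sketch using naive keyword overlap as a stand-in for a real relevance model (production systems typically use embedding similarity):

```python
def select_tools(task: str, tools: dict[str, str], k: int = 10) -> list[str]:
    """Pick the k tool names whose descriptions best overlap the task text.

    `tools` maps tool name -> natural-language description. Keyword
    overlap is purely illustrative; the point is to keep the tool count
    far below the range where degradation compounds.
    """
    task_words = set(task.lower().split())
    scored = sorted(
        tools,
        key=lambda name: len(task_words & set(tools[name].lower().split())),
        reverse=True,
    )
    return scored[:k]
```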
Context clash is the most damaging. When an agent takes a wrong turn midway through a task, the incorrect reasoning remains in the context window and continues to influence all subsequent steps. Benchmark data shows a 39% average performance drop when earlier errors persist in context — the model doesn't detect the contradiction; it rationalizes around it.
The Cascading Failure Pattern
The dangerous property of context poisoning isn't the initial wrong fact. It's the downstream amplification.
An inventory agent hallucinates a nonexistent SKU. This wrong value gets passed to a pricing tool, which computes a cost. The cost gets passed to a shipping estimator. The shipping estimate triggers a customer notification. By the time a human sees an anomalous order, the original error has been laundered through four systems and presented with the full appearance of legitimate work. The agent didn't fail catastrophically — it succeeded at each step, operating on corrupted inputs.
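The cheapest place to break this chain is at the hand-offs. In the sketch below, every function and dependency name is hypothetical; the point is that a validation gate makes a hallucinated SKU fail loudly at step one instead of surfacing four systems later:

```python
class PoisonedInputError(Exception):
    """Raised when a value fails validation at a pipeline hand-off."""

def run_order_pipeline(sku: str, inventory, pricing, shipping, notifier):
    # Gate 1: confirm the SKU exists before anything downstream consumes it.
    if not inventory.exists(sku):
        raise PoisonedInputError(f"SKU {sku!r} not found in inventory")

    cost = pricing.quote(sku)
    # Gate 2: sanity-check the computed value, not just its type.
    if cost <= 0:
        raise PoisonedInputError(f"implausible cost {cost} for SKU {sku!r}")

    estimate = shipping.estimate(sku, cost)
    notifier.send(sku=sku, cost=cost, estimate=estimate)
```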
This pattern is especially sharp in multi-agent pipelines. When Agent A's output feeds Agent B, which feeds Agent C, a single poisoned fact in step three of Agent A's workflow can produce a subtly wrong final output that looks completely plausible. The error doesn't trigger any exception handler because nothing technically malfunctioned.
The implication: standard error monitoring catches binary failures. It's entirely blind to the category of failures where the agent runs to completion and returns a wrong answer.
What Session Health Actually Looks Like
Most teams think of session health as "did the agent finish without crashing." Production experience reveals a different set of signals worth watching:
Tool output staleness: When an agent fetches a resource in step two and references it in step eleven, is it re-querying or relying on a nine-step-old value? Stale reads are especially dangerous in workflows that span minutes or hours, where external state can change underneath an in-flight agent.
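Instrumenting this is straightforward: stamp each fetched value with the step it was retrieved at and refuse silent reads past a freshness budget. A minimal sketch, with the five-step budget chosen purely for illustration:

```python
import time
from dataclasses import dataclass

MAX_AGE_STEPS = 5  # illustrative freshness budget

@dataclass
class FetchedValue:
    value: object
    step: int
    fetched_at: float

class StalenessTracker:
    def __init__(self):
        self._cache: dict[str, FetchedValue] = {}

    def put(self, key: str, value: object, step: int) -> None:
        self._cache[key] = FetchedValue(value, step, time.time())

    def get(self, key: str, current_step: int):
        entry = self._cache[key]
        if current_step - entry.step > MAX_AGE_STEPS:
            # Don't serve a nine-step-old value silently: force a re-query.
            raise LookupError(f"{key} is stale (fetched at step {entry.step})")
        return entry.value
```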
Reasoning coherence drift: Intermediate reasoning steps that progressively contradict earlier steps are an early indicator. An agent that says "the file does not exist" in step four but "processing file contents" in step seven has experienced context corruption, not goal completion.
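Even a crude consistency check catches the example above, provided assertions are normalized into structured triples. A sketch: track (subject, predicate) pairs and flag any step that flips an earlier truth value:

```python
def check_assertion(ledger: dict[tuple[str, str], tuple[bool, int]],
                    subject: str, predicate: str,
                    value: bool, step: int) -> None:
    """Record (subject, predicate) -> truth value; raise on a flip.

    E.g. ("report.csv", "exists") asserted False at step 4 and then
    implicitly True at step 7 is exactly the corruption to surface.
    """
    key = (subject, predicate)
    if key in ledger:
        prior_value, prior_step = ledger[key]
        if prior_value != value:
            raise RuntimeError(
                f"step {step} contradicts step {prior_step}: "
                f"{subject} {predicate}={prior_value} -> {value}"
            )
    ledger[key] = (value, step)
```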
Tool call redundancy: Agents under context pressure frequently re-invoke tools they've already called, often arriving at slightly different results. A spike in duplicate tool calls signals the agent has lost track of what it already knows.
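Detecting this needs only a canonical fingerprint per call. A minimal sketch that hashes the tool name plus JSON-normalized arguments and flags repeats past an illustrative threshold:

```python
import hashlib
import json
from collections import Counter

class RedundancyMonitor:
    def __init__(self, alert_threshold: int = 3):
        self.counts: Counter[str] = Counter()
        self.alert_threshold = alert_threshold  # illustrative cutoff

    def fingerprint(self, tool: str, args: dict) -> str:
        # sort_keys makes semantically identical calls hash identically.
        canonical = json.dumps({"tool": tool, "args": args}, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

    def observe(self, tool: str, args: dict) -> bool:
        """Record a call; return True once it crosses the repeat threshold."""
        fp = self.fingerprint(tool, args)
        self.counts[fp] += 1
        return self.counts[fp] >= self.alert_threshold
```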
Instruction compliance decay: The initial system prompt is typically the most important input, but it becomes the least attended content as the session grows. Agents that initially follow formatting and constraint rules will gradually drift away from them as context fills. Compliance monitoring against fixed invariants is a cheap proxy for overall context health.
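Those invariants can be checked mechanically on every output. A sketch with two example rules (valid JSON, no raw email addresses) standing in for whatever the real system prompt mandates:

```python
import json
import re

def _is_json(text: str) -> bool:
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

# Each invariant: (name, predicate over the agent's output text).
# Both rules are illustrative stand-ins for the actual prompt constraints.
INVARIANTS = [
    ("valid_json", _is_json),
    ("no_raw_email", lambda text: not re.search(r"\b\S+@\S+\.\S+\b", text)),
]

def compliance_rate(output: str) -> float:
    """Fraction of invariants satisfied; a downward trend across steps
    is the decay signal worth alerting on."""
    passed = sum(1 for _, check in INVARIANTS if check(output))
    return passed / len(INVARIANTS)
```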
Checkpointing: Designing for Recovery, Not Just Success
The checkpoint-restore pattern, borrowed from distributed systems, is the most reliable structural defense against context poisoning. The idea: at defined milestones, persist enough agent state to restart from a known-good position rather than replaying from the beginning.
For agentic workflows, effective checkpointing involves three elements:
State snapshots at decision points: Before any irreversible action — sending a message, writing to a database, calling an external API — capture the full agent state. This isn't just context; it's the structured representation of what the agent believes to be true. If subsequent steps reveal corruption, you can rewind to the last clean snapshot.
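A sketch of the snapshot-and-restore half of this element, assuming the agent's structured state serializes to JSON; the directory layout and key names are illustrative:

```python
import json
import time
from pathlib import Path

CHECKPOINT_DIR = Path("checkpoints")  # illustrative location

def save_checkpoint(step: int, beliefs: dict, pending_actions: list) -> Path:
    """Persist the agent's structured state before an irreversible action."""
    CHECKPOINT_DIR.mkdir(exist_ok=True)
    path = CHECKPOINT_DIR / f"step_{step:04d}.json"
    path.write_text(json.dumps({
        "step": step,
        "beliefs": beliefs,              # what the agent holds to be true
        "pending_actions": pending_actions,
        "saved_at": time.time(),
    }, indent=2))
    return path

def restore_latest(before_step: int | None = None) -> dict:
    """Rewind to the newest snapshot, optionally one strictly before a
    step known to be corrupted."""
    snapshots = sorted(CHECKPOINT_DIR.glob("step_*.json"))
    if before_step is not None:
        snapshots = [p for p in snapshots
                     if int(p.stem.split("_")[1]) < before_step]
    if not snapshots:
        raise FileNotFoundError("no clean checkpoint to restore")
    return json.loads(snapshots[-1].read_text())
```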
