
22 posts tagged with "context-engineering"


The Action Space Problem: Why Giving Your AI Agent More Tools Makes It Worse

9 min read
Tian Pan
Software Engineer

There's a counterintuitive failure mode that most teams encounter when scaling AI agents: the more capable you make the agent's toolset, the worse it performs. You add tools to handle more cases. Accuracy drops. You add better tools. It gets slower and starts picking the wrong ones. You add orchestration to manage the tool selection. Now you've rebuilt complexity on top of the original complexity, and the thing barely works.

The instinct to add is wrong. The performance gains in production agents come from removing things.

Four Strategies for Engineering Agent Context That Actually Scales

8 min read
Tian Pan
Software Engineer

There's a failure mode in production agents that most engineers discover the hard way: your agent works well on the first few steps, then starts hallucinating halfway through a task, misses details it was explicitly given at the start, or issues a tool call that contradicts instructions it received twenty steps ago. The model didn't change. The task didn't get harder. The context did.

Long-running agents accumulate history the way browser tabs accumulate memory — silently, relentlessly, until something breaks. Every tool response, observation, and intermediate reasoning trace gets appended to the window. The model sees all of it, which means it has to reason through all of it on every subsequent step. As context grows, precision drops, reasoning weakens, and the model misses information it should catch. This is context rot, and it's one of the most common failure modes in production agents.
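To make the accumulation concrete, here is a minimal sketch of the append-only loop behind this failure; the function names are illustrative stand-ins, not a specific SDK:

```python
# Minimal sketch of the append-only agent loop that produces context rot.
# call_model and run_tool are hypothetical stand-ins, not a real SDK.

def call_model(messages: list[dict]) -> dict:
    # Stand-in for an LLM call; a real agent would hit a model API here.
    return {"role": "assistant", "content": f"(model re-read {len(messages)} messages)"}

def run_tool(name: str) -> str:
    # Stand-in for a tool call; real tool results are often thousands of tokens.
    return "...large tool output..."

context: list[dict] = [{"role": "system", "content": "You are a coding agent."}]

for step in range(20):
    context.append({"role": "user", "content": f"step {step}"})
    context.append(call_model(context))                               # model re-reads everything
    context.append({"role": "tool", "content": run_tool("search")})   # kept forever

print(len(context))  # 61 messages after 20 steps; nothing is summarized or dropped
```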

Context Engineering: Memory, Compaction, and Tool Clearing for Production Agents

10 min read
Tian Pan
Software Engineer

Most production AI agent failures don't happen because the model ran out of context. They happen because the model drifted long before it hit the limit. Forrester has named "agent drift" the silent killer of AI-accelerated development, and its 2025 research shows that nearly 65% of enterprise AI failures trace back to context drift or memory loss during multi-step reasoning, not raw token exhaustion.

The distinction matters. A hard context limit is clean: the API rejects the request, the agent stops, you get an error you can handle. Context rot is insidious: the model keeps running, keeps generating output, but performance quietly degrades. GPT-4's accuracy drops from 98.1% to 64.1% based solely on where in the context window information is positioned. You don't get an error signal — you get subtly wrong answers.

This post covers the three primary tools for managing context in production agents — compaction, tool-result clearing, and external memory — along with the practical strategies for applying them before your agent drifts.
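As a preview of the simplest of the three, here is a hedged sketch of tool-result clearing: older tool outputs are swapped for short placeholders while the most recent ones stay verbatim. The message shape is an assumption, not a specific SDK's format.

```python
# Illustrative tool-result clearing: keep the last `keep_recent` tool outputs
# intact and replace older ones with a stub the model can act on.

def clear_tool_results(messages: list[dict], keep_recent: int = 3) -> list[dict]:
    tool_indices = [i for i, m in enumerate(messages) if m.get("role") == "tool"]
    stale = set(tool_indices[:-keep_recent] if keep_recent else tool_indices)
    return [
        {"role": "tool", "content": "[cleared; re-run the tool if this is needed]"}
        if i in stale else m
        for i, m in enumerate(messages)
    ]
```

In practice you would run a pass like this right before each model call once the history crosses a size threshold.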

CLAUDE.md and AGENTS.md: The Configuration Layer That Makes AI Coding Agents Actually Follow Your Rules

9 min read
Tian Pan
Software Engineer

Your AI coding agent doesn't remember yesterday. Every session starts cold: it doesn't know you use yarn, not npm, that the any type is banned, or that the src/generated/ directory is sacred and should never be edited by hand. So it generates code with the wrong package manager, introduces any where you've banned it, and occasionally overwrites generated files you'll spend an hour recovering. You correct it. Tomorrow it makes the same mistake. You correct it again.

This is not a model quality problem. It's a configuration problem — and the fix is a plain Markdown file.

CLAUDE.md, AGENTS.md, and their tool-specific cousins are the briefing documents AI coding agents read before every session. They encode what the agent would otherwise have to rediscover or be corrected on: which commands to run, which patterns to avoid, how your team's workflow is structured, and which directories are off-limits. They're the equivalent of a thorough engineering onboarding document, compressed into a form optimized for machine consumption.
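To make that concrete, a minimal sketch of what such a file might contain; the specific rules below are illustrative, drawn from the failure modes above rather than from any particular team's setup:

```markdown
# CLAUDE.md (illustrative example)

## Commands
- Install dependencies with yarn, never npm
- Run tests with: yarn test

## Conventions
- TypeScript strict mode; the any type is banned
- Never edit files under src/generated/ (codegen output; regenerate instead)
```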

Effective Context Engineering for AI Agents

11 min read
Tian Pan
Software Engineer

Nearly 65% of enterprise AI failures in 2025 traced back to context drift or memory loss during multi-step reasoning — not model capability issues. If your agent is making poor decisions or losing coherence across a long task, the most likely cause is not the model. It is what is sitting in the context window.

The term "context engineering" is proliferating fast, but the underlying discipline is concrete: active, deliberate management of what enters and exits the LLM's context window at every inference step in an agent's trajectory. Not a prompt. A dynamic information architecture that the engineer designs and the agent traverses. The context window functions as RAM — finite, expensive, and subject to thrashing if you don't manage it deliberately.

Harness Engineering: The Discipline That Determines Whether Your AI Agents Actually Work

10 min read
Tian Pan
Software Engineer

Most teams running AI coding agents are optimizing the wrong variable. They obsess over model selection (Claude vs. GPT vs. Gemini) while treating the surrounding scaffolding as incidental plumbing. But benchmark data and production war stories point the other way: the gap between a model that impresses in a demo and one that ships production code reliably comes almost entirely from the harness around it, not the model itself.

The formula is deceptively simple: Agent = Model + Harness. The harness is everything else — tool schemas, permission models, context lifecycle management, feedback loops, sandboxing, documentation infrastructure, architectural invariants. Get the harness wrong and even a frontier model produces hallucinated file paths, breaks its own conventions twenty turns into a session, and declares a feature done before writing a single test.

Context Engineering: The Discipline That Matters More Than Prompting

9 min read
Tian Pan
Software Engineer

Most engineers building LLM systems spend the first few weeks obsessing over their prompts. They A/B test phrasing, argue about whether to use XML tags or JSON, and iterate on system prompt wording until the model outputs something that looks right. Then they hit production, add real data, memory, and tool calls — and the model starts misbehaving in ways that no amount of prompt tuning can fix. The problem was never the prompt.

The real bottleneck in production LLM systems is context — what information is present in the model's input, in what order, how much of it there is, and whether it's relevant to the decision the model is about to make. Context engineering is the discipline of designing and managing that input space as a first-class system concern. It subsumes prompt engineering the same way software architecture subsumes variable naming: the smaller skill still matters, but it doesn't drive outcomes at scale.

Context Engineering: The Invisible Architecture of Production AI Agents

10 min read
Tian Pan
Software Engineer

Most AI agent bugs are not model bugs. The model is doing exactly what it's told—it's what you're putting into the context that's broken. After a certain point in an agent's execution, the problem isn't capability. It's entropy: the slow accumulation of noise, redundancy, and misaligned attention that degrades every output the model produces. Researchers call this context rot, and every major model—GPT-4.1, Claude Opus 4, Gemini 2.5—exhibits it, at every input length increment, without exception.

Context engineering is the discipline of managing this problem deliberately. It's broader than prompt engineering, which is mostly about the static system prompt. Context engineering covers everything the model sees at inference time: what you include, what you exclude, what you compress, where you position things, and how you preserve cache state across a long-running task.

Why Your AI Agent Wastes Most of Its Context Window on Tools

10 min read
Tian Pan
Software Engineer

You connect your agent to 50 MCP tools. It can query databases, call APIs, read files, send emails, browse the web. On paper, it has everything it needs. In practice, half your production incidents trace back to tool use—wrong parameters, blown context budgets, cascading retry loops that cost ten times what you expected.

Here's the part most tutorials skip: every tool definition you load is a token tax paid upfront, before the agent processes a single user message. With 50+ tools connected, definitions alone can consume 70,000–130,000 tokens per request. That's not a corner case—it's the default state of any agent connected to multiple MCP servers.
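A rough back-of-envelope sketch makes the tax visible; the ~4 characters-per-token heuristic and the schema below are assumptions (a real count needs the model's tokenizer):

```python
import json

# Rough estimate of the upfront token cost of loaded tool definitions.
# Assumes the common ~4 characters/token heuristic; the schema below
# mirrors a typical MCP/function-calling tool definition.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic, good enough for budgeting

tool_def = {
    "name": "query_database",
    "description": (
        "Run a read-only SQL query against the analytics warehouse and "
        "return rows as JSON. Long descriptions like this one add up fast."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "The SQL statement to run"},
            "timeout_ms": {"type": "integer", "description": "Query timeout"},
        },
        "required": ["sql"],
    },
}

per_tool = estimate_tokens(json.dumps(tool_def))
print(f"~{per_tool} tokens for this minimal tool; 50 of them: ~{per_tool * 50:,}")
# Real MCP tool definitions are usually far more verbose than this one;
# at 1,400-2,600 tokens apiece, 50 tools hit the 70,000-130,000 range above.
```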

Context Engineering for Personalization: How to Build Long-Term Memory Into AI Agents

8 min read
Tian Pan
Software Engineer

Most agent demos are stateless. A user asks a question, the agent answers, the session ends — and the next conversation starts from scratch. That's fine for a calculator. It's not fine for an assistant that's supposed to know you.

The gap between a useful agent and a frustrating one often comes down to one thing: whether the system remembers what matters. This post breaks down how to architect durable, personalized memory into production AI agents — covering the four-phase lifecycle, layered precedence rules, and the specific failure modes that will bite you if you skip the engineering.
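As a hedged preview of what "layered precedence" can look like, a minimal resolution sketch; the layer names and keys are illustrative assumptions, not the post's terminology:

```python
# Illustrative layered-precedence lookup for agent memory: more specific
# layers override more general ones. Layer names and keys are assumptions.

from typing import Any

MEMORY_LAYERS: list[tuple[str, dict[str, Any]]] = [
    ("session", {"tone": "terse"}),                              # this conversation
    ("user",    {"tone": "friendly", "package_manager": "yarn"}),
    ("org",     {"package_manager": "npm", "timezone": "UTC"}),  # broad defaults
]

def resolve(key: str) -> Any:
    # Walk layers from most to least specific; first hit wins.
    for _name, layer in MEMORY_LAYERS:
        if key in layer:
            return layer[key]
    return None

print(resolve("tone"))             # terse  (session beats user)
print(resolve("package_manager"))  # yarn   (user beats org)
print(resolve("timezone"))         # UTC    (org default)
```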