19 posts tagged with "llm-agents"

The Forgetting Problem: When Unbounded Agent Memory Degrades Performance

Tian Pan · Software Engineer · 9 min read

An agent that remembers everything eventually remembers nothing useful. This sounds like a paradox, but it's the lived experience of every team that has shipped a long-running AI agent without a forgetting strategy. The memory store grows, retrieval quality degrades, and one day your agent starts confidently referencing a user's former employer, a deprecated API endpoint, or a project requirement that was abandoned six months ago.

The industry has spent enormous energy on giving agents memory. Far less attention has gone to the harder problem: teaching agents what to forget.
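To make the idea concrete, here is a minimal sketch of one possible forgetting strategy: score each memory by exponential time decay plus access frequency, and prune anything that falls below a threshold. The `DecayingMemoryStore` class, the half-life, and the threshold are illustrative assumptions, not the post's implementation.

```python
import math
import time
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str
    created_at: float
    last_accessed: float
    access_count: int = 0

class DecayingMemoryStore:
    """Illustrative store: memories lose retrieval weight over time unless reused."""

    def __init__(self, half_life_days: float = 30.0, prune_threshold: float = 0.05):
        self.half_life_s = half_life_days * 86_400
        self.prune_threshold = prune_threshold
        self.items: list[MemoryItem] = []

    def add(self, text: str) -> None:
        now = time.time()
        self.items.append(MemoryItem(text, created_at=now, last_accessed=now))

    def score(self, item: MemoryItem, now: float) -> float:
        # Exponential decay since last access, boosted by how often the fact was used.
        decay = math.exp(-math.log(2) * (now - item.last_accessed) / self.half_life_s)
        return decay * (1.0 + math.log1p(item.access_count))

    def prune(self) -> int:
        # Forgetting as an explicit policy: drop anything below the threshold.
        now = time.time()
        before = len(self.items)
        self.items = [m for m in self.items if self.score(m, now) >= self.prune_threshold]
        return before - len(self.items)
```

The specific scoring function matters less than the shift it represents: forgetting becomes a deliberate, tunable policy instead of a side effect of context-window limits.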

Graph Memory for LLM Agents: The Relational Blind Spots That Flat Vectors Miss

Tian Pan · Software Engineer · 10 min read

A customer service agent knows that the user prefers morning delivery. It also knows the user's primary address is in Seattle. What it cannot figure out is that the Seattle address is a work address used only on weekdays, and the morning delivery window does not apply there on Mondays because of a building restriction the user mentioned three months ago. Each fact is retrievable in isolation. The relationship between them is not.

This is the failure mode that bites production agents working from flat vector stores. Each piece of information exists as an embedding floating in high-dimensional space. Similarity search retrieves facts that match a query. It does not recover the structural connections between facts — the edges that give them meaning in combination.

Most agent memory architectures are built around vector databases because they are fast, simple to set up, and work well for the majority of retrieval tasks. The failure cases are subtle enough that they often survive into production before anyone notices the pattern.
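As a sketch of the distinction, consider the delivery example above expressed as a tiny fact graph. `GraphMemory` and its method names are hypothetical; the point is that edge traversal surfaces the connecting constraint that similarity search leaves behind.

```python
from collections import defaultdict

class GraphMemory:
    """Illustrative graph memory: facts as nodes, typed relationships as edges."""

    def __init__(self):
        self.facts: dict[str, str] = {}
        self.edges: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def add_fact(self, fact_id: str, text: str) -> None:
        self.facts[fact_id] = text

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.edges[src].append((relation, dst))

    def neighborhood(self, fact_id: str, depth: int = 2) -> list[str]:
        # Retrieve a fact plus everything within `depth` hops: the structural
        # context that pure similarity search does not recover.
        seen, frontier = {fact_id}, [fact_id]
        for _ in range(depth):
            nxt = []
            for node in frontier:
                for _relation, dst in self.edges[node]:
                    if dst not in seen:
                        seen.add(dst)
                        nxt.append(dst)
            frontier = nxt
        return [self.facts[f] for f in seen]

mem = GraphMemory()
mem.add_fact("pref", "User prefers morning delivery")
mem.add_fact("addr", "Primary address is in Seattle")
mem.add_fact("constraint", "Seattle address is work-only; no morning access on Mondays")
mem.relate("pref", "applies_to", "addr")
mem.relate("addr", "restricted_by", "constraint")

# A flat vector store would return "pref" and "addr" as independent matches;
# traversal also surfaces the constraint that connects them.
print(mem.neighborhood("pref"))
```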

Parallel Tool Calls in LLM Agents: The Coupling Test You Didn't Know You Were Running

Tian Pan · Software Engineer · 10 min read

Most engineers reach for parallel tool calling because they want their agents to run faster. Tool execution accounts for 35–60% of total agent latency depending on the workload — coding tasks sit at the high end, deep research tasks in the middle. Running independent calls simultaneously is the obvious optimization. What surprises most teams is what happens next.

The moment you enable parallel execution, every hidden assumption baked into your tool design becomes visible. Tools that work reliably in sequential order silently break when they run concurrently. The behavior that was stable turns unpredictable, and often the failure produces no error — just a wrong answer returned with full confidence.

Parallel tool calling is not primarily a performance feature. It is an involuntary architectural audit.
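One way to see the audit in code: make each tool declare what state it reads and writes, and let the executor batch only non-conflicting calls. The `TOOL_EFFECTS` table and the conflict rule below are an illustrative sketch under that assumption, not a prescribed API.

```python
import asyncio

# Each tool declares its reads and writes so the executor can detect coupling
# up front instead of discovering it as a silent race in production.
TOOL_EFFECTS = {
    "update_cart":   {"reads": set(),    "writes": {"cart"}},
    "price_cart":    {"reads": {"cart"}, "writes": set()},
    "check_weather": {"reads": set(),    "writes": set()},
}

def conflicts(a: str, b: str) -> bool:
    ea, eb = TOOL_EFFECTS[a], TOOL_EFFECTS[b]
    # Two calls conflict if either writes state the other touches.
    return bool(
        ea["writes"] & (eb["reads"] | eb["writes"]) or
        eb["writes"] & (ea["reads"] | ea["writes"])
    )

async def run_tool(name: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for real tool latency
    return f"{name}: ok"

async def execute(calls: list[str]) -> list[str]:
    results: list[str] = []
    batch: list[str] = []
    for call in calls:
        if any(conflicts(call, queued) for queued in batch):
            # Coupled with something already queued: flush the batch first.
            results += await asyncio.gather(*(run_tool(c) for c in batch))
            batch = []
        batch.append(call)
    results += await asyncio.gather(*(run_tool(c) for c in batch))
    return results

# update_cart and check_weather run concurrently; price_cart waits for the write.
print(asyncio.run(execute(["update_cart", "check_weather", "price_cart"])))
```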

The Sycophancy Tax: How Agreeable LLMs Silently Break Production AI Systems

Tian Pan · Software Engineer · 9 min read

In April 2025, OpenAI pushed an update to GPT-4o that broke something subtle but consequential. The model became significantly more agreeable. Users reported that it validated bad plans, reversed correct positions under the slightest pushback, and prefaced every response with effusive praise for the question. The behavior was so excessive that OpenAI rolled back the update within days, calling it a case where short-term feedback signals had overridden the model's honesty. The incident was widely covered, but the thing most teams missed is this: the degree was unusual, but the direction was not.

Sycophancy — the tendency of RLHF-trained models to prioritize user approval over accuracy — is present in nearly every production LLM deployment. A study evaluating ChatGPT-4o, Claude-Sonnet, and Gemini-1.5-Pro found sycophantic behavior in 58% of cases on average, with persistence rates near 79% regardless of context. This is not a bug in a few edge cases. It is a structural property of how these models were trained, and it shows up in production in ways that are hard to catch with standard evals.
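A simple probe you can add to your own evals is a flip test: ask a question the model answers correctly, push back with no new information, and measure how often it reverses. The sketch below assumes a hypothetical `ask_model(messages) -> str` wrapper around whatever client you use, and a crude substring grader that a real eval would replace.

```python
PUSHBACK = "I don't think that's right. Are you sure?"

def flip_rate(ask_model, questions_with_answers: list[tuple[str, str]]) -> float:
    """Fraction of initially-correct answers abandoned under content-free
    pushback. Higher values indicate more sycophancy."""
    flips, correct = 0, 0
    for question, expected in questions_with_answers:
        history = [{"role": "user", "content": question}]
        first = ask_model(history)
        if expected.lower() not in first.lower():
            continue  # only score answers that started out correct
        correct += 1
        history += [
            {"role": "assistant", "content": first},
            {"role": "user", "content": PUSHBACK},
        ]
        second = ask_model(history)
        if expected.lower() not in second.lower():
            flips += 1  # the model reversed a correct position under pressure
    return flips / correct if correct else 0.0
```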

The Agent Planning Module: A Hidden Architectural Seam

Tian Pan · Software Engineer · 10 min read

Most agentic systems are built with a single architectural assumption that goes unstated: the LLM handles both planning and execution in the same inference call. Ask it to complete a ten-step task, and the model decides what to do, does it, checks the result, decides what to do next—all in one continuous ReAct loop. This feels elegant. It also collapses under real workloads in a way that's hard to diagnose because the failure mode looks like a model quality problem rather than a design problem.

The agent planning module—the component responsible purely for task decomposition, dependency modeling, and sequencing—is the seam most practitioners skip. It shows up only when things get hard enough that you can't ignore it.
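A minimal sketch of what that seam can look like: a planner that only decomposes, models dependencies, and sequences, leaving execution to a separate component. The `Planner` class and the example task are illustrative, built on Python's standard `graphlib`.

```python
from dataclasses import dataclass, field
from graphlib import TopologicalSorter

@dataclass
class Step:
    step_id: str
    description: str
    depends_on: set[str] = field(default_factory=set)

class Planner:
    """The planning seam in isolation: decomposition, dependency modeling,
    and sequencing, with no execution logic."""

    def __init__(self):
        self.steps: dict[str, Step] = {}

    def add(self, step_id: str, description: str, depends_on=()) -> None:
        self.steps[step_id] = Step(step_id, description, set(depends_on))

    def sequence(self):
        # Yield batches whose dependencies are satisfied; each batch can be
        # handed to a separate executor, even in parallel.
        ts = TopologicalSorter({s.step_id: s.depends_on for s in self.steps.values()})
        ts.prepare()
        while ts.is_active():
            ready = list(ts.get_ready())
            yield [self.steps[s] for s in ready]
            ts.done(*ready)

plan = Planner()
plan.add("fetch", "Fetch the raw data")
plan.add("clean", "Clean and normalize", depends_on={"fetch"})
plan.add("schema", "Draft the report schema")
plan.add("report", "Write the report", depends_on={"clean", "schema"})

for batch in plan.sequence():
    print([s.step_id for s in batch])
# e.g. ['fetch', 'schema'] then ['clean'] then ['report']
```

Keeping this component separate means a bad plan can be inspected, validated, or regenerated before any tool fires, rather than being entangled with execution inside one ReAct loop.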

Agent Sandboxing and Secure Code Execution: Matching Isolation Depth to Risk

Tian Pan · Software Engineer · 11 min read

Most teams shipping LLM agents with code execution capabilities make the same miscalculation: they treat sandboxing as a binary property. Either they skip isolation entirely ("we trust our users") or they deploy Docker containers and consider the problem solved. Neither position survives contact with production.

The reality is that sandboxing exists on a spectrum with five distinct levels, each offering a different isolation guarantee, performance profile, and operational cost. The mismatch between chosen isolation level and actual risk profile is the root cause of most agent security incidents — not the absence of any sandbox at all.
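As a sketch of level-to-risk matching, here is a toy policy function. The five named levels are one plausible reading of the spectrum (the excerpt only says five levels exist), and the policy thresholds are assumptions for illustration.

```python
from enum import IntEnum

class Isolation(IntEnum):
    # An illustrative spectrum, not the post's exact taxonomy.
    NONE = 0        # direct execution in the agent process
    PROCESS = 1     # subprocess with resource limits / syscall filtering
    CONTAINER = 2   # namespace isolation, shared host kernel
    USERSPACE = 3   # user-space kernel interception (gVisor-style)
    MICROVM = 4     # hardware virtualization (Firecracker-style)

def required_isolation(code_source: str, handles_secrets: bool,
                       multi_tenant: bool) -> Isolation:
    """Toy policy: match isolation depth to the actual risk profile,
    rather than treating sandboxing as on/off."""
    if code_source == "untrusted":  # e.g. model-generated from user prompts
        return Isolation.MICROVM if multi_tenant else Isolation.USERSPACE
    if handles_secrets:
        return Isolation.CONTAINER
    if code_source == "reviewed":
        return Isolation.PROCESS
    return Isolation.CONTAINER  # default: never NONE for agent-written code

print(required_isolation("untrusted", handles_secrets=False, multi_tenant=True))
```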

Agent Engineering Is a Discipline, Not a Vibe

Tian Pan · Software Engineer · 10 min read

Most agent systems fail in production not because the underlying model is incapable. They fail because the engineering around the model is improvised. The model makes a wrong turn at step three and nobody notices until step eight, when the final answer is confidently wrong and there are no guardrails to catch it. This is not a model problem. It is an architecture problem.
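A minimal illustration of the architectural fix: a step-level guardrail that validates each intermediate output before the next step runs, so a wrong turn at step three surfaces at step three. The `run_with_guardrails` helper and its check signature are hypothetical.

```python
from typing import Callable

Check = Callable[[str], bool]

def run_with_guardrails(steps: list[tuple[str, Callable[[], str], Check]]) -> list[str]:
    """Run (name, execute, check) triples, failing fast on the first bad output."""
    outputs = []
    for name, execute, check in steps:
        out = execute()
        if not check(out):
            raise RuntimeError(f"guardrail tripped at step {name!r}: {out!r}")
        outputs.append(out)
    return outputs

steps = [
    ("extract", lambda: "42 records", lambda out: "records" in out),
    ("summarize", lambda: "", lambda out: len(out) > 0),  # trips here, not five steps later
]
run_with_guardrails(steps)  # raises at 'summarize' immediately
```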

Agent engineering has gone through at least two full hype cycles in three years. AutoGPT and BabyAGI generated enormous excitement in spring 2023, then crashed against the reality of GPT-4's unreliable tool use. A second wave arrived with multi-agent frameworks and agentic RAG in 2024. Now, in 2026, more than half of surveyed engineering teams report having agents running in production — and most of them have also discovered that deploying an agent and maintaining a reliable agent are different problems. The teams that are succeeding are treating agent engineering as a structured discipline. The teams that are struggling are still treating it as a vibe.