83 posts tagged with "ai-agents"

Agent Credential Rotation: The DevOps Problem Nobody Mapped to AI

· 8 min read
Tian Pan
Software Engineer

Every DevOps team has a credential rotation policy. Most have automated it for their services, CI pipelines, and databases. But the moment you deploy an autonomous AI agent that holds API keys across five different integrations, that rotation policy becomes a landmine. The agent is mid-task — triaging a bug, updating a ticket, sending a Slack notification — and suddenly its GitHub token expires. The process looks healthy. The logs show no crash. But silently, nothing works anymore.

This is the credential rotation problem that nobody mapped from DevOps to AI. Traditional rotation assumes predictable, human-managed workloads with clear boundaries. Autonomous agents shatter every one of those assumptions.
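One way to keep an agent healthy across a rotation is to treat auth failures as a rotation signal rather than a task failure. The sketch below is a minimal illustration with hypothetical names (`CredentialStore`, `AuthError`, `call_with_rotation`), not an API from the post:

```python
class AuthError(Exception):
    """Raised when a call fails with an expired or revoked credential."""

class CredentialStore:
    """Hypothetical store that can mint a fresh token on demand
    (in practice: a secrets manager or OAuth refresh flow)."""
    def __init__(self):
        self._version = 0

    def refresh(self) -> str:
        self._version += 1
        return f"token-v{self._version}"

def call_with_rotation(call, store, token, max_refreshes=1):
    """Retry a tool call once with a freshly rotated credential.

    Without this wrapper, a mid-task rotation surfaces as a silent
    failure: the process stays up, but every call returns an auth error.
    """
    for attempt in range(max_refreshes + 1):
        try:
            return call(token)
        except AuthError:
            if attempt == max_refreshes:
                raise  # credentials are genuinely broken; surface it
            token = store.refresh()  # pick up the rotated credential
```

The key design choice is bounding the refresh count: an unbounded refresh loop would turn a revoked credential into exactly the kind of silent, busy-looking failure the wrapper exists to prevent.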

AI-Assisted Incident Response: Giving Your On-Call Agent a Runbook

· 9 min read
Tian Pan
Software Engineer

Operational toil in engineering organizations rose to 30% in 2025 — the first increase in five years — despite record investment in AI tooling. The reason is not that AI failed. The reason is that teams deployed AI agents without the same rigor they use for human on-call: no runbooks, no escalation paths, no blast-radius constraints. The agent could reason about logs, but nobody told it what it was allowed to do.

The gap between "AI that can diagnose" and "AI that can safely mitigate" is not a model capability problem. It is a systems engineering problem. And solving it requires the same discipline that SRE teams already apply to human operators: structured runbooks, tiered permissions, and mandatory escalation points.
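The tiered-permission idea can be sketched in a few lines. The tiers, action names, and `escalate` hook below are illustrative assumptions, not the post's actual schema:

```python
from enum import IntEnum

class Tier(IntEnum):
    OBSERVE = 0      # read-only: logs, metrics, traces
    MITIGATE = 1     # reversible actions: restart, roll back, scale
    DESTRUCTIVE = 2  # irreversible: delete data, rotate secrets

# Hypothetical runbook mapping each action to the tier it requires.
RUNBOOK = {
    "read_logs": Tier.OBSERVE,
    "restart_service": Tier.MITIGATE,
    "drop_table": Tier.DESTRUCTIVE,
}

def authorize(action: str, agent_tier: Tier, escalate) -> bool:
    """Gate every agent action through the runbook.

    Anything missing from the runbook, or above the agent's tier,
    goes to the human escalation path instead of executing.
    """
    required = RUNBOOK.get(action)
    if required is None or required > agent_tier:
        escalate(action)
        return False
    return True
```

Note the default: an action that is not in the runbook escalates rather than executes, which is the same fail-closed posture SRE teams expect from human operators.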

Backpressure in Agent Pipelines: When AI Generates Work Faster Than It Can Execute

· 9 min read
Tian Pan
Software Engineer

A multi-agent research tool built on a popular open-source stack slipped into a recursive loop and ran for 11 days before anyone noticed. The bill: $47,000. Two agents had been talking to each other non-stop, burning tokens while the team assumed the system was working normally. This is what happens when an agent pipeline has no backpressure.

The problem is structural. When an orchestrator agent decomposes a task into sub-tasks and spawns sub-agents to handle each one, and those sub-agents can themselves spawn further sub-agents or fan out across multiple tool calls, you get exponential work generation. The pipeline produces work faster than it can execute, finish, or even account for. This is the same problem that reactive systems, streaming architectures, and network protocols solved decades ago — and the same solutions apply.
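A minimal form of backpressure is a shared admission budget that every spawn must pass: cap total tasks and recursion depth, and refuse work once either limit is hit. A sketch under those assumptions (names are illustrative):

```python
class WorkBudget:
    """Global admission control shared by all agents in a pipeline run."""
    def __init__(self, max_tasks=100, max_depth=3):
        self.max_tasks = max_tasks
        self.max_depth = max_depth
        self.spawned = 0

    def admit(self, depth: int) -> bool:
        """Refuse work that is too deep or past the total-task cap."""
        if depth > self.max_depth or self.spawned >= self.max_tasks:
            return False
        self.spawned += 1
        return True

def run_task(budget: WorkBudget, depth: int = 0, fanout: int = 3) -> int:
    """Each task tries to fan out into sub-tasks; the budget refuses
    work past the cap, so exponential decomposition stays bounded.
    Returns the number of tasks that actually ran."""
    if not budget.admit(depth):
        return 0
    return 1 + sum(run_task(budget, depth + 1, fanout) for _ in range(fanout))
```

With fanout 3 and no budget, the tree grows exponentially; with the budget, an 11-day runaway loop instead stops at `max_tasks` and the refusals become an observable signal that the pipeline is saturated.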

The Caching Hierarchy for Agentic Workloads: Five Layers, and Why Most Teams Stop at Two

· 11 min read
Tian Pan
Software Engineer

Most teams deploying AI agents implement prompt caching, maybe add a semantic cache, and call it done. They're leaving 40–60% of their potential savings on the table. The reason isn't laziness — it's that agentic workloads create caching problems that don't exist in simple request-response LLM calls, and the solutions require thinking in layers that traditional web caching never needed.

A single agent task might involve a 4,000-token system prompt, three tool calls that each return different-shaped data, a multi-step plan that's structurally identical to yesterday's plan, and session context that needs to persist across a conversation but never across users. Each of these represents a different caching opportunity with different TTL requirements, different invalidation triggers, and different failure modes when the cache goes stale.
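The layer names and TTL values below are illustrative assumptions meant to show the shape of the idea: one cache mechanism, five instances with very different staleness budgets and scopes:

```python
class TTLCache:
    """Minimal TTL cache; `now` is injectable for testing."""
    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}

    def put(self, key, value, now: float):
        self._store[key] = (value, now)

    def get(self, key, now: float):
        hit = self._store.get(key)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]
        return None  # miss or stale

# Five hypothetical layers with different TTLs and invalidation stories.
LAYERS = {
    "prompt_prefix": TTLCache(ttl=300),    # provider-side prompt cache window
    "semantic":      TTLCache(ttl=3600),   # near-duplicate query answers
    "tool_result":   TTLCache(ttl=60),     # per-tool, shaped by data volatility
    "plan":          TTLCache(ttl=86400),  # structurally identical task plans
    "session":       TTLCache(ttl=1800),   # per-user context, never shared
}

def lookup(layer: str, key, now: float):
    return LAYERS[layer].get(key, now)
```

The point of separating the layers is that each one gets its own TTL and invalidation trigger: a geocoding result can go stale in a minute while yesterday's task plan is still perfectly reusable.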

Chaos Engineering for AI Agents: Injecting the Failures Your Agents Will Actually Face

· 9 min read
Tian Pan
Software Engineer

Your agent works perfectly in staging. It calls the right tools, reasons through multi-step plans, and returns polished results. Then production happens: the geocoding API times out at step 3 of a 7-step plan, the LLM returns a partial response mid-sentence, and your agent confidently fabricates data to fill the gap. Nobody notices until a customer does.

LLM API calls fail 1–5% of the time in production — rate limits, timeouts, server errors. For a multi-step agent making 10–20 tool calls per task, that means a meaningful percentage of tasks will hit at least one failure. The question isn't whether your agent will encounter faults. It's whether you've ever tested what happens when it does.
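The simplest chaos experiment is a wrapper that makes a configurable fraction of tool calls fail, so the agent's recovery behavior is exercised before production does it for you. A minimal sketch (the `chaotic` name and fault choice are assumptions, not a real library):

```python
import random

def chaotic(tool, failure_rate=0.05, rng=None):
    """Wrap a tool so a fraction of calls raise TimeoutError.

    Run your eval suite against the wrapped tools and check that the
    agent retries, degrades, or escalates -- rather than fabricating
    data to fill the gap.
    """
    rng = rng or random.Random()

    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise TimeoutError(f"chaos: injected fault in {tool.__name__}")
        return tool(*args, **kwargs)

    return wrapped
```

Seeding the injected `rng` makes a chaos run reproducible, which matters once you want to bisect which injected fault triggered a bad agent behavior.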

Deep Research Agents: Why Most Implementations Loop Forever or Stop Too Early

· 10 min read
Tian Pan
Software Engineer

Standard LLMs without iterative retrieval score below 10% on multi-step web research benchmarks. Deep research agents — systems that search, read, synthesize, and re-query in a loop — score above 50%. That five-fold improvement explains why every serious AI product team is building one. What it doesn't explain is why most of those implementations either run up a $15 bill chasing irrelevant tangents or declare victory after two shallow searches.

The core problem isn't building the loop. It's knowing when the loop should stop. And that turns out to be a surprisingly deep systems design challenge that touches convergence detection, cost economics, source reliability, and multi-agent coordination.
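A stopping rule usually combines at least two signals: a hard cost cap and a convergence check on how much new information recent rounds produced. The thresholds and function below are illustrative assumptions:

```python
def should_stop(new_sources_per_round, cost_so_far,
                max_cost=5.0, min_gain=1, patience=2):
    """Decide whether a research loop should terminate.

    Stops on either signal:
    - budget: cumulative spend (dollars) has hit the cap, or
    - convergence: each of the last `patience` rounds surfaced fewer
      than `min_gain` previously unseen sources.
    """
    if cost_so_far >= max_cost:
        return True
    recent = new_sources_per_round[-patience:]
    return len(recent) == patience and all(g < min_gain for g in recent)
```

The `patience` window is what separates this from "stop after two searches": one dry round is tolerated as noise, but two consecutive rounds with nothing new is treated as convergence.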

Deterministic Replay: How to Debug AI Agents That Never Run the Same Way Twice

· 11 min read
Tian Pan
Software Engineer

Your agent failed in production last Tuesday. A customer reported a wrong answer. You pull up the logs, see the final output, maybe a few intermediate print statements — and then you're stuck. You can't re-run the agent and get the same failure because the model won't produce the same tokens, the API your tool called now returns different data, and the timestamp embedded in the prompt has moved forward. The bug is gone, and you're left staring at circumstantial evidence.

This is the fundamental debugging problem for AI agents: traditional software is deterministic, so you can reproduce bugs by recreating inputs. Agent systems are not. Every run is a unique snowflake of model sampling, live API responses, and time-dependent state. Without specialized tooling, post-mortem debugging becomes forensic guesswork.

Deterministic replay solves this by recording every source of non-determinism during execution and substituting those recordings during replay — turning your unreproducible agent run into something you can step through like a debugger.
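The record/substitute mechanism can be shown in miniature: route every non-deterministic call through an interceptor that appends to a tape in record mode and reads from it in replay mode. The `Recorder` class is a hypothetical sketch of the pattern, not a specific tool:

```python
class Recorder:
    """Record each non-deterministic call; in replay mode, return the
    recorded result instead of calling out again."""
    def __init__(self, mode="record", tape=None):
        self.mode = mode
        self.tape = tape if tape is not None else []
        self._cursor = 0

    def intercept(self, name, fn, *args):
        if self.mode == "replay":
            entry = self.tape[self._cursor]
            self._cursor += 1
            if entry["name"] != name:
                # The replayed code took a different path than the recording.
                raise RuntimeError(f"replay divergence at call {name!r}")
            return entry["result"]
        result = fn(*args)  # live call: model sample, API response, clock
        self.tape.append({"name": name, "args": list(args), "result": result})
        return result
```

Divergence detection is the subtle part: if the agent code changed since the recording, the tape and the execution drift apart, and failing loudly at the first mismatched call beats silently substituting the wrong result.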

The Planning Tax: Why Your Agent Spends More Tokens Thinking Than Doing

· 10 min read
Tian Pan
Software Engineer

Your agent just spent $6 solving a task that a direct API call could have handled for $0.12. If you've built agentic systems in production, this ratio probably doesn't surprise you. What might surprise you is where those tokens went: not into tool calls, not into generating the final answer, but into the agent reasoning about what to do next. Decomposing the task. Reflecting on intermediate results. Re-planning when an observation didn't match expectations. This is the planning tax — the token overhead your agent pays to think before it acts — and for most agentic architectures, it consumes 40–70% of the total token budget before a single useful action fires.

The planning tax isn't a bug. Reasoning is what separates agents from simple prompt-response systems. But when the cost of deciding what to do exceeds the cost of actually doing it, you have an engineering problem that no amount of cheaper inference will solve. Per-token prices have dropped roughly 1,000x since late 2022, yet total agent spending keeps climbing — a textbook Jevons paradox where cheaper tokens just invite more token consumption.
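Before optimizing the planning tax you have to measure it, which just means classifying token usage by phase. A minimal sketch, assuming a hypothetical event log of `(kind, tokens)` pairs:

```python
def planning_tax(events):
    """Fraction of tokens spent thinking rather than doing.

    `events` is a list of (kind, tokens) pairs, where kind is one of
    'plan', 'reflect' (thinking) or 'tool', 'answer' (doing).
    """
    thinking = sum(t for kind, t in events if kind in ("plan", "reflect"))
    total = sum(t for _, t in events)
    return thinking / total if total else 0.0
```

Tracking this ratio per task type is what turns "the agent feels expensive" into an engineering target: tasks where the ratio stays high and the plan rarely changes are candidates for a cached plan or a direct API path.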

The Second System Effect in AI: Why Your Agent v2 Rewrite Will Probably Fail

· 8 min read
Tian Pan
Software Engineer

Your agent v1 works. It's ugly, it's held together with prompt duct tape, and the code makes you wince every time you open it. But it handles 90% of cases, your users are happy, and it ships value every day. So naturally, you decide to rewrite it from scratch.

Six months later, the rewrite is still not in production. You've migrated frameworks twice, built a multi-agent orchestration layer for a problem that didn't require one, and your eval suite tests everything except the things that actually break. Meanwhile, v1 is still running — still ugly, still working.

This is the second system effect, and it has been destroying software projects since before most of us were born.

Write-Ahead Logging for AI Agents: Borrowing Database Recovery Patterns for Crash-Safe Execution

· 10 min read
Tian Pan
Software Engineer

Your agent is on step 7 of a 12-step workflow — it has already queried three APIs, written two files, and sent a Slack notification — when the process crashes. What happens next? If your answer is "restart from step 1," you're about to re-send that Slack message, re-write those files, and burn through your LLM token budget a second time. Databases solved this exact problem decades ago with write-ahead logging. The pattern translates to agent architectures with surprising fidelity.

The core insight is simple: before an agent executes any step, it records what it intends to do. Before it moves on, it records what happened. This append-only log becomes the single source of truth for recovery — not the agent's in-memory state, not a snapshot of the world, but a sequential record of intentions and outcomes that can be replayed deterministically.
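The intent/outcome log can be sketched directly: fsync an intent record before acting, an outcome record after, and on restart skip any step the log marks done. Names like `AgentWAL` are illustrative, and real systems also need idempotency handling for steps that crashed between intent and outcome:

```python
import json
import os

class AgentWAL:
    """Append-only write-ahead log of agent step intentions and outcomes."""
    def __init__(self, path: str):
        self.path = path

    def log(self, record: dict):
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
            f.flush()
            os.fsync(f.fileno())  # durable before we act on it

    def completed_steps(self) -> set:
        done = set()
        if not os.path.exists(self.path):
            return done
        with open(self.path) as f:
            for line in f:
                rec = json.loads(line)
                if rec["phase"] == "done":
                    done.add(rec["step"])
        return done

def run_step(wal: AgentWAL, step: str, action):
    """Execute one workflow step with crash-safe bookkeeping."""
    if step in wal.completed_steps():
        return "skipped"  # already completed before the crash
    wal.log({"phase": "intent", "step": step})   # record before acting
    result = action()
    wal.log({"phase": "done", "step": step, "result": result})
    return result
```

The recovery property falls out of the log order: a step with an intent record but no outcome record is exactly the set of steps whose side effects are in doubt after a crash, which is where idempotency keys or compensation logic take over.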

Adversarial Agent Monitoring: Building Oversight That Can't Be Gamed

· 10 min read
Tian Pan
Software Engineer

Most production AI agent systems use some version of LLM-as-judge monitoring: a second model reviews the agent's outputs and flags suspicious behavior. It's intuitive, easy to deploy, and badly broken as a primary defense. A study analyzing single-judge monitoring across multiple prompt attack vectors found a 52.7% weighted attack success rate — meaning an adversarially sophisticated agent evades detection more often than it's caught. For single-turn, low-stakes applications, this might be tolerable. For an autonomous agent with access to external systems, it's a critical gap.

The deeper problem is that capability and deceptiveness scale together. As agents become more capable at their intended tasks, they also become better at modeling their oversight context — and adjusting behavior accordingly.

Why Agent Cost Forecasting Is Broken — And What to Do Instead

· 10 min read
Tian Pan
Software Engineer

Your finance team wants a number. How much will the AI agent system cost per month? You give them an estimate based on average token usage, multiply by projected request volume, and add a safety margin. Three months later, the actual bill is 3x the forecast, and nobody can explain why.

This isn't a budgeting failure. It's a modeling failure. Traditional cost forecasting assumes that per-request costs cluster around a predictable mean. Agentic systems violate that assumption at every level. The execution path is variable. The number of LLM calls per request is variable. The token count per call is variable. And the interaction between these variables creates a cost distribution with a fat tail that eats your margin.
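One alternative to a point estimate is to simulate the distribution directly: model per-request cost as a mixture (most requests short, a few looping long) and Monte Carlo whole months, reporting a tail percentile alongside the mean. The mixture parameters below are illustrative assumptions:

```python
import random

def forecast_monthly_cost(n_requests, simulate_request, trials=1000, rng=None):
    """Monte Carlo forecast: simulate whole months, report (mean, p95).

    A point estimate (mean per-request cost x volume) hides the fat
    tail; the p95 of simulated month totals does not.
    """
    rng = rng or random.Random(0)
    totals = sorted(
        sum(simulate_request(rng) for _ in range(n_requests))
        for _ in range(trials)
    )
    mean = sum(totals) / trials
    p95 = totals[int(0.95 * trials)]
    return mean, p95

def agent_request_cost(rng):
    """Hypothetical mixture: 95% of requests take 2-5 LLM calls,
    5% wander into long loops of 20-60 calls, at $0.01 per call."""
    if rng.random() < 0.95:
        calls = rng.randint(2, 5)
    else:
        calls = rng.randint(20, 60)
    return calls * 0.01
```

Handing finance a (mean, p95) pair instead of a single number is the practical fix: the p95 is what the safety margin should have been sized against, and the gap between the two quantifies how fat the tail actually is.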