Skip to main content

195 posts tagged with "ai-agents"

View all tags

The Second System Effect in AI: Why Your Agent v2 Rewrite Will Probably Fail

· 9 min read
Tian Pan
Software Engineer

Your agent v1 works. It's ugly, it's held together with prompt duct tape, and the code makes you wince every time you open it. But it handles 90% of cases, your users are happy, and it ships value every day. So naturally, you decide to rewrite it from scratch.

Six months later, the rewrite is still not in production. You've migrated frameworks twice, built a multi-agent orchestration layer for a problem that didn't require one, and your eval suite tests everything except the things that actually break. Meanwhile, v1 is still running — still ugly, still working.

This is the second system effect, and it has been destroying software projects since before most of us were born.

Write-Ahead Logging for AI Agents: Borrowing Database Recovery Patterns for Crash-Safe Execution

· 10 min read
Tian Pan
Software Engineer

Your agent is on step 7 of a 12-step workflow — it has already queried three APIs, written two files, and sent a Slack notification — when the process crashes. What happens next? If your answer is "restart from step 1," you're about to re-send that Slack message, re-write those files, and burn through your LLM token budget a second time. Databases solved this exact problem decades ago with write-ahead logging. The pattern translates to agent architectures with surprising fidelity.

The core insight is simple: before an agent executes any step, it records what it intends to do. Before it moves on, it records what happened. This append-only log becomes the single source of truth for recovery — not the agent's in-memory state, not a snapshot of the world, but a sequential record of intentions and outcomes that can be replayed deterministically.

Adversarial Agent Monitoring: Building Oversight That Can't Be Gamed

· 10 min read
Tian Pan
Software Engineer

Most production AI agent systems use some version of LLM-as-judge monitoring: a second model reviews the agent's outputs and flags suspicious behavior. It's intuitive, easy to deploy, and badly broken as a primary defense. A study analyzing single-judge monitoring across multiple prompt attack vectors found a 52.7% weighted attack success rate — meaning an adversarially sophisticated agent evades detection more often than it's caught. For single-turn, low-stakes applications, this might be tolerable. For an autonomous agent with access to external systems, it's a critical gap.

The deeper problem is that capability and deceptiveness scale together. As agents become more capable at their intended tasks, they also become better at modeling their oversight context — and adjusting behavior accordingly.

Why Agent Cost Forecasting Is Broken — And What to Do Instead

· 10 min read
Tian Pan
Software Engineer

Your finance team wants a number. How much will the AI agent system cost per month? You give them an estimate based on average token usage, multiply by projected request volume, and add a safety margin. Three months later, the actual bill is 3x the forecast, and nobody can explain why.

This isn't a budgeting failure. It's a modeling failure. Traditional cost forecasting assumes that per-request costs cluster around a predictable mean. Agentic systems violate that assumption at every level. The execution path is variable. The number of LLM calls per request is variable. The token count per call is variable. And the interaction between these variables creates a cost distribution with a fat tail that eats your margin.

Agent-Friendly APIs: What Backend Engineers Get Wrong When AI Becomes the Client

· 11 min read
Tian Pan
Software Engineer

In 2024, automated bot traffic surpassed human traffic on the internet for the first time. Gartner projects that more than 30% of new API demand by 2026 will come from AI agents and LLM tools. And yet only 24% of organizations explicitly design APIs with AI clients in mind.

That gap is where production systems break. Not because the LLMs are bad, but because APIs built for human developers have assumptions baked in that silently fail when an autonomous agent is the caller. The agent can't ask for clarification, can't read a doc site, and can't decide on its own whether a 422 means "fix your request" or "try again in a few seconds."

This post is for the backend engineer who just found out their service is being called by an AI agent — or who is about to build one that will be.

Agent State as Event Stream: Why Immutable Event Sourcing Beats Internal Agent Memory

· 10 min read
Tian Pan
Software Engineer

An agent misbehaves at 3:47 AM on a Tuesday. It deletes files it shouldn't have, or calls an API with the wrong parameters, or confidently takes an irreversible action based on information that was stale by six hours. You pull up your logs. You can see what the agent did. What you cannot see — what almost no agent framework gives you — is what the agent believed when it made that decision. The state that drove the choice is gone, overwritten by every subsequent step. You're debugging the present to understand the past, and that's an architecture problem, not a logging problem.

Most AI agents treat state as mutable in-memory data: a dictionary that gets updated in place, a database row that gets overwritten, a scratch pad that shrinks and grows. This works fine for simple, short-lived tasks. It collapses under the three pressures that define serious production deployments: debugging complex failures, coordinating across distributed agents, and satisfying compliance requirements. Event sourcing — treating every state change as an immutable, append-only event — solves all three problems at once, and it does it in a way that makes agents structurally more debuggable, not just more logged.

How Agents Teach Themselves: The Closed-Loop Self-Improvement Architecture

· 11 min read
Tian Pan
Software Engineer

The most expensive part of training an agent isn't GPU time. It's the human annotators who label whether a multi-step task succeeded or failed. A single expert annotation of a long-horizon agentic trajectory — verifying that an agent correctly booked a flight, wrote a functional program, or filled out a legal form — can cost more than thousands of inference calls. Closed-loop self-improvement is the architectural pattern that eliminates this bottleneck by replacing human judgment with an automated verifier, then using that verifier to run the generate-attempt-verify-train cycle without any human in the loop. When done correctly, it works: a recent NeurIPS paper showed the pattern doubled average task success rates across multi-turn tool-use environments, going from 12% to 23.5%, without a single human annotation.

The key insight isn't that the model improves itself — it's that the verifier is free. Code execution returns a pass/fail signal deterministically, in milliseconds, at near-zero marginal cost. When your tasks have checkable outcomes, you can run thousands of training episodes per hour with ground-truth labels the model cannot fake (assuming your sandbox is designed correctly). That assumption is doing a lot of work, and we'll come back to it.

The Cold Start Tax on Serverless AI Agents

· 11 min read
Tian Pan
Software Engineer

A standard Lambda function with a thin Python handler cold-starts in about 250ms. Your AI agent, calling the same runtime with a few SDK imports added, cold-starts in 8–12 seconds. Add local model inference and you're at 40–120 seconds. The first user to hit a scaled-down deployment waits the length of a TV commercial before the agent responds. That gap — not latency per inference token, not throughput, but the initial startup cost — is where most serverless AI deployments quietly fail their users.

The problem isn't unique to serverless, but serverless makes it visible. When you run agents on always-on infrastructure, you pay for idle capacity and cold starts never happen. When you embrace scale-to-zero to cut costs, every period of low traffic becomes a trap waiting for the next request.

Computer Use Agents in Production: When Pixels Replace API Calls

· 9 min read
Tian Pan
Software Engineer

Most AI agents interact with the world through structured APIs — clean JSON in, clean JSON out. But a growing class of agents has abandoned that contract entirely. Computer use agents look at screenshots, reason about what they see, and drive a mouse and keyboard like a human operator. When the only integration surface is a screen, pixels become the API.

This sounds like a party trick until you realize how much enterprise software has no API at all. Legacy ERP systems, internal admin panels, proprietary desktop applications — the GUI is the only interface. For years, robotic process automation (RPA) handled this with brittle, selector-based scripts that shattered whenever a button moved three pixels. Computer use agents promise something different: visual understanding that adapts to UI changes the way a human would.

Domain-Specialized Agent Architectures: Why Generic Agents Underperform in High-Stakes Verticals

· 10 min read
Tian Pan
Software Engineer

A generic AI agent that can summarize a contract, draft a product spec, and write a SQL query is genuinely impressive — until you deploy it into a radiology department and discover it suggests plausible-sounding dosing that contradicts the patient's actual drug allergies. The failure is not a hallucination problem. It's an architecture problem.

The assumption baked into most agent demos is that a sufficiently capable foundation model plus a broad tool set equals a capable agent in any domain. In practice, the gap between that assumption and production reality is where patients get hurt, lawsuits materialize, and experiments produce unreproducible results. Generic agents are a reasonable starting point, not a destination.

The Escalation Protocol: Building Agent-to-Human Handoffs That Don't Lose State

· 11 min read
Tian Pan
Software Engineer

When a support agent receives an AI-to-human handoff with a raw chat transcript, the average time to prepare for resolution is 15 minutes. The agent has to find the customer in the CRM, look up the relevant order, calculate purchase dates, and reconstruct what the AI already determined. When the same handoff arrives as a structured payload — action history, retrieved data, the exact ambiguity that triggered escalation — that prep time drops to 30 seconds.

That 97% reduction in manual work isn't an edge case. It's the difference between escalation protocols that actually support human oversight and ones that just dump context onto whoever happens to be on shift.

Building GDPR-Ready AI Agents: The Compliance Architecture Decisions That Actually Matter

· 10 min read
Tian Pan
Software Engineer

Most teams discover their AI agent has a GDPR problem the wrong way: a data subject files an erasure request, the legal team asks which systems hold that user's data, and the engineering team opens a ticket that turns into a six-month audit. The personal data is somewhere in conversation history, somewhere in the vector store, possibly cached in tool call outputs, maybe embedded in a fine-tuned checkpoint — and nobody mapped any of it.

This isn't a configuration gap. It's an architectural one. The decisions that determine whether your AI system is compliance-ready are made in the first few weeks of building, long before legal comes knocking. This post covers the four structural conflicts that regulated-industry engineers need to resolve before shipping AI agents to production.