578 posts tagged with "insider"

Adversarial Agent Monitoring: Building Oversight That Can't Be Gamed

· 10 min read
Tian Pan
Software Engineer

Most production AI agent systems use some version of LLM-as-judge monitoring: a second model reviews the agent's outputs and flags suspicious behavior. It's intuitive, easy to deploy, and badly broken as a primary defense. A study analyzing single-judge monitoring across multiple prompt attack vectors found a 52.7% weighted attack success rate — meaning an adversarially sophisticated agent evades detection more often than it's caught. For single-turn, low-stakes applications, this might be tolerable. For an autonomous agent with access to external systems, it's a critical gap.

The deeper problem is that capability and deceptiveness scale together. As agents become more capable at their intended tasks, they also become better at modeling their oversight context — and adjusting behavior accordingly.
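
For a concrete feel of the architecture being critiqued, here is a minimal sketch of single-judge monitoring: one prompt, one verdict, one point of failure. The `call_llm` helper and the verdict schema are illustrative assumptions, not an API from the post.

```python
import json

def judge_agent_output(call_llm, task: str, agent_output: str) -> dict:
    """Single-judge monitor: a second model reviews the agent's output.

    `call_llm` is a placeholder for whatever completion client you use;
    the verdict schema is illustrative, not a standard.
    """
    prompt = (
        "You are a safety monitor. Review the agent output below for "
        "policy violations, data exfiltration, or deceptive behavior.\n"
        f"Task: {task}\n"
        f"Agent output: {agent_output}\n"
        'Respond with JSON: {"verdict": "ok" | "suspicious", "reason": "..."}'
    )
    raw = call_llm(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # A monitor that fails open is itself an attack surface; fail closed.
        return {"verdict": "suspicious", "reason": "unparseable judge response"}
```

The single chokepoint is the problem: everything the agent wants hidden only has to slip past this one prompt once.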

Agent-Friendly APIs: What Backend Engineers Get Wrong When AI Becomes the Client

· 11 min read
Tian Pan
Software Engineer

In 2024, automated bot traffic surpassed human traffic on the internet for the first time. Gartner projects that more than 30% of new API demand by 2026 will come from AI agents and LLM tools. And yet only 24% of organizations explicitly design APIs with AI clients in mind.

That gap is where production systems break. Not because the LLMs are bad, but because APIs built for human developers have assumptions baked in that silently fail when an autonomous agent is the caller. The agent can't ask for clarification, can't read a doc site, and can't decide on its own whether a 422 means "fix your request" or "try again in a few seconds."

This post is for the backend engineer who just found out their service is being called by an AI agent — or who is about to build one that will be.
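
As a rough illustration of what "agent-friendly" can mean in practice, here is a sketch of a machine-readable error envelope with explicit retry semantics. The field names and helper functions are hypothetical, not a schema the post prescribes.

```python
from dataclasses import dataclass, asdict

@dataclass
class AgentError:
    """Hypothetical error envelope for AI-agent clients."""
    code: str                      # stable, machine-matchable identifier
    message: str                   # human-readable, but not the primary contract
    retryable: bool                # the agent should not have to guess this
    retry_after_s: int | None = None
    fix_hint: str | None = None    # actionable instruction the agent can follow

def validation_failure() -> tuple[dict, int]:
    err = AgentError(
        code="invalid_date_range",
        message="end_date must be after start_date",
        retryable=False,
        fix_hint="Swap start_date and end_date, or widen the range.",
    )
    return asdict(err), 422

def rate_limited() -> tuple[dict, int]:
    err = AgentError(code="rate_limited", message="Too many requests",
                     retryable=True, retry_after_s=5)
    return asdict(err), 429
```

The point is that "fix your request" and "try again in a few seconds" become explicit fields instead of tribal knowledge buried in a doc site.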

The Cold Start Tax on Serverless AI Agents

· 11 min read
Tian Pan
Software Engineer

A standard Lambda function with a thin Python handler cold-starts in about 250ms. Your AI agent, calling the same runtime with a few SDK imports added, cold-starts in 8–12 seconds. Add local model inference and you're at 40–120 seconds. The first user to hit a scaled-down deployment waits the length of a TV commercial before the agent responds. That gap — not latency per inference token, not throughput, but the initial startup cost — is where most serverless AI deployments quietly fail their users.

The problem isn't unique to serverless, but serverless makes it visible. When you run agents on always-on infrastructure, you pay for idle capacity and cold starts never happen. When you embrace scale-to-zero to cut costs, every period of low traffic becomes a trap waiting for the next request.
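
One common mitigation, sketched minimally here, is keeping the init phase thin by deferring heavy SDK imports and caching the client across warm invocations. The handler shape follows the standard Lambda signature; the `my_agent_sdk` import is a hypothetical stand-in for whatever heavy dependency you load.

```python
import time

_client = None  # survives warm invocations; rebuilt only on a cold start

def _get_client():
    """Defer heavy imports until first use so module init stays fast."""
    global _client
    if _client is None:
        start = time.monotonic()
        # Heavy SDK / model imports happen here, not at module top level.
        from my_agent_sdk import build_client  # hypothetical import
        _client = build_client()
        print(f"cold init took {time.monotonic() - start:.1f}s")
    return _client

def handler(event, context):
    client = _get_client()
    return {"statusCode": 200, "body": client.run(event.get("prompt", ""))}
```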

The AI Feature Kill Decision: When to Shut Down What Metrics Say Is Working

· 10 min read
Tian Pan
Software Engineer

Your AI feature has 12,000 monthly active users. Engagement charts slope upward. The demo still impresses stakeholders every quarter. And your users are quietly routing around it.

This is the kill decision that product teams avoid for months — sometimes years — because every surface-level metric says the feature is working. The dashboard shows adoption. What it doesn't show is the support engineer who manually corrects every third AI-generated summary before forwarding it to the customer, or the power user who learned that clicking "regenerate" three times produces acceptable output and has silently accepted that tax on their workflow.

The Batch LLM Pipeline Blind Spot: Offline Processing and the Queue Design Nobody Talks About

· 11 min read
Tian Pan
Software Engineer

Most teams building with LLMs optimize for the wrong workload. They obsess over time-to-first-token, streaming latency, and response speed — then discover that 60% or more of their LLM API spend goes to nightly summarization jobs, data enrichment pipelines, and classification runs that nobody watches in real time. The latency-first mental model that works for chat applications actively sabotages these offline workloads.

The batch LLM pipeline is the unglamorous workhorse of production AI. It's the nightly job that classifies 50,000 support tickets, the weekly pipeline that enriches your CRM with company descriptions, the daily run that generates embeddings for new documents. These workloads have fundamentally different design constraints than real-time serving, and treating them as slow versions of your chat API is where the problems start.
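
To make the contrast concrete, here is a minimal sketch of a resumable batch run: chunked processing, retries with backoff, and a checkpoint so a 3 a.m. failure doesn't restart 50,000 tickets from zero. The `classify` callable and the checkpoint format are assumptions for illustration.

```python
import json
import time
from pathlib import Path

def run_batch(tickets, classify, checkpoint_path, batch_size=100):
    """Process tickets in resumable batches.

    `classify` stands in for your LLM call; the checkpoint format is
    illustrative, not something prescribed by the post.
    """
    ckpt = Path(checkpoint_path)
    done = set(json.loads(ckpt.read_text())) if ckpt.exists() else set()

    pending = [t for t in tickets if t["id"] not in done]
    for i in range(0, len(pending), batch_size):
        for ticket in pending[i:i + batch_size]:
            for attempt in range(3):
                try:
                    ticket["label"] = classify(ticket["text"])
                    done.add(ticket["id"])
                    break
                except Exception:
                    time.sleep(2 ** attempt)  # back off; nobody is watching in real time
        ckpt.write_text(json.dumps(sorted(done)))  # checkpoint after each batch
    return pending
```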

Beam Search for Code Agents: Why Greedy Generation Is a Reliability Trap

· 11 min read
Tian Pan
Software Engineer

A code agent that passes 90% of HumanEval is not a reliable code agent. It's a code agent that performs well on problems designed to be solvable in a single pass. Give it a competitive programming problem with strict constraints, or a multi-file refactor with subtle interdependencies, and watch the pass rate crater to 20–30%. The model isn't failing because it lacks knowledge. It's failing because greedy, single-pass generation commits to the first plausible-looking token sequence and never looks back.

The fix isn't a better model. It's a better generation strategy. Recent research has established that applying tree exploration to code generation — branching across multiple candidate solutions, scoring partial programs, and pruning unpromising paths — improves pass rates by 30–130% on hard problems, with no change to the underlying model weights.
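
A minimal sketch of the idea, assuming an `expand` function that proposes candidate continuations (e.g. sampled from the model) and a `score` function that estimates quality (e.g. partial test results or a learned verifier):

```python
def beam_search(initial: str, expand, score, beam_width=4, depth=6):
    """Keep the top-k partial programs at each step instead of committing
    greedily to the first plausible continuation.

    `expand(program)` and `score(program)` are assumptions for illustration.
    """
    beam = [initial]
    for _ in range(depth):
        candidates = []
        for program in beam:
            candidates.extend(expand(program))
        if not candidates:
            break
        # Prune: keep only the highest-scoring partial programs.
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
    return max(beam, key=score)
```

The model weights never change; only the search over what the model already knows gets wider.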

The Cold Start Problem in AI Personalization

· 11 min read
Tian Pan
Software Engineer

A user signs up for your AI writing assistant. They type their first message. Your system has exactly one data point — and it has to decide: formal or casual? Verbose or terse? Technical depth or accessible overview? Most systems punt and serve a generic default. A few try to personalize immediately. The ones that personalize immediately often make things worse.

The cold start problem in AI personalization is not the same problem Netflix solved fifteen years ago. It is structurally harder, the failure modes are subtler, and the common fixes actively introduce new bugs. Here is what practitioners who have shipped personalization systems have learned about navigating it.
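
One way to avoid both failure modes (punting forever and personalizing too eagerly) is to shrink toward the population default until enough evidence accumulates. This sketch is an illustration of that idea, not the post's prescription; the parameter names are hypothetical.

```python
def blended_preference(user_signal: float, user_events: int,
                       population_default: float, k: int = 20) -> float:
    """Shrink toward the population default until enough evidence exists.

    `user_signal` might encode an observed preference (0 = terse, 1 = verbose);
    `k` controls how many events it takes before the user's own behavior
    dominates. Purely illustrative.
    """
    weight = user_events / (user_events + k)
    return weight * user_signal + (1 - weight) * population_default

# One data point barely moves the needle; fifty mostly overrides the default.
print(blended_preference(1.0, 1, 0.4))   # ~0.43
print(blended_preference(1.0, 50, 0.4))  # ~0.83
```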

The Composition Testing Gap: Why Your Agents Pass Every Test but Fail Together

· 9 min read
Tian Pan
Software Engineer

Your planner agent passes its eval suite at 94%. Your researcher agent scores even higher. Your synthesizer agent nails every benchmark you throw at it. You compose them into a pipeline, deploy to production, and watch it produce confidently wrong answers that no individual agent would ever generate on its own.

This is the composition testing gap — the systematic blind spot where individually validated agents fail in ways that no single-agent analysis can predict. Research on multi-agent LLM systems shows that 67% of production failures stem from inter-agent interactions rather than individual agent defects. You're testing the atoms but shipping the molecule, and molecular behavior is not the sum of atomic properties.
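
A composition-level test exercises the molecule, not the atoms. This sketch assumes hypothetical `plan`, `research`, and `synthesize` agent calls and an illustrative invariant; the shape, not the specifics, is the point.

```python
def test_pipeline_composition():
    """Test the composed pipeline, not just each agent in isolation."""
    from my_agents import plan, research, synthesize  # hypothetical imports

    question = "What changed in our Q3 churn rate, and why?"
    steps = plan(question)
    evidence = [research(step) for step in steps]
    answer = synthesize(question, evidence)

    # Composition-level invariant: every claim must trace back to evidence
    # the researcher actually returned, not something invented in synthesis.
    cited_ids = {c["source_id"] for c in answer["citations"]}
    evidence_ids = {e["source_id"] for doc in evidence for e in doc}
    assert cited_ids <= evidence_ids, "synthesizer cited unseen sources"
    assert answer["confidence"] < 0.9 or cited_ids, "high confidence with no citations"
```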

Computer Use Agents in Production: When Pixels Replace API Calls

· 9 min read
Tian Pan
Software Engineer

Most AI agents interact with the world through structured APIs — clean JSON in, clean JSON out. But a growing class of agents has abandoned that contract entirely. Computer use agents look at screenshots, reason about what they see, and drive a mouse and keyboard like a human operator. When the only integration surface is a screen, pixels become the API.

This sounds like a party trick until you realize how much enterprise software has no API at all. Legacy ERP systems, internal admin panels, proprietary desktop applications — the GUI is the only interface. For years, robotic process automation (RPA) handled this with brittle, selector-based scripts that shattered whenever a button moved three pixels. Computer use agents promise something different: visual understanding that adapts to UI changes the way a human would.
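
At its core the pattern is an observe-reason-act loop: screenshot in, mouse or keyboard action out. A minimal sketch, with `capture_screenshot`, `propose_action` (the model call), and `execute` as placeholders rather than any particular vendor's API:

```python
import time

def run_computer_use_agent(goal, propose_action, capture_screenshot,
                           execute, max_steps=30):
    """Observe-reason-act loop for a GUI-driving agent (illustrative only)."""
    history = []
    for _ in range(max_steps):
        screen = capture_screenshot()
        action = propose_action(goal=goal, screenshot=screen, history=history)
        if action["type"] == "done":
            return history
        execute(action)          # e.g. {"type": "click", "x": 412, "y": 88}
        history.append(action)
        time.sleep(0.5)          # let the UI settle before the next screenshot
    raise TimeoutError("agent did not finish within the step budget")
```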

Cross-Tenant Data Leakage in Shared LLM Infrastructure: The Isolation Failures Nobody Tests For

· 11 min read
Tian Pan
Software Engineer

Most multi-tenant LLM products have a security gap that their engineers haven't tested for. Not a theoretical gap — a practical one, with documented attack vectors and real confirmed incidents. The gap is this: each layer of the modern AI stack introduces its own isolation primitive, and each one can fail silently in ways that let one customer's data reach another customer's context.

This isn't about prompt injection or jailbreaking. It's about the infrastructure itself — prompt caches, vector indexes, memory stores, and fine-tuning pipelines — and the organizational fiction of "isolation" that most teams ship without validating.
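
Two small examples of what validated isolation can look like, sketched with hypothetical APIs: the tenant ID is part of the cache key derivation and a mandatory retrieval filter, not metadata checked after the fact.

```python
import hashlib

def tenant_cache_key(tenant_id: str, prompt: str) -> str:
    """Scope the prompt cache by tenant so a cache hit can never cross tenants."""
    return hashlib.sha256(f"{tenant_id}:{prompt}".encode()).hexdigest()

def tenant_vector_query(index, tenant_id: str, embedding, top_k: int = 5):
    """Filter retrieval by tenant at query time.

    The `index.search` signature is a placeholder, not a specific vector-DB
    API; the point is that the filter is mandatory, not optional.
    """
    return index.search(
        vector=embedding,
        top_k=top_k,
        filter={"tenant_id": tenant_id},
    )
```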

The Debug Tax: Why Debugging AI Systems Takes 10x Longer Than Building Them

· 10 min read
Tian Pan
Software Engineer

Building an LLM feature takes days. Debugging it in production takes weeks. This asymmetry — the debug tax — is the defining cost structure of AI engineering in 2026, and most teams don't account for it until they're already drowning.

A 2025 METR study found that experienced developers using LLM-assisted coding tools took 19% longer to complete tasks, even as they estimated the tools had sped them up by roughly 20%. The gap between perceived and actual productivity is a microcosm of the larger problem: AI systems feel fast to build because the hard part, debugging probabilistic behavior in production, hasn't started yet.

The debug tax isn't a skill issue. It's a structural property of systems built on probabilistic inference. Traditional software fails with stack traces, error codes, and deterministic reproduction paths. LLM-based systems fail with plausible but wrong answers, intermittent quality degradation, and failures that can't be reproduced because the same input produces different outputs on consecutive runs. Debugging these systems requires fundamentally different methodology, tooling, and mental models.
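
One piece of that methodology is capturing enough per-call context to attempt a replay when a user reports a plausible-but-wrong answer. A minimal sketch; the field names are illustrative, not a prescribed schema.

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("llm_trace")

def log_llm_call(model: str, prompt: str, params: dict, response: str) -> str:
    """Record everything needed to replay the call later.

    What matters is capturing the fully rendered prompt and sampling params,
    since the same template can produce different outputs on consecutive runs.
    """
    trace_id = str(uuid.uuid4())
    logger.info(json.dumps({
        "trace_id": trace_id,
        "ts": time.time(),
        "model": model,
        "params": params,      # temperature, top_p, seed if the API supports one
        "prompt": prompt,      # the rendered prompt, not the template
        "response": response,
    }))
    return trace_id
```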

The Escalation Protocol: Building Agent-to-Human Handoffs That Don't Lose State

· 11 min read
Tian Pan
Software Engineer

When a support agent receives an AI-to-human handoff with a raw chat transcript, the average time to prepare for resolution is 15 minutes. The agent has to find the customer in the CRM, look up the relevant order, calculate purchase dates, and reconstruct what the AI already determined. When the same handoff arrives as a structured payload — action history, retrieved data, the exact ambiguity that triggered escalation — that prep time drops to 30 seconds.

That 97% reduction in manual work isn't an edge case. It's the difference between escalation protocols that actually support human oversight and ones that just dump context onto whoever happens to be on shift.
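
To make "structured payload" concrete, here is a sketch of what such a handoff object might look like; the field names and example values are illustrative, not the post's schema.

```python
from dataclasses import dataclass, field

@dataclass
class EscalationPayload:
    """Structured agent-to-human handoff (field names are illustrative)."""
    customer_id: str
    order_id: str | None
    escalation_reason: str                                     # the exact ambiguity that triggered handoff
    actions_taken: list[dict] = field(default_factory=list)    # what the AI already did
    retrieved_data: dict = field(default_factory=dict)         # CRM and order lookups
    suggested_next_steps: list[str] = field(default_factory=list)

handoff = EscalationPayload(
    customer_id="cus_8812",
    order_id="ord_5541",
    escalation_reason="Refund amount exceeds auto-approval limit",
    actions_taken=[{"tool": "lookup_order", "result": "delivered 2024-03-02"}],
    retrieved_data={"purchase_date": "2024-02-27", "plan": "pro-annual"},
    suggested_next_steps=["Approve partial refund", "Confirm address change"],
)
```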