
173 posts tagged with "ai-agents"


Agent Credential Blast Radius: The Principal Class Your IAM Model Never Enumerated

· 11 min read
Tian Pan
Software Engineer

The security org spent a decade killing off the "service account that can do everything." Scoped tokens, short-lived credentials, JIT access, per-action audit — the whole least-privilege playbook landed and stuck. Then the AI team wired up an agent, the prompt asked for a tool catalog, and the engineer requested the broadest OAuth scope the platform would issue. The deprecated pattern is back, wearing new clothes, and this time the principal calling the API is a stochastic loop nobody is sure how to scope.

The agent has read-write on the calendar, the file store, the CRM, and the deploy pipeline because the API surface couldn't be enumerated up front. The token is long-lived because no one wired the refresh path. The audit log records the bearer, not the action. And IAM owns human and service identity, the platform team owns workload identity, the AI team owns the agent's effective permissions, and the union of those three sets is owned by no one.
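A minimal sketch of the alternative, assuming a hypothetical token broker (all names here are illustrative, not a real platform API): scope the credential to the tools the current task actually declares, and let it expire with the task.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical task-scoped credential: the scopes come from the task's
# declared tool list, not from "everything the platform will issue".
@dataclass(frozen=True)
class AgentCredential:
    principal: str          # the agent instance, not the engineer
    scopes: frozenset[str]  # e.g. {"calendar:read", "email:draft"}
    expires_at: datetime

# Mapping each tool to the narrowest scope it needs (illustrative values).
TOOL_SCOPES = {
    "read_calendar": "calendar:read",
    "draft_email": "email:draft",   # draft, not send
}

def mint_for_task(agent_id: str, tools_needed: list[str],
                  ttl: timedelta = timedelta(minutes=15)) -> AgentCredential:
    """Issue a credential covering only the scopes this task declared."""
    scopes = frozenset(TOOL_SCOPES[t] for t in tools_needed)  # KeyError = fail loudly
    return AgentCredential(
        principal=f"agent:{agent_id}",
        scopes=scopes,
        expires_at=datetime.now(timezone.utc) + ttl,
    )

cred = mint_for_task("support-triage-42", ["read_calendar", "draft_email"])
assert "deploy:write" not in cred.scopes   # the audit log now has a real principal
```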

The Agent Permission Prompt Has a Habituation Curve, and Your Safety Story Lives on Its Slope

· 10 min read
Tian Pan
Software Engineer

There is a number that should be on every agent product's safety dashboard, and almost nobody tracks it: the per-user approval rate over time. Ship a permission prompt for "may I send this email" or "may I run this query against production," and the curve goes the same way every time. Day one, users hesitate, read, sometimes click no. By week two, the prompt is the fifth one this hour, the cost of saying no is doing the work yourself, and the click-through rate converges to something north of 95%. The team's safety story still claims that the user approved every action. The user, in any meaningful cognitive sense, did not.

This is not a UX problem that better copy can fix. It is the same habituation phenomenon that flattened cookie banners, browser SSL warnings, and Windows UAC dialogs, applied to a substrate that operates orders of magnitude faster than any of those. A consent gate is a security control with a half-life. Ship it without measuring how fast it decays, and you ship a checkbox the user is trained to ignore by week two — and a compliance narrative that depends on a click that no longer means anything.
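The metric itself is cheap to compute. A sketch, assuming an event log of (user, shown_at, approved, dwell-seconds) tuples emitted by the consent gate: per-user approval rate and dwell time, bucketed by weeks since the user first saw a prompt.

```python
from collections import defaultdict
from datetime import datetime

# Assumed log schema: (user_id, shown_at, approved, seconds_before_click)
events = [
    ("u1", datetime(2025, 1, 6), False, 11.0),
    ("u1", datetime(2025, 1, 20), True, 0.8),
    # ... real data would come from the consent-gate event stream
]

def habituation_curve(events):
    """Approval rate and median-ish dwell per week since a user's first prompt."""
    first_seen = {}
    buckets = defaultdict(lambda: {"n": 0, "yes": 0, "dwell": []})
    for user, ts, approved, dwell in sorted(events, key=lambda e: e[1]):
        first_seen.setdefault(user, ts)
        week = (ts - first_seen[user]).days // 7
        b = buckets[week]
        b["n"] += 1
        b["yes"] += approved
        b["dwell"].append(dwell)
    return {
        week: {
            "approval_rate": b["yes"] / b["n"],
            "median_dwell_s": sorted(b["dwell"])[len(b["dwell"]) // 2],
        }
        for week, b in sorted(buckets.items())
    }

# A rising approval_rate with collapsing dwell time is exactly the decay
# described above: the click is still happening, the decision no longer is.
print(habituation_curve(events))
```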

The Human Attention Budget Is the Constraint Your HITL System Silently Overspends

· 10 min read
Tian Pan
Software Engineer

The 50th decision your reviewer makes this morning is not the same quality as the first. The architecture diagram does not show this. The capacity model does not show this. The dashboard tracking "approvals per hour" actively hides it. And yet the entire premise of your human-in-the-loop system — that a person catches what the model gets wrong — is silently degrading from the moment the queue begins to fill.

Most HITL designs treat reviewer time as an infinite, fungible resource. The team sets a confidence threshold, routes everything below it to a human queue, and declares the system "safe." Six weeks later, the approval rate has crept up to 96%, the queue is twice as deep as the staffing model assumed, and a sample audit shows that reviewers are clicking "approve" on edge cases they would have flagged on day one. The system has not failed. It has rubber-stamped its way into looking like it is working.
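One way to make the constraint explicit, sketched with invented names: give each reviewer a daily decision budget and refuse to enqueue past it, so overflow becomes visible instead of being absorbed as degraded review quality.

```python
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    name: str
    daily_budget: int = 50   # decisions/day before quality degrades (assumed number)
    used_today: int = 0

@dataclass
class ReviewRouter:
    reviewers: list[Reviewer]
    deferred: list[str] = field(default_factory=list)

    def route(self, item_id: str, model_confidence: float, threshold: float = 0.9):
        """Auto-approve above threshold; otherwise spend reviewer budget or defer."""
        if model_confidence >= threshold:
            return ("auto_approved", item_id)
        for r in self.reviewers:
            if r.used_today < r.daily_budget:
                r.used_today += 1
                return ("human_review", r.name, item_id)
        # Budget exhausted: defer visibly instead of diluting review quality.
        self.deferred.append(item_id)
        return ("deferred", item_id)

router = ReviewRouter([Reviewer("alice"), Reviewer("bob")])
print(router.route("case-17", model_confidence=0.42))
```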

Pagination Is a Tool-Catalog Discipline: Why Agents Burn Context on List Returns

· 11 min read
Tian Pan
Software Engineer

Every well-designed HTTP API in your stack returns paginated results. Nobody loads a million rows into memory and hopes for the best. Yet the tools your agent calls return the entire list, and the agent dutifully reads it, because the function signature says list_orders() -> Order[] and the agent has no protocol for "give me the next page" the way a human user has scroll-and-load-more.
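A sketch of the tool-catalog fix, with hypothetical names and a toy in-memory store: replace the unbounded return with a cursor-and-limit contract, so the agent can ask for fewer rows and the trace can show whether it did.

```python
from dataclasses import dataclass

# Toy in-memory store standing in for the real order database.
_ORDERS = [{"order_id": f"o{i:05d}", "total": i * 10} for i in range(50_000)]

@dataclass
class Page:
    items: list[dict]
    next_cursor: str | None   # opaque resume point; None when the list is exhausted
    total_estimate: int       # lets the agent decide whether paging further is worth it

# Instead of list_orders() -> Order[], expose an explicit page contract.
def list_orders(cursor: str | None = None, limit: int = 50) -> Page:
    start = 0 if cursor is None else next(
        i + 1 for i, o in enumerate(_ORDERS) if o["order_id"] == cursor)
    window = _ORDERS[start:start + limit]
    return Page(
        items=window,
        next_cursor=window[-1]["order_id"] if start + limit < len(_ORDERS) else None,
        total_estimate=len(_ORDERS),
    )

page = list_orders(limit=100)
# The 50K-row customer now costs the agent 100 rows plus a stated choice
# to continue, not a context-window failure.
print(len(page.items), page.next_cursor, page.total_estimate)
```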

The agent burns tokens on rows it could have skipped. The long-tail customer with 50K records hits context-window failures the median customer never sees. The tool author cannot tell from the trace whether the agent needed all those rows or simply could not ask for fewer. And somewhere in your eval suite, the regression that would have flagged this never runs because every test fixture has fewer than 100 records.

Pagination is not a UI affordance. It is a load-shedding primitive — and the agent that consumes a tool without it is reimplementing every SELECT * FROM orders mistake the API designers in your company spent a decade learning to avoid.

Agent Memory Drift: Why Reconciliation Is the Loop You're Missing

· 11 min read
Tian Pan
Software Engineer

The most dangerous thing your long-running agent does is also the thing it does most confidently: answer from memory. The customer's address changed last Tuesday. The ticket the agent thinks is "open" was closed yesterday by a human. The product feature the agent has tidy explanatory notes about shipped in a different shape than the spec the agent read three weeks ago. None of this is hallucination in the textbook sense — the model is recalling exactly what it stored. The world simply moved while the agent was looking elsewhere.

Most teams treat memory like a write problem: what should the agent remember, how do we summarize, what's the embedding strategy, how do we keep the store from blowing up. That framing produces architectures that grow more confident as they grow more wrong. The harder problem — the one that determines whether your agent stays useful past week three — is reconciliation: the explicit, ongoing loop that compares what the agent thinks is true against what the underlying systems say is true right now.
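A sketch of that reconciliation loop, with hypothetical stores: walk the memory entries that carry a source-of-truth pointer, re-fetch from the system of record, and overwrite on divergence.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MemoryEntry:
    key: str            # e.g. "ticket:4811:status"
    value: str
    source: str         # which system of record can confirm this claim
    checked_at: datetime

# Hypothetical system of record the agent's memory claims to mirror.
SOURCES = {"ticketing": {"ticket:4811:status": "closed"}}

def reconcile(memory: list[MemoryEntry]) -> list[str]:
    """Compare remembered values against the live system; fix drift in place."""
    drifted = []
    for entry in memory:
        live = SOURCES[entry.source].get(entry.key)
        if live is not None and live != entry.value:
            drifted.append(f"{entry.key}: remembered {entry.value!r}, live {live!r}")
            entry.value = live                      # overwrite, don't argue
        entry.checked_at = datetime.now(timezone.utc)
    return drifted

memory = [MemoryEntry("ticket:4811:status", "open", "ticketing",
                      datetime(2025, 1, 1, tzinfo=timezone.utc))]
print(reconcile(memory))   # catches the ticket a human closed yesterday
```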

Agent Traffic Is Not Human Traffic: Designing APIs for Two Species of Caller

· 11 min read
Tian Pan
Software Engineer

The API you shipped two years ago was designed for a single species of caller: a person, behind a browser or a mobile client, clicking once and waiting for a response. That assumption is now wrong for roughly half the traffic on every interesting endpoint. The other half is agents — your own, your customers', third-party integrations using your endpoints as tools — and they have different physics. They burst. They retry forever. They parallelize. They parse error strings literally. They act on behalf of a human who will not be available to clarify intent when something breaks.

Most of the production weirdness landing in postmortems this year traces back to one architectural mistake: treating both species as the same caller class. Rate limits sized for human pacing get blown apart by an agent's parallel fanout. Error messages designed to be human-readable get parsed wrong by an agent that retries forever on a 400. Idempotency assumptions that humans satisfy by default get violated when an agent retries the same payload from a recovered checkpoint. Auth logs lose the ability to distinguish "the user did this" from "the user's agent did this on the user's behalf."

The fix is not a smarter WAF or a bigger rate-limit bucket. It is a deliberate API design that names two caller classes, treats their traffic as different shapes, and records the delegation chain so accountability survives the indirection.
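A sketch of what "naming the caller class" can mean at the request boundary, with assumed header names (the convention is illustrative, not a standard): classify each request, apply class-specific limits, and keep the delegation chain in the audit line.

```python
from dataclasses import dataclass

@dataclass
class CallerIdentity:
    caller_class: str         # "human" or "agent"
    subject: str              # who is actually making the calls
    on_behalf_of: str | None  # the delegation chain survives the indirection

# Assumed header convention for illustration only.
def classify(headers: dict[str, str]) -> CallerIdentity:
    agent = headers.get("X-Agent-Id")
    if agent:
        return CallerIdentity("agent", agent, headers.get("X-On-Behalf-Of"))
    return CallerIdentity("human", headers["X-User-Id"], None)

# Different physics, different budgets: per-class rate limits instead of one bucket.
RATE_LIMITS = {"human": 60, "agent": 600}   # requests/min, illustrative numbers

ident = classify({"X-Agent-Id": "support-bot-7", "X-On-Behalf-Of": "user:ada"})
audit_line = f"{ident.caller_class} {ident.subject} for {ident.on_behalf_of}"
print(audit_line, "limit:", RATE_LIMITS[ident.caller_class])
```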

The Agent Undo Button Is a Saga, Not a Stack

· 10 min read
Tian Pan
Software Engineer

A user clicks "undo" on an agent action that fanned out to twelve tool calls. The agent sent two emails, created a calendar invite, updated a CRM record, charged a card, and posted to a Slack channel. Three of those operations are non-reversible by API. Two are reversible only by an inverse operation that fires its own downstream notification. The remaining seven each have their own definition of idempotency that the planner never reconciled. The undo button you shipped looks reassuring. It quietly succeeds about 60% of the time and silently fails the rest.

This is not a UX bug. It is a saga-pattern problem that distributed-systems engineers have been working on for thirty years, and ignoring that lineage is the most expensive way to discover it.
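A sketch of what the saga framing buys you, with invented compensation records: every forward step registers either a compensating action or an explicit "cannot undo" marker, and the undo button returns an honest report instead of failing silently.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Step:
    name: str
    compensate: Optional[Callable[[], None]]   # None = irreversible by API

@dataclass
class Saga:
    steps: list[Step] = field(default_factory=list)

    def record(self, name, compensate=None):
        self.steps.append(Step(name, compensate))

    def undo(self) -> dict[str, list[str]]:
        """Run compensations in reverse order; surface what cannot be undone."""
        report = {"compensated": [], "irreversible": []}
        for step in reversed(self.steps):
            if step.compensate is None:
                report["irreversible"].append(step.name)   # e.g. the sent email
            else:
                step.compensate()
                report["compensated"].append(step.name)
        return report

saga = Saga()
saga.record("send_email")                                        # no inverse exists
saga.record("create_invite", compensate=lambda: print("cancel invite"))
saga.record("charge_card", compensate=lambda: print("issue refund"))
print(saga.undo())
# The honest answer to "undo" is a report, not a quietly partial rollback.
```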

The Carbon Math of Agent Workflows: A Token Budget Is Now an ESG Disclosure

· 10 min read
Tian Pan
Software Engineer

A stateless chat completion sips electricity. A median Gemini text prompt clocks in at about 0.24 Wh; a short GPT-4o query is around 0.3–0.4 Wh. These numbers are small enough that nobody puts them on a board deck.

An agent task is not a chat completion. A typical "go research this customer and draft a reply" workflow can fan out to 30+ tool calls, 10–15 model invocations, and a context window that grows with every step. The energy cost compounds with the call graph. By the time the agent returns, you have not consumed one unit of inference — you have consumed fifty to two hundred. Suddenly the per-task footprint is in the same order of magnitude as a video stream.
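The compounding is easy to make concrete. A back-of-envelope sketch using the figures above (the unit-count range is from this post; the per-unit midpoint is an assumption):

```python
# Back-of-envelope energy arithmetic for one agent task.
WH_PER_UNIT = 0.35                 # assumed midpoint of the 0.3-0.4 Wh range
UNITS_LOW, UNITS_HIGH = 50, 200    # effective inference units per agent task

low, high = UNITS_LOW * WH_PER_UNIT, UNITS_HIGH * WH_PER_UNIT
print(f"per-task footprint: {low:.0f}-{high:.0f} Wh")   # ~18-70 Wh

# Streaming video is commonly estimated at tens of Wh per hour, so a single
# agent task has climbed into the same order of magnitude.
```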

That arithmetic is about to matter outside the engineering org. The EU's CSRD makes Scope 3 emissions disclosure mandatory for in-scope companies, with machine-readable iXBRL reporting required from 2026. The SEC dropped Scope 3 from its final rule, but any multinational with EU operations still has to answer the question. Procurement teams have started adding "what is the carbon footprint per user task of your AI feature?" to vendor questionnaires. Most engineering teams cannot answer it, because nobody instrumented it.

The Chargeback Ledger for Compound AI Systems

· 10 min read
Tian Pan
Software Engineer

The first time the CFO asks "what does the assistant cost us per month," the engineering team produces a number. The second time, a different team produces a different number. The third time, finance produces a third number, and somebody opens a spreadsheet that re-derives the bill from spans because nobody trusts any of the previous answers. This is the moment a compound AI system stops being an architecture problem and becomes an accounting problem.

The shape of the failure is structural. A single user request to "summarize my last quarter's customer feedback" triggers an agent owned by team A, which calls a retrieval tool maintained by team B, which calls a model hosted by provider X, which streams results back through a re-ranking tool from team C, which calls a different model from provider Y. One click; five owners; two invoices that arrive a month apart. Standard FinOps primitives — cost centers, allocation tags, account-level rollups — were designed to slice infrastructure that already had stable owners. They do not compose cleanly across an internal call graph that crosses team boundaries on every request.

The 2026 State of FinOps report puts 98% of FinOps teams on the hook for AI spend, and the same survey lists real-time visibility into AI costs as the top tooling gap. That gap is not "we cannot see the bill." The gap is "we cannot see who caused what slice of the bill, fast enough that anyone changes their behavior before the bill arrives."
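A sketch of the attribution primitive that closes that gap, with invented span fields: price each span at emission time and roll cost up the call graph by owning team and provider, so one request yields one ledger instead of three competing spreadsheets.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Span:
    request_id: str
    owner: str          # team that owns this hop of the call graph
    provider: str       # whose invoice this slice lands on
    tokens: int
    usd_per_1k: float   # priced at emission time, not when the invoice arrives

def chargeback(spans: list[Span]) -> dict:
    """One user request -> cost attributed per owning team, per provider."""
    ledger = defaultdict(float)
    for s in spans:
        ledger[(s.request_id, s.owner, s.provider)] += s.tokens / 1000 * s.usd_per_1k
    return dict(ledger)

spans = [
    Span("req-1", "team-a", "provider-x", 40_000, 0.005),  # agent planning calls
    Span("req-1", "team-b", "provider-x", 12_000, 0.005),  # retrieval tool
    Span("req-1", "team-c", "provider-y", 8_000, 0.015),   # re-ranking step
]
print(chargeback(spans))   # who caused what slice, before the bill arrives
```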

Your Agent Has Two Release Pipelines, Not One

· 10 min read
Tian Pan
Software Engineer

A team I worked with shipped a "small prompt tweak" on a Wednesday afternoon. The same PR also added one new tool to the agent's registry — a convenience wrapper around an internal admin API that the prompt would now occasionally invoke. The eval suite passed. The canary looked clean. By Thursday morning a customer's billing record had been mutated by an agent acting on a prompt-injected support ticket, the audit trail showed the admin tool firing exactly as designed, and the on-call engineer's first instinct — roll back the prompt — did nothing useful, because the credential had already been used and the row had already been written.

The post-mortem framed it as a security review failure. It wasn't. It was a release-pipeline failure. The team had shipped two completely different asset classes — a behavioral nudge to the model and a new authority granted to the agent — through the same review, the same gate, and the same rollback story, as if they were the same kind of change. They aren't. And once you see them as two pipelines, most "agent governance" debates become much less mysterious.
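A sketch of the two-pipeline idea as a pre-merge check, under an assumed repo layout: prompt-text diffs ride the fast path, while anything that touches the tool registry triggers a separate authority review with its own revocation plan.

```python
# Hypothetical repo layout: prompts/ holds behavioral assets, tools/ and
# scopes/ hold the registry that grants the agent new authority.
PROMPT_PATHS = ("prompts/",)
AUTHORITY_PATHS = ("tools/", "scopes/")

def classify_change(changed_files: list[str]) -> dict[str, list[str]]:
    """Split a PR into the two asset classes distinguished above."""
    buckets = {"behavior": [], "authority": []}
    for f in changed_files:
        kind = "authority" if f.startswith(AUTHORITY_PATHS) else "behavior"
        buckets[kind].append(f)
    return buckets

def required_gates(buckets: dict[str, list[str]]) -> list[str]:
    gates = ["evals", "canary"]                    # both classes get these
    if buckets["authority"]:
        # New authority needs reviews a prompt rollback cannot substitute for.
        gates += ["security_review", "credential_scoping", "revocation_plan"]
    return gates

pr = ["prompts/support.md", "tools/admin_billing.py"]
print(required_gates(classify_change(pr)))
```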

The Ghost Employee in Your Audit Log: Agents With Borrowed Credentials Break IAM

· 10 min read
Tian Pan
Software Engineer

Pull up your SSO logs from this morning. Every Slack message, every GitHub PR, every calendar invite, every CI run, every Jira comment your AI agent produced — they all show the same thing the human-typed events show: a person's name, a session token, a green "successful authentication" line. Forensically, you have no way to tell which actions came from a human and which came from an agent the human launched and walked away from. That is the ghost employee problem, and almost every team that shipped agents in the last twelve months has it.

The shortcut that creates the problem is structural, not negligent. When you wire an agent into a tool, the easiest credential is the one already in the engineer's environment — their personal access token, their OAuth session, their device-bound SSO cookie. The alternative is a platform project: provision a first-class identity, federate it across every downstream service, wire it into the audit pipeline, build per-instance revocation. None of that ships in a sprint, and none of it shows up on a feature roadmap. So the agent borrows.
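A sketch of what a first-class agent identity looks like in the token itself, with illustrative claim values. The delegation shape is the one OAuth token exchange (RFC 8693) already defines: `sub` is who the action runs for, `act` is who actually made the call.

```python
import json

# Illustrative token claims: the agent is the actor, the human is the subject.
agent_token_claims = {
    "sub": "user:ada",                        # on whose behalf the action runs
    "act": {"sub": "agent:inbox-triage-7"},   # who actually made the call
    "scope": "jira:comment github:pr:read",
    "exp": 1767225600,                        # short-lived; revocable per instance
}

def audit_line(claims: dict) -> str:
    """What the SSO log should record instead of a bare human session."""
    actor = claims.get("act", {}).get("sub", claims["sub"])
    return f"{actor} acting for {claims['sub']} with scope [{claims['scope']}]"

print(audit_line(agent_token_claims))
# -> agent:inbox-triage-7 acting for user:ada ... no more ghost employee
assert json.dumps(agent_token_claims)   # claims stay machine-parseable
```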

When Your CLI Speaks English: Least Authority for Promptable Infrastructure

· 12 min read
Tian Pan
Software Engineer

A platform team I talked to this quarter shipped a Slack bot that wrapped kubectl and accepted English. An engineer typed "clean up the unused branches in staging." The bot helpfully deleted twelve namespaces — including one whose name matched the substring "branch" but which happened to host a long-lived integration environment that the mobile team had been using for a week. No exception was thrown. Every individual call the bot made was a permission the bot legitimately held. The post-mortem could not point to a broken access rule, because no rule was broken. The bot did exactly what its IAM policy said it could do.

The Unix philosophy was a containment strategy hiding inside an aesthetic preference. Small tools with narrow surfaces meant that the blast radius of any single command was bounded by the verbs and flags it accepted. rm -rf was dangerous because everyone agreed it was; kubectl delete namespace required the operator to type out the namespace, and the typing was the gate. The principle of least authority was easy to enforce because authority was lexical: the shape of the command told you the shape of the action.

Then the wrappers started accepting English. Now "the shape of the command" is whatever the LLM decided it meant.
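One containment shape that restores the lexical gate, sketched with invented names: the LLM proposes a structured plan, but every destructive step must name an exact resource, and the named resources are checked against a protected list before anything runs.

```python
from dataclasses import dataclass

@dataclass
class PlanStep:
    verb: str            # e.g. "delete_namespace"
    resource: str        # exact name, no substring matching

DESTRUCTIVE_VERBS = {"delete_namespace", "drop_table"}
PROTECTED = {"mobile-integration"}          # long-lived environments, pinned

def validate(plan: list[PlanStep]) -> list[str]:
    """The lexical gate, reinstated: exact names, checked before execution."""
    errors = []
    for step in plan:
        if step.verb in DESTRUCTIVE_VERBS and step.resource in PROTECTED:
            errors.append(f"refusing {step.verb} on protected {step.resource!r}")
    return errors

# The LLM interpreted "clean up the unused branches" as this plan; the wrapper
# makes the interpretation visible and checkable instead of executing it raw.
plan = [PlanStep("delete_namespace", "feature-branch-123"),
        PlanStep("delete_namespace", "mobile-integration")]
for line in validate(plan) or ["plan approved: " +
                               ", ".join(f"{s.verb} {s.resource}" for s in plan)]:
    print(line)
```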