4 posts tagged with "iam"

Agent IAM Is Not Service IAM: Why OAuth Breaks When Intent Is Constructed at Runtime

· 12 min read
Tian Pan
Software Engineer

The bearer token model has one assumption that agents quietly violate: the caller knows what they want when they ask. OAuth scopes, IAM roles, and API keys are all designed around a principal whose intent is fixed before authentication begins. Your CI runner has stable intent. Your microservice has stable intent. An agent does not. An agent's intent is assembled at request time out of a user prompt, a system prompt, retrieved documents, and the outputs of tools that may themselves have been written by an attacker. By the time the agent reaches for a token, the policy decision that the IAM layer has to make has already been made — by inputs the IAM layer never saw.

This is why the same auth pattern that has worked for fifteen years of service-to-service traffic is now producing a class of incidents nobody has good language for. A prompt injection lifts a long-lived bearer token. An agent "remembers" a permission across sessions because the token outlived the user's intent. A multi-step task that legitimately needs three scopes holds all of them for the entire session instead of acquiring and releasing them per step. None of these are OAuth bugs in the strict sense. They are consequences of stretching a model that assumes static intent to cover a caller whose intent is reconstructed every turn.
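The per-step alternative described above can be sketched in a few lines. This is a minimal illustration, not a real library: the `TokenBroker` class and its `issue`/`revoke`/`step` names are assumptions standing in for whatever short-lived-credential issuer your platform provides.

```python
from contextlib import contextmanager

class TokenBroker:
    """Hypothetical broker: issues a single-scope token per step and
    revokes it when the step finishes, instead of letting the agent
    hold all three scopes for the whole session."""

    def __init__(self):
        self._active = set()

    def issue(self, scope):
        token = f"tok-{scope}-{len(self._active)}"
        self._active.add(token)
        return token

    def revoke(self, token):
        self._active.discard(token)

    @contextmanager
    def step(self, scope):
        # The scope is held only for the duration of this step.
        token = self.issue(scope)
        try:
            yield token
        finally:
            self.revoke(token)

broker = TokenBroker()
with broker.step("calendar:read") as tok:
    pass  # call the calendar API with `tok` here
# The token is revoked on exit; the next step must re-justify its scope.
```

The design point is the `finally` clause: release is structural, not a cleanup step someone has to remember, so a prompt injection landing mid-task finds only the scope the current step needs.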

Agent Credential Blast Radius: The Principal Class Your IAM Model Never Enumerated

· 11 min read
Tian Pan
Software Engineer

The security org spent a decade killing off the "service account that can do everything." Scoped tokens, short-lived credentials, JIT access, per-action audit — the whole least-privilege playbook landed and stuck. Then the AI team wired up an agent, the prompt asked for a tool catalog, and the engineer requested the broadest OAuth scope the platform would issue. The deprecated pattern is back, wearing new clothes, and this time the principal calling the API is a stochastic loop nobody is sure how to scope.

The agent has read-write on the calendar, the file store, the CRM, and the deploy pipeline because the API surface couldn't be enumerated up front. The token is long-lived because no one wired the refresh path. The audit log records the bearer, not the action. And IAM owns human and service identity, the platform team owns workload identity, the AI team owns the agent's effective permissions, and the union of those three sets is owned by no one.
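The ownership gap above is easy to see as set arithmetic. The team names and scope strings below are illustrative assumptions; the point is that each team reviews only its own slice while the agent operates with the union.

```python
# Hypothetical grant sets, one per owning team (names are assumptions).
iam_grants = {"calendar:rw", "crm:rw"}       # IAM team: human/service identity
workload_grants = {"filestore:rw"}           # platform team: workload identity
agent_tool_grants = {"deploy:write"}         # AI team: the agent's tool catalog

# What the agent can actually do is the union of all three.
effective = iam_grants | workload_grants | agent_tool_grants

# What an IAM-only access review would never even see:
unreviewed = effective - iam_grants
```

No single team's audit covers `effective`, which is exactly the "owned by no one" failure the excerpt describes.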

The Ghost Employee in Your Audit Log: Agents With Borrowed Credentials Break IAM

· 10 min read
Tian Pan
Software Engineer

Pull up your SSO logs from this morning. Every Slack message, every GitHub PR, every calendar invite, every CI run, every Jira comment your AI agent produced — they all show the same thing the human-typed events show: a person's name, a session token, a green "successful authentication" line. Forensically, you have no way to tell which actions came from a human and which came from an agent the human launched and walked away from. That is the ghost employee problem, and almost every team that shipped agents in the last twelve months has it.

The shortcut that creates the problem is structural, not negligent. When you wire an agent into a tool, the easiest credential is the one already in the engineer's environment — their personal access token, their OAuth session, their device-bound SSO cookie. The alternative is a platform project: provision a first-class identity, federate it across every downstream service, wire it into the audit pipeline, build per-instance revocation. None of that ships in a sprint, and none of it shows up on a feature roadmap. So the agent borrows.
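The fix the excerpt gestures at, a first-class agent identity, makes the forensic question a field check rather than a guess. This sketch assumes an audit schema with separate `actor` and `on_behalf_of` fields; the field names and the `agent:` prefix convention are illustrative, not from any specific audit system.

```python
from dataclasses import dataclass

@dataclass
class AuditEvent:
    actor: str          # the principal that actually made the call
    on_behalf_of: str   # the human who launched it, if delegated
    action: str

def is_agent_action(event: AuditEvent) -> bool:
    # With a dedicated agent principal, agent traffic is distinguishable
    # by construction; with borrowed credentials, both rows below would
    # show only "alice@example.com".
    return event.actor.startswith("agent:")

events = [
    AuditEvent("alice@example.com", "alice@example.com", "github.pr.create"),
    AuditEvent("agent:release-bot", "alice@example.com", "github.pr.create"),
]
agent_actions = [e for e in events if is_agent_action(e)]
```

Keeping `on_behalf_of` preserves accountability to the launching human while `actor` preserves the ability to revoke or rate-limit the agent independently.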

AI Agent Permission Creep: The Authorization Debt Nobody Audits

· 10 min read
Tian Pan
Software Engineer

Six months after a pilot, your customer data agent has write access to production databases it hasn't touched since week one. Nobody granted that access maliciously. Nobody revoked it either. This is AI agent permission creep, and it's now the leading cause of authorization failures in production agentic systems.

The pattern is straightforward: agents start with a minimal permission set, integrations expand ("just add read access to Salesforce for this one workflow"), and the tightening-after-deployment step gets deferred indefinitely. Unlike human IAM, where quarterly access reviews are at least nominally enforced, agent identities sit entirely outside most organizations' access review processes. The 2026 State of AI in Enterprise Infrastructure Security report (n=205 CISOs and security architects) found that 70% of organizations grant AI systems more access than a human in the same role. Organizations with over-privileged AI reported a 76% security incident rate versus 17% for teams enforcing least privilege — a 4.5x difference.
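The missing review step is mechanically simple: diff what the agent was granted against what it actually exercised during the review window. The scope names below are assumptions, and in practice `used` would come from tool-call or gateway logs.

```python
def stale_grants(granted: set[str], used: set[str]) -> set[str]:
    """Grants not exercised in the review window: revocation candidates."""
    return granted - used

granted = {"salesforce:read", "prod-db:write", "s3:read"}
used = {"salesforce:read", "s3:read"}        # observed in tool-call logs

print(stale_grants(granted, used))           # prints {'prod-db:write'}
```

Run on a schedule, this is the agent-identity analogue of the quarterly human access review the excerpt says agents currently escape.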