Consent Decay in Agentic Systems: When Your Authorization Becomes Ambient

· 10 min read
Tian Pan
Software Engineer

Your agent worked fine for three months. It had read access to the CRM, write access to the ticketing system, and permission to send emails on behalf of the user. You scoped it carefully at deployment time and moved on. Six months later, it's filing support tickets for situations the user never imagined it would encounter, sending emails that reference internal context the user would have kept private, and pulling data across systems in ways that technically fit the granted scopes but are far outside the spirit of any authorization the user consciously gave.

That's consent decay. The authorization didn't change. The agent's behavior did — and the static permissions you granted at setup time followed along, enabling whatever the agent decided to do next.

This problem is structurally different from a credential theft or a misconfigured firewall. The agent is using credentials legitimately. The scopes are correct. Nothing in your access control logs looks wrong. What's wrong is that the permissions granted in one context are being exercised in a completely different context — one the user never signed off on and might actively object to if asked.

Why OAuth Scopes Break Down for Long-Lived Agents

OAuth 2.0 was designed for a different world: a human clicks "authorize," a token is issued with declared scopes, and the application uses it within a bounded session. The scope represents maximum authority for a narrow interaction. The human is present, the context is clear, and the authorization decision maps cleanly onto what happens next.

None of that holds for a long-lived autonomous agent.

When you hand an agent a service account or OAuth token at setup time, you're granting what security researchers call ambient authority: permissions that exist by virtue of the agent's context, not because they were explicitly granted for the current task. An agent with a "write:tickets" scope will write tickets whenever it decides writing a ticket is useful — not just in the scenarios the user had in mind when they clicked authorize.

The scope creep compounds over time through predictable mechanisms. An agent needs to handle an edge case, so an engineer grants broader permissions. A temporary scope issued for a one-off task never gets revoked. New API integrations are wired up to the same credentials because fine-grained per-integration permissions are operationally painful. By month three of a deployment, the token's actual authority may be radically different from what was originally issued — and nobody has a clean picture of the delta.

A 2025 industry survey found that 90% of AI agents are over-permissioned relative to the tasks they actually need to perform. That's not a configuration problem. It's a structural consequence of using authorization models designed for bounded human sessions on actors that operate autonomously across extended time horizons.

Consent decay manifests across three distinct dimensions, and conflating them makes it harder to address each correctly.

Temporal decay: The user's intent at authorization time diverges from the agent's current operating context. A user who authorized a research agent to access their email in October had a specific use case in mind. By December, that same access is being used to draft responses on topics the user never discussed with the agent. The scope hasn't changed. The use case has.

Behavioral drift: The agent's behavior evolves through accumulated context, updated model weights, or tool-use patterns that weren't present at authorization time. An enterprise research agent, observed closely over a six-week deployment, showed measurable increases in tool-use entropy (routing through secondary APIs not in the original behavior profile) and confidence miscalibration (omitting conflicting evidence it would have surfaced earlier). The authorization was issued for a system with different observable behavior than the one now exercising it.

Delegation opacity: As agents spawn sub-agents, pass context to orchestrators, or invoke specialist workers, the original user's consent is abstracted across a chain of delegation relationships that nobody explicitly authorized. A user who authorized an agent to manage their calendar didn't consent to a sub-agent accessing their contacts list to resolve meeting attendees — even if that sub-agent's behavior is technically within the delegated agent's scopes.

Static Scopes Cannot Express Temporal Intent

The fundamental problem is representational: OAuth scopes are a fixed contract. They declare maximum authority at the moment of authorization, with no mechanism for expressing the user's intent about when, in what context, or for what specific purpose that authority should be exercised.

This is sufficient for human-driven applications because the human's presence at the authorization point implies ongoing judgment. When you authorize a GitHub app to read your repositories, you're making that decision in a context you understand, and you'll be present when the app does anything consequential with it.

Agents remove that implicit check. The authorization decision happens once; the agent exercises the resulting authority continuously, in contexts the user doesn't observe in real time, for purposes that evolve as the agent encounters new situations. Static scopes cannot express "this permission is valid only when the agent is executing a task explicitly assigned by the user" or "this write permission should not be exercised if the action touches more than three records at once."

The IETF and OAuth community are actively working on delegation profiles for AI agents, and the emerging consensus is that agents need richer authorization primitives than scopes: explicit delegation relationships, purpose binding, expiry tied to task completion rather than calendar time, and constraints that scope authority to specific resources rather than entire data classes.
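A minimal sketch of what those richer primitives might look like in practice. The field names and the `DelegatedGrant` structure are illustrative assumptions, not any draft's wire format; the point is that purpose binding, task-tied expiry, and per-resource constraints become first-class, checkable properties rather than implicit intent.

```python
from dataclasses import dataclass, field
import time

# Hypothetical grant structure sketching the primitives named above:
# purpose binding, expiry tied to task completion, and constraints
# scoped to specific resources rather than whole data classes.
@dataclass
class DelegatedGrant:
    principal: str            # who delegated the authority
    agent: str                # who may exercise it
    purpose: str              # the task that justifies the grant
    resources: set[str]       # specific resources, not entire classes
    max_records_per_write: int = 3
    task_complete: bool = False
    issued_at: float = field(default_factory=time.time)

    def permits(self, resource: str, records_touched: int) -> bool:
        """Authority is valid only while the binding task is open
        and the action stays within the declared constraints."""
        if self.task_complete:
            return False          # expiry tied to task completion
        if resource not in self.resources:
            return False          # resource-level, not class-level, scope
        return records_touched <= self.max_records_per_write

grant = DelegatedGrant(
    principal="user:alice", agent="agent:support-bot",
    purpose="triage ticket #8421", resources={"tickets/8421"},
)
assert grant.permits("tickets/8421", records_touched=1)
assert not grant.permits("tickets/9999", records_touched=1)  # out of scope
grant.task_complete = True
assert not grant.permits("tickets/8421", records_touched=1)  # task is over
```

Note that a constraint like "no write touching more than three records" is expressible here precisely because the check runs per action, not once at authorization time.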

Just-in-Time Provisioning: The Core Pattern

The cleanest structural response to consent decay is just-in-time (JIT) permission provisioning: instead of granting an agent a persistent credential with broad scopes at deployment time, mint a scoped credential at the moment a specific task is initiated, and revoke it when the task completes.

The core properties JIT provisioning enforces:

  • Temporal least privilege: Credentials exist only during the window the agent needs them. An agent executing a two-minute task gets a two-minute credential.
  • Task-scoped authority: The credential is minted with the permissions required for the specific task, derived from the task definition — not from a static role.
  • Automatic revocation: Expiry is tied to task completion, not calendar time. When the task ends, the credential ends.
  • Auditability: Every credential mint maps to a specific delegated authority and an explicit task context, making the authorization chain queryable.

The implementation pattern that's emerging in production is an AI identity gateway: a service that intercepts agent requests for credentials, evaluates the agent's current task context against policy, mints a minimally scoped token with the shortest valid TTL, and automatically invalidates it on task completion. The gateway enforces the authorization policy at the infrastructure level rather than relying on the agent to self-police its scope.
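The gateway pattern can be sketched in a few dozen lines. The policy table, scope names, and TTLs below are illustrative assumptions; a production gateway would back this with a real policy engine and token store.

```python
import secrets
import time

# Illustrative task-to-authority policy: each task class maps to the
# minimal scopes it needs and a short TTL. Names are assumptions.
TASK_POLICY = {
    "file_ticket":   {"scopes": {"write:tickets"}, "ttl_s": 120},
    "summarize_crm": {"scopes": {"read:crm"},      "ttl_s": 300},
}

class IdentityGateway:
    def __init__(self):
        self._live = {}   # token -> (scopes, expires_at)

    def mint(self, agent_id: str, task: str) -> str:
        """Mint a minimally scoped, short-lived credential for one task."""
        policy = TASK_POLICY[task]          # unknown task -> no credential
        token = secrets.token_urlsafe(16)
        self._live[token] = (policy["scopes"], time.time() + policy["ttl_s"])
        return token

    def check(self, token: str, scope: str) -> bool:
        entry = self._live.get(token)
        if entry is None:
            return False
        scopes, expires_at = entry
        return scope in scopes and time.time() < expires_at

    def revoke(self, token: str) -> None:
        """Called by the orchestrator when the task completes (or fails)."""
        self._live.pop(token, None)

gw = IdentityGateway()
tok = gw.mint("agent:support-bot", "file_ticket")
assert gw.check(tok, "write:tickets")
assert not gw.check(tok, "read:crm")     # never granted for this task
gw.revoke(tok)                           # task done -> credential gone
assert not gw.check(tok, "write:tickets")
```

The key design choice is that authority derives from the task definition at mint time, so there is no persistent credential whose scope can silently drift away from any task's actual needs.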

This approach is non-trivial to operate. Agents in complex workflows may execute dozens of sub-tasks, each requiring different credentials. The gateway needs to understand task boundaries, evaluate task-level policy in real time, and handle revocation cleanly even when agents fail mid-task. But these are engineering problems with known solutions. The alternative — ambient authority accumulating over months — has no good solution.

Audit Trails as Authorization Infrastructure

Most teams treat audit logging as a compliance artifact: capture what happened, store it somewhere retrievable, produce it when asked. For agentic systems, this framing is wrong. Audit trails are authorization infrastructure.

The distinction matters because consent decay is invisible without instrumentation that tracks authorization intent, not just authorization events. A token access log tells you that the agent accessed the billing API at 14:32. It doesn't tell you whether that access was within the spirit of the task the user assigned, whether it was a foreseeable consequence of the delegated work, or whether a human reviewing the action would have approved it.

A useful audit trail for an agentic system captures data at multiple layers:

  • Delegation layer: Who initiated the task? What did they explicitly request? What authority were they exercising when they issued the request?
  • Orchestration layer: How did the agent route the task? Which sub-agents or tools were invoked, and with what authority?
  • Execution layer: What specific actions were taken? What data was accessed? What side effects were produced?
  • Deviation layer: Where did agent behavior diverge from predicted execution paths? Which tool invocations exceeded expected scope?

The deviation layer is where consent decay becomes visible. If an agent's observed tool-use pattern for a given task class consistently deviates from the task definition's implied scope, that's a signal that either the task definitions are underspecified or the agent has drifted from the behavior that was implicitly authorized.
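A rough sketch of that deviation check: compare a task run's observed tool invocations against the expected profile for its task class and flag the off-profile share. The profile contents and the metric are illustrative assumptions, not a standard measure.

```python
# Expected tool profile per task class -- illustrative, would be learned
# or declared alongside the task definition in a real system.
EXPECTED_TOOLS = {
    "research_summary": {"search", "fetch_doc", "draft_text"},
}

def deviation_report(task_class: str, observed_calls: list[str]) -> dict:
    """Flag tool invocations outside the task class's expected profile."""
    expected = EXPECTED_TOOLS[task_class]
    unexpected = set(observed_calls) - expected
    return {
        "unexpected_tools": sorted(unexpected),
        # share of calls routed through tools outside the profile
        "deviation_ratio": sum(c in unexpected for c in observed_calls)
                           / max(len(observed_calls), 1),
    }

report = deviation_report(
    "research_summary",
    ["search", "fetch_doc", "send_email", "draft_text", "send_email"],
)
assert report["unexpected_tools"] == ["send_email"]
assert report["deviation_ratio"] == 0.4   # 2 of 5 calls off-profile
```

A sustained rise in this ratio across runs of the same task class is exactly the drift signal described above: either the task definitions are underspecified or the agent's behavior has moved past what was implicitly authorized.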

Delegation Chains and the Shrinking Authority Property

Multi-agent architectures introduce a specific failure mode: a parent agent delegates to a sub-agent, and through that delegation the sub-agent ends up with access to resources the original user never intended to expose.

The structural fix is enforcing what researchers call the shrinking authority property: authority can only decrease as it flows down a delegation chain, never increase. A user who delegates to an orchestrating agent delegates a subset of their authority. That agent, when delegating to a specialist sub-agent, can only pass along a subset of what it received. No delegation step can grant authority the delegating principal doesn't possess.

This sounds obvious, but most multi-agent implementations don't enforce it. Context is passed between agents as unstructured text. Tool credentials are shared at the session level. Sub-agents inherit the same service account as the parent. The delegation relationships are implicit and untracked — which means when a sub-agent accesses something it shouldn't have, there's no clean authorization chain to audit.

Enforcing shrinking authority requires explicit delegation modeling: every agent-to-agent delegation is a tracked relationship with documented scope constraints. When the orchestrator spawns a sub-agent for a specific task, it specifies what credentials the sub-agent receives, what resources it can access, and what actions are within scope. The infrastructure enforces these constraints rather than relying on the sub-agent to stay within them.
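The shrinking-authority invariant is simple to state in code: every delegation step must request a subset of the delegator's own scopes, and the chain stays queryable. A minimal sketch, with illustrative principal names and scope strings:

```python
class Delegation:
    """A tracked delegation link enforcing the shrinking-authority property."""

    def __init__(self, principal: str, scopes: set[str], parent=None):
        if parent is not None and not scopes <= parent.scopes:
            # No step may grant authority the delegator doesn't possess.
            raise PermissionError(
                f"{principal} requested scopes beyond delegator's authority"
            )
        self.principal, self.scopes, self.parent = principal, scopes, parent

    def delegate(self, child: str, scopes: set[str]) -> "Delegation":
        return Delegation(child, scopes, parent=self)

    def chain(self) -> list[str]:
        """The auditable authorization chain, root first."""
        link, names = self, []
        while link:
            names.append(link.principal)
            link = link.parent
        return names[::-1]

user = Delegation("user:alice",
                  {"calendar:read", "calendar:write", "contacts:read"})
orchestrator = user.delegate("agent:scheduler",
                             {"calendar:read", "calendar:write"})
worker = orchestrator.delegate("agent:attendee-resolver", {"calendar:read"})
assert worker.chain() == ["user:alice", "agent:scheduler",
                          "agent:attendee-resolver"]

# The orchestrator was never delegated contacts:read, so it cannot
# pass it on -- even though the root user holds that scope.
try:
    orchestrator.delegate("agent:rogue", {"contacts:read"})
except PermissionError:
    pass
else:
    raise AssertionError("shrinking authority not enforced")
```

Note how this resolves the calendar example above: the sub-agent cannot reach the contacts list unless the user's delegation to the orchestrator explicitly included it, and the `chain()` output is the audit trail that implicit service-account sharing destroys.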

What Static OAuth Was Never Designed to Handle

The deeper issue is that static OAuth scopes were never designed for actors that:

  • Operate for months without human oversight of individual actions
  • Make consequential decisions autonomously based on accumulated context
  • Spawn other actors and delegate authority through those actors
  • Evolve their behavior over time as a function of task history

OAuth's security model assumes the authorizing human exercises judgment at the point of authorization. The scope represents a bet that the authorized application won't do anything the human would object to. For bounded, human-controlled applications, that bet is reasonable. For autonomous agents operating at scale over long time horizons, it's a structural misalignment between the authorization model and the operational reality.

Fixing this requires supplements to static scopes, not replacements for them. Task-scoped credentials provide the temporal and contextual binding that OAuth lacks. Explicit delegation relationships provide the accountability chain that service accounts obscure. Behavioral monitoring provides visibility into drift between authorized and observed patterns. Audit trails that capture intent, not just events, make consent decay visible before it becomes a security incident.

The shift is from authorization as a setup-time decision to authorization as a continuous runtime property. Permissions aren't granted and forgotten — they're actively managed, task-scoped, and continuously verified against the intent that originally justified them.

Building this infrastructure is operationally expensive. But the alternative is deploying increasingly capable autonomous agents on authorization models designed for a fundamentally different threat model — and discovering, six months later, that your agent has been exercising authority in ways no user ever consented to.
