Your platform engineering team just spent six months building an internal developer portal with RBAC, quotas, and governance policies for human engineers. Last week, three different teams deployed AI agents that bypass all of it.
The agents authenticate with static API keys shared across services. They inherit broad permissions from the systems they connect to. Nobody knows what they’re accessing, who “owns” them, or how to revoke their access when something goes wrong.
This isn’t a hypothetical. According to recent research, while 80.9% of technical teams have moved AI agents into active testing or production, only 14.4% report all agents going live with full security/IT approval. The identity crisis is real: only 18% of security leaders are highly confident their current IAM systems can effectively manage agent identities.
The Human-Centric Architecture Trap
Our IAM infrastructure was designed with humans in mind:
- Users have email addresses and can reset passwords
- Role-based access control assumes humans understand their job responsibilities
- Session timeouts protect against forgotten logins
- Audit trails track who did what for compliance
But AI agents don’t fit this model:
- They don’t have email addresses — they authenticate via API keys, service accounts, or OAuth tokens
- RBAC grants overly broad permissions — defining granular, task-specific access for dynamic agents is operationally complex
- They don’t “forget” to log out — agents run continuously and need persistent access
- Audit trails show “service account” activity — not which agent, which model version, or which human authorized it
When 44% of agents use static API keys, 43% use username/password combinations, and 35% rely on shared service accounts, we’re essentially treating them like legacy batch jobs from 2010. The infrastructure hasn’t caught up to the reality that these are autonomous, decision-making entities that need their own identity category.
The Governance Gap: When Adoption Outpaces Control
Here’s the uncomfortable truth: platform engineering teams are being bypassed. Teams deploy agents using their existing credentials, cloud IAM roles, or developer accounts. The agents “just work” — until they don’t.
The practical consequences:
- No inventory — You can’t govern what you can’t see. How many agents are running? Which systems are they accessing?
- No ownership — When an agent misbehaves, who’s responsible? The developer who deployed it? The team that owns the model? The security team that should have caught it?
- No boundaries — Most agents inherit broad permissions from the systems they connect to, with no zero-trust boundaries governing what they can actually reach
- No audit trail — When an incident happens, you see that a service account made 10,000 API calls. But which agent? For which task? Authorized by whom?
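To make the attribution gap concrete, here is a sketch contrasting a typical service-account log line with a hypothetical agent-aware audit event. All field names and values (`AgentAuditEvent`, the agent ID, the model version string) are illustrative, not a real schema:

```python
from dataclasses import dataclass, asdict
import json

# A typical service-account log line: all you learn is that *something*
# holding this credential made a lot of calls.
legacy_event = {"principal": "svc-data-pipeline",
                "action": "GET /customers", "count": 10000}

# A hypothetical agent-aware audit event: the same activity, but attributable.
@dataclass
class AgentAuditEvent:
    agent_id: str        # which agent instance acted
    model_version: str   # which model produced the decision
    task_id: str         # the task the agent was executing
    authorized_by: str   # the human who approved the deployment
    action: str

event = AgentAuditEvent(
    agent_id="billing-reconciler-7",
    model_version="model-2025-04",
    task_id="task-8831",
    authorized_by="j.doe@example.com",
    action="GET /customers",
)
print(json.dumps(asdict(event), indent=2))
```

The extra four fields are exactly what incident responders need and what shared service accounts erase.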
Research shows 40% of organizations are increasing identity and security budgets specifically to address AI agent risks, while 34% have established dedicated budget lines for agent governance. The market is signaling this is a real problem.
Three Approaches to Agent Identity
The industry is converging on treating agents as first-class identity primitives, not as users or services. Here’s what that looks like:
1. Agent Identity Gateways
Purpose-built infrastructure that sits between agents and your platform:
- Dynamic authentication — On-behalf-of (OBO) token exchange instead of static keys
- Runtime authorization — Policy evaluation at request time, not role assignment at deployment time
- Continuous traceability — Every agent action logged with model version, prompt context, and human authorizer
- Unified orchestration — Single control plane for all agent identities across your infrastructure
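As a minimal sketch of the first two properties, a gateway can mint short-lived on-behalf-of tokens and evaluate scope on every request rather than once at deployment. The class and method names below (`AgentIdentityGateway`, `obo_exchange`) are illustrative and not any vendor's API:

```python
import time
import uuid

class AgentIdentityGateway:
    """Toy in-memory gateway: exchanges a registration for a short-lived,
    scoped token tied to an agent and its human authorizer."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.tokens = {}  # token -> (agent_id, scopes, authorizer, expiry)

    def obo_exchange(self, agent_id, human_authorizer, requested_scopes):
        # On-behalf-of exchange: a short-lived token instead of a
        # long-lived static API key.
        token = uuid.uuid4().hex
        self.tokens[token] = (agent_id, set(requested_scopes),
                              human_authorizer, time.time() + self.ttl)
        return token

    def authorize(self, token, scope):
        # Runtime policy evaluation: checked per request, not per deployment.
        entry = self.tokens.get(token)
        if entry is None:
            return False
        _agent_id, scopes, _authorizer, expiry = entry
        if time.time() > expiry:
            del self.tokens[token]  # expired tokens are revoked on sight
            return False
        return scope in scopes

gw = AgentIdentityGateway(ttl_seconds=300)
tok = gw.obo_exchange("report-agent-1", "j.doe@example.com", {"crm:read"})
print(gw.authorize(tok, "crm:read"))   # within scope and TTL
print(gw.authorize(tok, "crm:write"))  # never granted
```

Revocation falls out for free: delete the token mapping and the agent's access ends at the next request, something a shared static key cannot offer.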
2. Fine-Grained Authorization (FGA)
Extending beyond traditional RBAC to handle hierarchical, resource-scoped access:
- Not just “Can this agent access the database?” but “Can this agent query the customer table, filtered to the accounts it’s been assigned?”
- Context-aware permissions based on agent task, time of day, data sensitivity
- Temporary permission grants with automatic expiration
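The FGA idea can be illustrated with relationship tuples of the form (subject, relation, object), in the style of ReBAC systems. The `FGAStore` below is a toy model, not the API of WorkOS FGA or any other product:

```python
import time

class FGAStore:
    """Toy relationship-tuple store with optional expiring grants."""

    def __init__(self):
        self.tuples = set()  # (subject, relation, obj)
        self.expiry = {}     # temporary grants -> expiration timestamp

    def grant(self, subject, relation, obj, ttl=None):
        self.tuples.add((subject, relation, obj))
        if ttl is not None:
            self.expiry[(subject, relation, obj)] = time.time() + ttl

    def check(self, subject, relation, obj):
        key = (subject, relation, obj)
        if key not in self.tuples:
            return False
        exp = self.expiry.get(key)
        if exp is not None and time.time() > exp:
            self.tuples.discard(key)  # automatic expiration, no cleanup job
            return False
        return True

fga = FGAStore()
# Not "can the agent reach the database" but "can it read *this* account".
fga.grant("agent:support-bot", "reader", "account:4411")
print(fga.check("agent:support-bot", "reader", "account:4411"))  # granted
print(fga.check("agent:support-bot", "reader", "account:9001"))  # not granted
```

A real system would also evaluate context (task, time of day, data sensitivity) at check time; the tuple store is just the resource-scoping half of the picture.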
3. Infrastructure-Level Guardrails
Default safe by design:
- Mandatory inventory — Agents can’t deploy without registering identity, purpose, owner
- Default deny boundaries — Agents get minimal permissions by default, must request escalation
- Automated alerts — New identities without defined owners trigger security review
- Graduated policies — Low-risk agents get streamlined approval, high-risk agents require security sign-off
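The four guardrails above can be sketched as one deployment gate. Everything here is hypothetical scaffolding, not a real platform API: registration is mandatory, permissions start empty (default deny), unowned identities are rejected outright, and escalation follows a graduated policy:

```python
class AgentRegistry:
    """Toy deployment gate: no registration, no deployment."""

    def __init__(self):
        self.agents = {}

    def register(self, agent_id, purpose, owner, risk="low"):
        # Mandatory inventory: identity, purpose, owner, or nothing deploys.
        if not owner:
            raise ValueError(f"refusing to register {agent_id}: no owner on file")
        self.agents[agent_id] = {
            "purpose": purpose,
            "owner": owner,
            "permissions": set(),  # default deny: empty until escalated
            "risk": risk,
        }

    def request_escalation(self, agent_id, permission):
        agent = self.agents[agent_id]
        # Graduated policy: high-risk agents queue for security sign-off,
        # low-risk requests take the streamlined path.
        if agent["risk"] == "high":
            return "pending-security-review"
        agent["permissions"].add(permission)
        return "approved"

registry = AgentRegistry()
registry.register("etl-agent", purpose="nightly sync", owner="data-team")
print(registry.request_escalation("etl-agent", "warehouse:read"))  # approved
try:
    registry.register("mystery-agent", purpose="?", owner="")
except ValueError as err:
    print(err)  # unowned identity blocked, which is itself the alert signal
```

The point of the sketch: the safe behavior is the default path, and the unsafe one (an ownerless agent) fails loudly instead of silently inheriting credentials.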
The Question Platform Engineering Teams Need to Answer
If your CI/CD pipeline blocks deployments that fail security scans, should it also block agents that exceed permission boundaries?
Cisco, Strata, and others are betting the answer is yes. The emerging pattern: preventive controls that treat agent deployment like code deployment — with gates, approvals, and automated policy enforcement.
But this raises uncomfortable questions:
- Who sets the permission thresholds? Security? Platform? Individual teams?
- How do you balance innovation velocity with governance requirements?
- What’s the appeals process when a legitimate agent is blocked?
- How do you avoid creating a bureaucratic bottleneck that teams route around?
What We’re Doing (And What We’re Still Figuring Out)
We’re three months into implementing agent identity governance at our company. Here’s our current approach:
What’s working:
- Required agent registration in our service catalog (lightweight form: purpose, owner, system access)
- Integrated agent identity checks into our existing IAM review process
- Created an “Agent” identity type in our RBAC system with tighter default permissions
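For a concrete picture of the registration form and the tighter “Agent” identity type, here is a sketch of what our catalog entry and per-type defaults roughly look like. Field names and values are illustrative, not our actual schema:

```python
# A lightweight service-catalog entry: purpose, owner, system access.
AGENT_CATALOG_ENTRY = {
    "name": "invoice-classifier",
    "purpose": "label incoming invoices for the finance queue",
    "owner": "finance-platform@example.com",
    "system_access": ["erp:read"],
}

# Default profiles by identity type: agents start tighter than humans
# (short sessions, no default scopes, registration and owner required).
DEFAULT_PROFILES = {
    "human":   {"session_ttl_hours": 8,  "default_scopes": ["self:read"],
                "requires_owner": False},
    "service": {"session_ttl_hours": 24, "default_scopes": [],
                "requires_owner": False},
    "agent":   {"session_ttl_hours": 1,  "default_scopes": [],
                "requires_owner": True, "requires_registration": True},
}

def validate_entry(entry, profile):
    """Reject catalog entries missing the fields their identity type requires."""
    if profile.get("requires_owner") and not entry.get("owner"):
        raise ValueError("agent entries must name an owner")
    return True

print(validate_entry(AGENT_CATALOG_ENTRY, DEFAULT_PROFILES["agent"]))
```

Keeping the form this small was deliberate: three fields is a bar teams will actually clear, and it buys the inventory and ownership data everything else depends on.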
What we’re still figuring out:
- How to trace agent behavior back to specific model versions and prompts (our audit logs show “agent_id” but not enough context)
- Whether to build custom infrastructure or adopt a vendor solution (we’re evaluating Strata, WorkOS FGA, and Curity)
- How to handle agents that need cross-system orchestration (do they get one identity per system or federated identity?)
- The organizational question: Should platform, security, or individual teams own agent governance?
The Bigger Question
Are we ready for non-human workers with the same access privileges as senior engineers?
Because that’s what we’re building. These aren’t scripts. They’re autonomous systems that make decisions, access sensitive data, and take actions with real business consequences.
Our IAM systems were designed for a world where humans were the only actors. That world is gone. The question is whether we update our infrastructure to match reality — or keep pretending agents are just another API integration.
What’s your organization doing about agent identity? Are you treating them as users, services, or something new entirely?