We’re deploying AI agents faster than we can secure them. Here’s the uncomfortable truth from the latest State of AI Agent Security report: 81% of engineering teams are past the planning phase with AI agents, yet only 14.4% have full security approval. That’s not a small gap—that’s a governance crater.
I’m leading digital transformation at a Fortune 500 financial services company, and this is keeping me up at night. We have teams spinning up AI coding assistants, data analysis agents, customer service bots—all useful, all moving fast. But when our CISO asks, “Who has access to what data? What can these agents actually do? How do we audit them?”—the answer is usually silence.
The Identity Problem We’re Ignoring
Here’s what most organizations do: they treat AI agents as extensions of human users (the agent uses Alice’s credentials) or as generic service accounts (the agent authenticates as api-bot-123). According to the research, only 21.9% of teams treat AI agents as independent, identity-bearing entities with their own access controls.
This worked when we had 5 agents. It doesn’t work when we have 500.
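To make the distinction concrete, here’s a minimal sketch of what “independent, identity-bearing entity” means in practice. Everything here is illustrative, not any vendor’s API: the agent holds its own scoped, expiring grants and a link to an accountable human sponsor, instead of borrowing Alice’s token.

```python
from dataclasses import dataclass

# Hypothetical sketch: an agent as a first-class principal, not a borrowed login.
# Names (AgentIdentity, sponsor, scopes) are illustrative assumptions.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str        # its own identity, e.g. "agent:code-review-07"
    sponsor: str         # accountable human owner, NOT whose credentials it uses
    scopes: frozenset    # explicit, least-privilege grants
    expires_at: str      # credentials rotate and expire like any machine identity

def can_access(agent: AgentIdentity, scope: str) -> bool:
    """Access is decided against the agent's own grants, never the sponsor's."""
    return scope in agent.scopes

reviewer = AgentIdentity(
    agent_id="agent:code-review-07",
    sponsor="alice@example.com",
    scopes=frozenset({"repo:read", "pr:comment"}),
    expires_at="2026-07-01T00:00:00Z",
)
assert can_access(reviewer, "repo:read")
assert not can_access(reviewer, "customer-data:read")  # Alice can; her agent can't
```

The point of the model: when the agent misbehaves, the audit trail names the agent and its sponsor, and revoking it doesn’t mean locking Alice out.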
Shadow AI Is Real
The data gets worse: the majority of agents are being deployed at the departmental or team level, bypassing official security vetting entirely. Sound familiar? It’s the same pattern that gave us Shadow IT a decade ago, except now the “shadow applications” can read your codebase, access customer data, and make decisions autonomously.
57.4% of builders cite lack of logging and audit trails as a primary obstacle. Translation: we’re shipping AI agents we can’t audit.
Why This Matters (Especially in Regulated Industries)
June 2026: Colorado’s Artificial Intelligence Act (CAIA) takes effect, mandating disclosure requirements for AI systems that interact with consumers.
Right now: SOX compliance requires controls over systems that influence financial reporting. If an AI agent can modify financial data, access controls, or impact reporting flows—congratulations, you have a SOX-relevant internal control risk.
When our CFO asked, “If we can’t prove what our AI agents did, how do we pass audit?”—that’s when this went from engineering concern to executive priority.
RBAC Alone Isn’t Enough
Traditional role-based access control assumes you know who (or what) is accessing systems and can assign them to predefined roles. But AI agents:
- Can spawn other agents
- Have dynamic, context-dependent permission needs
- Require behavior and intent-based analysis, not just identity verification
- Cross boundaries between systems in ways users don’t
The security model needs to shift from “who are you?” to “who are you, what are you trying to do, and does that behavior pattern make sense?”
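A rough sketch of that shift, with all names hypothetical: the decision layers a behavioral check on top of the classic RBAC grant, so an action that passes the identity check but falls outside the agent’s normal pattern gets escalated rather than silently allowed.

```python
from dataclasses import dataclass

# Illustrative only: an access decision that weighs identity, the requested
# action, and whether the behavior fits the agent's observed pattern.

@dataclass
class AccessRequest:
    agent_id: str
    action: str           # e.g. "read:customer_pii"
    declared_intent: str  # e.g. "generate monthly churn report"

# Static grants: the classic RBAC layer ("who are you?")
GRANTS = {"agent:analytics-01": {"read:customer_pii", "read:usage_metrics"}}

# Behavioral baseline: actions this agent normally performs (learned or configured)
BASELINE = {"agent:analytics-01": {"read:usage_metrics"}}

def authorize(req: AccessRequest) -> str:
    if req.action not in GRANTS.get(req.agent_id, set()):
        return "deny"              # fails even the identity/role check
    if req.action not in BASELINE.get(req.agent_id, set()):
        return "flag_for_review"   # permitted, but out of pattern: escalate
    return "allow"

assert authorize(AccessRequest("agent:analytics-01", "read:usage_metrics", "weekly report")) == "allow"
assert authorize(AccessRequest("agent:analytics-01", "read:customer_pii", "churn report")) == "flag_for_review"
assert authorize(AccessRequest("agent:analytics-01", "write:ledger", "fix entry")) == "deny"
```

In a real system the baseline would come from anomaly detection over the audit log, not a hardcoded set; the structural point is the middle branch, which plain RBAC doesn’t have.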
The Platform Team Question
Platform teams built IAM systems for humans and service-account management for applications. Now we need something new: AI agent identity governance that includes:
- Unified inventory: Track human and non-human identities in one place
- Agent-specific RBAC: Permissions that understand agent capabilities and risks
- Quota management: Prevent runaway agent usage (cost and security)
- Audit trails: Every action logged with agent identity and context
- Lifecycle management: Joiner-mover-leaver processes for agents, not just humans
- Behavioral analysis: Flag anomalous agent activity, not just credential misuse
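Tying the audit-trail and lifecycle items together, here’s a hedged sketch (field names are my assumptions, not a standard) of the kind of structured event every agent action would emit: the agent’s own identity, its accountable sponsor, and enough context to reconstruct “who did what, and why” at audit time.

```python
import json
import time

# Hypothetical audit event shape: one line per agent action, machine-parseable,
# carrying agent identity and context rather than a shared service account name.

def audit_event(agent_id: str, sponsor: str, action: str,
                resource: str, context: str) -> str:
    event = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "agent_id": agent_id,   # the agent itself, not "api-bot-123"
        "sponsor": sponsor,     # accountable owner, for lifecycle management
        "action": action,
        "resource": resource,
        "context": context,     # declared task/intent, feeds behavioral analysis
    }
    return json.dumps(event)

line = audit_event("agent:cs-bot-12", "support-platform-team",
                   "read", "crm:ticket/4821", "draft customer reply")
print(line)
```

Events in this shape are what make the CFO’s question answerable: an auditor can filter by `agent_id`, roll up by `sponsor`, and diff actual actions against declared context.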
Microsoft and OpenAI are adding governance tools directly into their platforms. The build vs. buy decision is coming fast.
My Question for This Community
Should platform teams build AI agent governance infrastructure now—or wait for industry standards to emerge?
On one hand: We’re early. Standards will evolve. Building now means rebuilding later.
On the other hand: 88% of organizations have already experienced confirmed or suspected security incidents related to AI agents this year. The breach might come before the standard.
What are others doing? Are you treating agents as users? Service accounts? Building custom identity systems? Waiting for vendors to solve it?
And the harder question: How do you implement governance that enables speed rather than blocking it?
I don’t have all the answers. But I know we can’t keep deploying agents at scale while treating identity and access control as an afterthought.
Looking forward to hearing how others are tackling this.