I’ve been in technology leadership for 25 years, and I’ve watched every major “shadow” wave hit enterprise engineering: shadow IT, shadow SaaS, shadow cloud accounts. Each time, the pattern is the same—teams move fast to solve immediate problems, governance catches up 18-24 months later, and the cleanup bill is enormous.
Agent sprawl is that wave right now, and it’s moving faster than anything before it.
The Numbers Should Alarm You
According to Gravitee’s State of AI Agent Security 2026 report, more than 3 million AI agents are now operating within corporations, and only 47.1% are actively monitored or secured. That leaves more than 1.5 million agents running without active monitoring—accessing sensitive data, making decisions, and connecting to critical systems with no audit trail.
And let this sink in: 29% of enterprise agents operate with zero oversight whatsoever. Not minimal oversight. Zero.
And this isn’t a security team problem alone. Gartner’s Predicts 2026 report explicitly warns organizations to “secure AI agents to avoid ungoverned sprawl and abuses.” The average organization now manages 37 deployed agents, and each one is an unmapped access path.
Why Traditional RBAC Won’t Save You
Here’s what keeps me up at night as a CTO: our existing identity and access management infrastructure was designed for human users. Static role-based access control assigns permissions based on job titles and team membership. But AI agents don’t have job titles. They don’t clock out. They operate at machine speed across system boundaries that humans would never cross in a single session.
The industry is starting to recognize this. Microsoft just released their Agent Governance Toolkit—an open-source project addressing all 10 OWASP agentic AI risks with sub-millisecond policy enforcement. Cisco announced a full security reimagining for what they call the “agentic workforce.” Solutions like Ory’s Keto are moving toward relationship-based access control (ReBAC) because static RBAC is fundamentally inadequate for autonomous agents.
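To make the RBAC-vs-ReBAC distinction concrete, here is a minimal sketch of a relationship-based check in the tuple style ReBAC systems use. This is not Ory Keto's actual API—the `RelationTuple` shape, the example object names, and the `check` function are illustrative assumptions—but it shows why ReBAC fits agents better than job-title roles: access derives from explicit relationships (an agent is a member of a team, the team can read a dataset), which can be created, traversed, and revoked per agent.

```python
# Sketch: relationship-based access control (ReBAC) with (object, relation,
# subject) tuples. Illustrative only -- NOT the Ory Keto API; all names here
# (dataset:billing, agent:invoice-summarizer, etc.) are hypothetical.

from typing import NamedTuple

class RelationTuple(NamedTuple):
    object: str     # e.g. "dataset:billing"
    relation: str   # e.g. "reader"
    subject: str    # a direct subject, or a userset like "team:payments#member"

TUPLES = {
    # The agent is a member of the payments team:
    RelationTuple("team:payments", "member", "agent:invoice-summarizer"),
    # Anyone who is a member of team:payments may read the billing dataset:
    RelationTuple("dataset:billing", "reader", "team:payments#member"),
}

def check(obj: str, relation: str, subject: str, depth: int = 5) -> bool:
    """Does `subject` hold `relation` on `obj`, directly or via a userset?"""
    if depth == 0:
        return False  # guard against cyclic relationship graphs
    if RelationTuple(obj, relation, subject) in TUPLES:
        return True
    # Follow indirections: tuples whose subject is "object#relation"
    for t in TUPLES:
        if t.object == obj and t.relation == relation and "#" in t.subject:
            via_obj, via_rel = t.subject.split("#", 1)
            if check(via_obj, via_rel, subject, depth - 1):
                return True
    return False

print(check("dataset:billing", "reader", "agent:invoice-summarizer"))  # True
print(check("dataset:billing", "writer", "agent:invoice-summarizer"))  # False
```

The useful property for agents: revoking one membership tuple instantly severs every permission the agent held through that relationship—no role redesign required.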
But how many of your engineering orgs have actually implemented any of this? Be honest.
The Organizational Blind Spot
What I find most concerning is the organizational dimension. 80% of organizations report risky behaviors from their AI agents—unauthorized data access, unexpected system interactions—but only 21% have mature governance models in place, and just 24.4% have full visibility into which agents are communicating with each other.
This means your engineering teams are deploying agents that talk to other agents, access production data, and make automated decisions… and three-quarters of you don’t even know which agents are talking to which.
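Closing that visibility gap doesn't require exotic tooling. If agent calls already flow through a gateway or tracing layer, you can fold the logs into an interaction graph and diff it against what teams declared. A minimal sketch—the log fields and agent names are hypothetical assumptions, not a specific product's schema:

```python
# Sketch: build an agent-to-agent interaction graph from call logs and flag
# edges nobody declared. Log format and agent names are illustrative.

from collections import defaultdict

# Declared topology from a (hypothetical) agent inventory:
DECLARED = {("agent:triage", "agent:summarizer")}

# Observed runtime events, e.g. from gateway or tracing logs:
events = [
    {"caller": "agent:triage", "callee": "agent:summarizer"},
    {"caller": "agent:triage", "callee": "agent:summarizer"},
    {"caller": "agent:summarizer", "callee": "agent:crm-writer"},  # undeclared
]

def build_graph(events):
    """Count observed (caller, callee) edges."""
    graph = defaultdict(int)
    for e in events:
        graph[(e["caller"], e["callee"])] += 1
    return dict(graph)

def undeclared_edges(graph, declared):
    """Edges seen at runtime that no team registered -- your blind spots."""
    return sorted(edge for edge in graph if edge not in declared)

graph = build_graph(events)
print(undeclared_edges(graph, DECLARED))
# [('agent:summarizer', 'agent:crm-writer')]
```

Even this crude diff answers the question most organizations can't: which agents are talking to which, and which of those conversations nobody approved.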
With EU AI Act high-risk enforcement arriving August 2, 2026, regulators aren’t asking for policy documents anymore. They want runtime operational evidence. If you can’t demonstrate governance at the agent layer, you’re looking at real liability.
What I’m Doing About It (And Where I’m Stuck)
At my company, we’ve started treating every AI agent as a first-class identity in our IAM system—same as a human employee or a service account, but with agent-specific constraints:
- Agent inventory: Every deployed agent must be registered with its purpose, data access scope, and owning team
- Scoped permissions: Agents get the minimum access needed, reviewed quarterly (not the broad API keys we used to hand out)
- Runtime monitoring: We’re instrumenting agent-to-agent communication to build an interaction graph
- Kill switches: Every agent has a human-accessible emergency stop that doesn’t require the deploying team
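To show how these four controls compose, here is an in-memory sketch of an agent registry—every class, field, and scope name is an illustrative assumption, not our production system or any vendor's API. The point is the shape: inventory as a precondition for access, scopes that lapse unless re-reviewed, and a kill switch that bypasses the deploying team.

```python
# Sketch: agent inventory + scoped, expiring grants + kill switch, composed
# into one registry. All names and fields are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentRecord:
    agent_id: str
    purpose: str
    owning_team: str
    scopes: set = field(default_factory=set)  # minimum-necessary access only
    scopes_expire: datetime = None            # forces the quarterly re-review
    killed: bool = False                      # emergency-stop flag

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent_id, purpose, owning_team, scopes):
        # Inventory: nothing runs without purpose, owner, and scope on record.
        expires = datetime.now(timezone.utc) + timedelta(days=90)
        self._agents[agent_id] = AgentRecord(
            agent_id, purpose, owning_team, set(scopes), expires)

    def authorize(self, agent_id, scope):
        rec = self._agents.get(agent_id)
        if rec is None or rec.killed:
            return False  # unregistered or stopped agents get nothing
        if rec.scopes_expire and datetime.now(timezone.utc) > rec.scopes_expire:
            return False  # grants lapse unless explicitly renewed
        return scope in rec.scopes

    def kill(self, agent_id):
        # Kill switch: callable by anyone with registry access,
        # not just the deploying team.
        self._agents[agent_id].killed = True

reg = AgentRegistry()
reg.register("agent:invoice-summarizer", "summarize invoices",
             "team:payments", {"read:billing"})
print(reg.authorize("agent:invoice-summarizer", "read:billing"))   # True
print(reg.authorize("agent:invoice-summarizer", "write:billing"))  # False
reg.kill("agent:invoice-summarizer")
print(reg.authorize("agent:invoice-summarizer", "read:billing"))   # False
```

In practice you'd back this with your real IAM system rather than a dict, but the invariants—register before run, least privilege, expiring grants, out-of-band stop—are the whole model.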
But I’ll be transparent—we’re maybe 40% of the way there. The biggest challenge isn’t technical. It’s cultural. Engineers resist governance because it feels like friction. Product teams want to ship agent-powered features yesterday. And the new roles we need—orchestration engineers, responsible AI engineers—are roles we’re still figuring out how to hire for and where to place in the org chart.
Questions for This Community
- Do you have an agent inventory? Can you tell me right now how many AI agents are running in your production environment and what data they access?
- Who owns agent governance in your org? Is it security? Platform engineering? A new dedicated team? Nobody?
- How are you handling the RBAC gap? Are you extending existing IAM, or building something agent-native?
- What’s your EU AI Act readiness for agent deployments specifically?
I suspect most honest answers will be uncomfortable. That’s okay—that’s exactly where productive conversations start.
The cost of getting this wrong is $4.6M per breach on average, and we’re all sitting on unmapped blast radii. I’d rather have the uncomfortable conversation now than the incident retrospective later.