I’ve spent the last quarter watching our customers’ expectations shift in a way I haven’t seen since we went cloud-native. They’re not asking “can your platform support AI features?” anymore. They’re asking “can your platform treat AI agents like users?”
That question hit differently when our enterprise customer’s agent made 47,000 API calls in 3 hours last Tuesday. No quota. No rate limit. No clear owner. Just runaway costs and a support ticket asking “whose agent is this?”
The Identity Crisis We’re Not Talking About
Gartner projects 40% of enterprise applications will embed task-specific AI agents by 2026, up from less than 5% in 2025. We’re not talking about ChatGPT wrappers—we’re talking about agents that deploy code, modify databases, and orchestrate entire subsystems autonomously.
But here’s the uncomfortable reality: only 18% of security leaders are confident their current IAM systems can effectively manage agent identities. The rest? They’re using workarounds:
- 44% use static API keys
- 43% use username/password combinations
- 35% rely on shared service accounts
This is the 2026 equivalent of sharing your admin password with the team.
What “First-Class Platform Citizens” Actually Means
When platform teams say they’ll treat agents as first-class citizens, what does that concretely mean? Based on emerging practices at RSA 2026, here’s the bar:
1. Identity Management (Not Just API Keys)
Every agent gets a managed identity—Microsoft calls this Entra Agent ID. Think of it like giving every agent its own employee badge with scoped access, not just handing out master keys.
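Setting aside the specifics of any one vendor’s implementation, the shape of a managed agent identity is consistent: a scoped, expiring credential tied to an accountable owner, not a permanent key. A minimal sketch (all names and fields are hypothetical, not any product’s actual API):

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    """A managed identity: scoped, expiring, and tied to an owner."""
    agent_id: str
    owner: str                # accountable human or team
    scopes: frozenset         # least-privilege grants, not "admin"
    ttl_seconds: int = 3600   # credentials expire; no permanent keys

    def issue_token(self) -> dict:
        # Short-lived token bound to this identity's scopes.
        return {
            "sub": self.agent_id,
            "scopes": sorted(self.scopes),
            "exp": time.time() + self.ttl_seconds,
            "token": secrets.token_urlsafe(32),
        }

ci_agent = AgentIdentity(
    agent_id="agent-ci-deploy",
    owner="platform-team",
    scopes=frozenset({"deploy:staging", "read:artifacts"}),
)
tok = ci_agent.issue_token()
```

The “employee badge” framing falls out of the structure: the badge names who you are, who vouches for you, what doors you can open, and when it expires.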
2. RBAC Permissions Like Any User Persona
Agents need role-based access controls. Your CI/CD agent shouldn’t have the same permissions as your customer support agent. Agentic RBAC moves from static permissions to context-aware, dynamic authorization.
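The shift from static to context-aware authorization can be sketched in a few lines: the decision depends on the agent’s role grant *and* on runtime context like environment or spend. Role names and context fields below are illustrative assumptions, not a specific product’s policy language:

```python
# Static role grants, as in conventional RBAC.
ROLE_PERMISSIONS = {
    "ci-agent": {"deploy:staging", "read:artifacts"},
    "support-agent": {"read:tickets", "write:replies"},
}

def authorize(role: str, action: str, context: dict) -> bool:
    """Agentic RBAC sketch: role grant plus dynamic context checks."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        return False
    # Dynamic checks layered on top of the static grant.
    if context.get("environment") == "production" and not context.get("change_window_open"):
        return False  # even a valid role can't deploy outside a change window
    if context.get("daily_spend_usd", 0) >= context.get("budget_usd", float("inf")):
        return False  # authorization fails once the budget is exhausted
    return True
```

The point of the sketch: the same agent with the same role gets different answers depending on context, which a static permission list cannot express.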
3. Resource Quotas and Cost Governance
The Tuesday incident? Solved with quotas. Each agent gets 10,000 API calls per hour and a $50 compute budget per day. Hard stops, not guidelines. Platform teams are implementing FinOps for agents just like they did for cloud resources.
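A hard-stop quota is mechanically simple: a rolling-window call counter plus a spend counter, and the agent’s request is rejected when either is exhausted. A minimal sketch (class and parameter names are my own, chosen to mirror the numbers above):

```python
import time

class AgentQuota:
    """Hard-stop quota: N calls per rolling hour plus a daily budget."""

    def __init__(self, calls_per_hour=10_000, daily_budget_usd=50.0):
        self.calls_per_hour = calls_per_hour
        self.daily_budget_usd = daily_budget_usd
        self.call_times = []   # timestamps of calls in the current window
        self.spend_usd = 0.0

    def allow(self, cost_usd, now=None):
        now = time.time() if now is None else now
        # Drop calls that have aged out of the rolling one-hour window.
        self.call_times = [t for t in self.call_times if now - t < 3600]
        if len(self.call_times) >= self.calls_per_hour:
            return False  # hard stop, not a guideline
        if self.spend_usd + cost_usd > self.daily_budget_usd:
            return False  # budget exhausted for the day
        self.call_times.append(now)
        self.spend_usd += cost_usd
        return True

# A tiny quota makes the cutoff visible: three calls allowed, then denied.
quota = AgentQuota(calls_per_hour=3, daily_budget_usd=1.0)
results = [quota.allow(0.10, now=1000.0) for _ in range(5)]
```

With this in place, the 47,000-call incident becomes a denied request and an alert at call 10,001 instead of a surprise bill.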
4. Audit Trails and Accountability
When an agent fails, who’s responsible? The engineer who wrote it? The team that deployed it? The PM who prioritized it? Governance frameworks require clear ownership—“agents without owners” is the #1 risk RSA identified.
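One way platform teams enforce this is structurally: the agent registry refuses to register an agent that lacks a named owner and an escalation path, so “agents without owners” can’t exist in the first place. A sketch, with hypothetical field names:

```python
# Agent registry sketch: registration fails without accountability fields.
REGISTRY = {}

def register_agent(agent_id: str, owner: str, escalation_contact: str):
    """Admit an agent only if a human owner and escalation path exist."""
    if not owner or not escalation_contact:
        raise ValueError(f"agent {agent_id!r} must have an accountable owner")
    REGISTRY[agent_id] = {"owner": owner, "escalation": escalation_contact}

register_agent(
    "agent-ci-deploy",
    owner="jane@example.com",
    escalation_contact="#platform-oncall",
)
```

The governance question (engineer, team, or PM?) is an organizational decision, but the platform can at least guarantee the field is never blank.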
The Product Implications Nobody’s Pricing Yet
From a product perspective, this isn’t just infrastructure work—it’s customer trust and unit economics:
Customer Trust: “Where did your AI agent learn about our data?” becomes the new security questionnaire question. Explainable AI and audit trails aren’t nice-to-haves; they’re table stakes for enterprise sales.
Cost Predictability: You can’t run a SaaS business when agents blow through compute budgets unpredictably. Agent quotas are the new seat-based pricing—customers expect transparency on what agents cost.
Support and Debugging: When customers report “your AI did something wrong,” can you trace it? Agent audit logs are your new support runbooks.
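Tracing only works if every agent action carries the agent’s identity and a correlation ID, so a customer complaint maps back to a concrete list of actions. A toy illustration of that lookup (log schema is assumed, not prescribed):

```python
# Audit-trail sketch: each entry records who (agent), which request
# (trace), and what was done (action).
audit_log = [
    {"agent": "support-agent-7", "trace": "req-123", "action": "read:ticket/4521"},
    {"agent": "ci-agent-2",      "trace": "req-124", "action": "deploy:staging"},
    {"agent": "support-agent-7", "trace": "req-123", "action": "write:reply/4521"},
]

def trace_request(log, trace_id):
    """Everything agents did during one customer-visible request."""
    return [(e["agent"], e["action"]) for e in log if e["trace"] == trace_id]

actions = trace_request(audit_log, "req-123")
```

When the customer says “your AI did something wrong on ticket 4521,” support pulls the trace instead of guessing.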
The Hard Question for Platform Teams
Most internal developer platforms were designed for human developers. The assumptions are everywhere:
- Login flows expect humans with browsers
- Rate limits assume human typing speed
- RBAC expects org charts and managers
- Audit logs assume “who” is a person
What happens when non-human actors generate more API calls than humans? At several organizations tracked by platform engineering researchers, that crossover had already happened by Q2 2026: agents were generating more platform API calls than the developers themselves.
Your platform needs to answer:
- Identity: How do agents authenticate? (Not static keys)
- Authorization: What can each agent do? (Not “admin” roles)
- Accountability: Who owns this agent? (Not “the team”)
- Economics: What does this agent cost? (Not “overhead”)
- Observability: What is this agent doing right now? (Not “black box”)
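The five questions above can be collapsed into a single agent manifest that the platform validates before anything runs: an agent with an unanswered question simply doesn’t get provisioned. A sketch, with hypothetical field names:

```python
# One required field per question: identity, authorization,
# accountability, economics, observability.
REQUIRED_FIELDS = {"identity", "roles", "owner", "budget_usd", "telemetry_endpoint"}

def validate_manifest(manifest: dict) -> list:
    """Return the unanswered questions for this agent, if any."""
    return sorted(REQUIRED_FIELDS - manifest.keys())

manifest = {
    "identity": "agent-ci-deploy",
    "roles": ["ci-agent"],
    "owner": "platform-team",
    "budget_usd": 50.0,
    "telemetry_endpoint": "https://obs.internal/agents",
}
missing = validate_manifest(manifest)  # empty list means provisionable
```

This is the same move platforms made with deployment manifests for services: the checklist stops being a wiki page and becomes a gate.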
Where’s Your Organization on the Maturity Curve?
I’ve been using this rough framework with customers:
Level 0 - Unaware: “We don’t really have agents” (but engineers are deploying them anyway)
Level 1 - Ad Hoc: API keys in .env files, no governance, no visibility
Level 2 - Reactive: We know agents exist, we track them in a spreadsheet, incidents drive policy
Level 3 - Governed: Agent registry, RBAC, quotas, ownership model, but manual provisioning
Level 4 - Platform-Native: Self-service agent creation with security by default, automated compliance, cost attribution
Level 5 - AI-Native: Agents manage agents, dynamic optimization, agentic infrastructure
Most orgs I talk to are at Level 1-2. The platform engineering community is pushing toward Level 3-4 as the 2026 baseline.
The 2026 Reality Check
If you’re building an internal developer platform or selling infrastructure, treating AI agents as first-class citizens is no longer a roadmap item—it’s a customer expectation.
The question isn’t “should we?” It’s “how fast can we migrate from API keys to proper identity without breaking production?”
For folks building platforms or managing infrastructure: where are you on this journey? What’s your biggest blocker—technical architecture, organizational ownership, or just prioritization against other roadmap items?
I’m especially curious: how are you explaining agent cost management to customers who are used to seat-based pricing?