When our compliance team first asked about our AI agent governance framework in Q4 2025, we didn’t have one. We had agents in production, we had MCP servers connecting them to internal tools and databases, but we had zero formal governance around any of it. When the SOC 2 auditor asked “how do you control what your AI agents can access?” the honest answer was “we trust them.”
That’s not a good answer. Here’s how we went from “we trust them” to a governance framework, and what I’ve learned about the emerging compliance landscape for MCP infrastructure.
The Compliance Wake-Up Call
Let me be direct about what triggered this. Three things happened in close succession:
- Our SOC 2 Type II auditor flagged that AI agents with access to customer data weren’t covered by our existing access control documentation
- The EU AI Act high-risk rules (hitting August 2, 2026) require conformity assessments for AI systems used in employment, credit, and critical infrastructure, and we couldn’t even inventory which AI systems we had in production
- A competitor disclosed a data incident where an AI agent accessed customer records outside its intended scope
None of these were theoretical. Each one represented a concrete risk that our board, our customers, and our regulators cared about.
What SOC 2 Auditors Are Actually Asking
I’ve now been through two SOC 2 audit cycles since we started deploying AI agents. Here’s what the auditors are focused on:
Access Control (CC6.1-6.3): How do you restrict AI agent access to information? What’s the authorization model? How do you ensure agents only access data they’re supposed to?
Monitoring (CC7.1-7.3): How do you detect unauthorized or anomalous agent behavior? What alerts exist? Who reviews them?
Change Management (CC8.1): When you deploy a new agent or modify an existing agent’s capabilities, what’s the review process? Who approves granting an agent access to a new MCP tool?
Risk Assessment (CC3.1-3.4): Have you assessed the risks of AI agent operations? Do you have documented risk acceptance for autonomous agent actions?
The auditors don’t understand MCP specifically (yet), but they understand the control framework. If you can’t explain how your AI agents are governed within that framework, you have a finding.
The MCP Gateway as Compliance Infrastructure
This is where MCP gateways become strategic, not just technical. A properly configured MCP gateway gives you:
Centralized access control documentation: Instead of “Agent X has a hardcoded API key to MCP Server Y,” you can show “Agent X authenticates via OAuth, is authorized for Tools A, B, and C on Server Y, and cannot access Tools D and E.”
Complete audit trails: Every MCP call, every tool invocation, every data access, logged with agent identity, user context, timestamp, and response. This is what auditors want to see.
Enforceable policies: Rate limits, data classification enforcement, PII detection in MCP responses. These become controls you can point to in your SOC 2 documentation.
Change management integration: Adding a new tool to an agent’s allowlist becomes a documented change with approval workflows, rather than someone editing a config file.
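To make the gateway-as-control idea concrete, here’s a minimal sketch of tool-level authorization plus audit logging in gateway middleware. Everything here, the policy structure, the tool names, and the `AuditEvent` fields, is illustrative; it’s not the API of any particular gateway product.

```python
import time
from dataclasses import dataclass, field

# Hypothetical per-agent policy: which tools this agent may invoke.
# Field names are illustrative, not a real gateway's schema.
@dataclass
class AgentPolicy:
    agent_id: str
    allowed_tools: set
    max_calls_per_minute: int = 60  # declared here; enforcement omitted

@dataclass
class AuditEvent:
    agent_id: str
    user_context: str          # human on whose behalf the agent acts
    tool: str
    decision: str              # "allow" or "deny"
    timestamp: float = field(default_factory=time.time)

class GatewayMiddleware:
    def __init__(self, policies):
        self.policies = {p.agent_id: p for p in policies}
        self.audit_log = []    # in production: append-only, shipped to your SIEM

    def authorize(self, agent_id, user_context, tool):
        policy = self.policies.get(agent_id)
        allowed = policy is not None and tool in policy.allowed_tools
        self.audit_log.append(
            AuditEvent(agent_id, user_context, tool,
                       "allow" if allowed else "deny"))
        return allowed

# Usage: "support-bot" may read and search the CRM, but not delete.
gw = GatewayMiddleware([
    AgentPolicy("support-bot", {"crm.read_account", "crm.search"}),
])
print(gw.authorize("support-bot", "alice@example.com", "crm.read_account"))    # True
print(gw.authorize("support-bot", "alice@example.com", "crm.delete_account"))  # False
```

The point is that every decision, allow or deny, produces an audit record with agent identity, user context, and timestamp. That record is the artifact you hand the auditor.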
The SOC 2 Certified Gateway Market
MintMCP is currently the only MCP gateway with SOC 2 Type II certification. This matters because in regulated industries, using a SOC 2 certified gateway significantly simplifies your own audit. The vendor’s certification covers infrastructure controls that you’d otherwise need to document and test internally.
However, I want to push back on the idea that buying a certified gateway solves the compliance problem. It doesn’t. The gateway is a control, but the governance framework around it (who approves agent access, how you review agent behavior, how you respond to incidents) is organizational work that no vendor can do for you.
What We Built
Our governance framework for AI agents has four layers:
Layer 1 - Agent Registry: Every AI agent has a documented entry with its purpose, owner, data access requirements, and risk classification. Before an agent goes to production, it goes through a review that looks a lot like our access review process for human employees.
Layer 2 - MCP Gateway Policies: Tool-level authorization, rate limiting, PII detection, and anomaly alerting. We use a commercial gateway for external-facing agents and custom middleware for internal ones.
Layer 3 - Continuous Monitoring: We review agent behavior weekly. What tools are they calling? What data are they accessing? Are there patterns that suggest scope creep? This is the equivalent of periodic access reviews for human users.
Layer 4 - Incident Response: What happens when an agent does something unexpected? We have a runbook that covers agent isolation, credential rotation, audit log review, and root cause analysis.
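Here’s a sketch of how Layers 1 and 3 fit together: a registry entry per agent, and a weekly review that flags any tool the agent called but never declared. The field names and review logic are assumptions for illustration, not our actual schema.

```python
from dataclasses import dataclass

# Layer 1: a registry entry, reviewed before the agent ships.
# Fields are illustrative; adapt to your own review process.
@dataclass
class AgentRegistryEntry:
    agent_id: str
    purpose: str
    owner: str                  # accountable human, like any access grant
    declared_tools: frozenset   # data-access requirements, approved up front
    risk_class: str             # e.g. "low", "medium", "high"

def weekly_scope_review(entry, tools_called_this_week):
    """Layer 3: flag scope creep -- tools invoked but never declared."""
    undeclared = set(tools_called_this_week) - set(entry.declared_tools)
    return sorted(undeclared)

entry = AgentRegistryEntry(
    agent_id="invoice-bot",
    purpose="Answer billing questions from support tickets",
    owner="jane.doe",
    declared_tools=frozenset({"billing.read_invoice", "billing.search"}),
    risk_class="medium",
)

# Suppose the gateway audit logs show the agent also touched a payments
# tool this week -- that's a finding for the owner to review.
flagged = weekly_scope_review(entry, ["billing.read_invoice", "payments.refund"])
print(flagged)  # ['payments.refund']
```

The comparison only works because the gateway (Layer 2) produces complete logs of what was actually called; without that audit trail, Layer 3 has nothing to review against the registry.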
The Cost Question
I won’t sugarcoat this: the compliance layer adds cost. We estimate we’re spending roughly 15% of our AI infrastructure budget on governance, monitoring, and compliance. Some of my peers think that’s too high. I think it’s insurance.
The alternative is what I’ve seen at companies without governance: agents with broad, undocumented access to production systems, no audit trail, and a prayer that nothing goes wrong. When something does go wrong (and with Shadow Escape-class vulnerabilities it’s a matter of when), the cost of incident response without an audit trail will dwarf whatever you spent on governance.
Where This Goes Next
The EU AI Act deadline in August 2026 is going to force this conversation for every company shipping to European customers. You’ll need to demonstrate that your AI systems, including your agents and their MCP infrastructure, have appropriate governance. Companies that start now will have a competitive advantage. Companies that wait will be scrambling.
For CTOs reading this: put AI agent governance on your Q2 roadmap if it’s not already there. Start with the agent registry. It’s the lowest-effort, highest-value step, and it will immediately surface risks you didn’t know you had.
How are other leaders approaching this? Is anyone else finding that their existing SOC 2 controls map surprisingly well (or poorly) to AI agent governance?