For the past decade, APIs have been designed with a fundamental assumption: a human is on the other side. A developer reads documentation, writes integration code, handles edge cases, and monitors behavior. That assumption is breaking down fast.
In 2026, AI agents are becoming first-class API consumers, and two protocols are emerging as the infrastructure layer for this shift: Anthropic’s Model Context Protocol (MCP) and the AsyncAPI specification. If you’re building APIs today and not thinking about agent consumers, you’re designing for yesterday.
The N×M Problem That MCP Solves
Before MCP, every AI integration was custom. Want your LLM to read from Slack? Write a connector. Want it to query your database? Write another connector. Want it to create GitHub issues? Another one. Every tool, every data source, every service needed bespoke glue code.
This created an N×M integration problem - N models times M tools, each requiring unique wiring. OpenAI’s function-calling API and ChatGPT plugins tried to solve this, but they were vendor-specific.
MCP flips the model entirely. Think of it as USB-C for AI - a single, standardized protocol that any AI model can use to connect with any compatible tool.
How MCP Works
MCP is built on JSON-RPC 2.0 and borrows architectural ideas from the Language Server Protocol (LSP). The key insight: instead of the client knowing everything about the server, the server advertises its capabilities dynamically.
Traditional API:
- Client must know: endpoints, parameters, auth, schemas
- Client calls: specific endpoints with specific payloads

MCP:
- Server advertises: available tools, resources, prompts
- Client discovers: capabilities at runtime
- Client invokes: tools dynamically based on context
This is introspection for APIs, designed specifically for AI agents. The agent doesn’t need hardcoded knowledge of every tool - it can discover what’s available and invoke it contextually.
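To make the discovery flow concrete, here is a minimal sketch of the JSON-RPC 2.0 exchange. The `tools/list` and `tools/call` method names follow the MCP specification; the `get_weather` tool, its schema, and the helper function are hypothetical examples, not part of any real server.

```python
import json

# Agent asks the server what it can do.
discover_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The server's advertisement: a name, a natural-language description, and a
# JSON Schema for inputs, so the agent can build a call without hardcoding.
discover_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_weather",
            "description": "Return current weather for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }]
    },
}

def build_call(tool: dict, args: dict, request_id: int) -> dict:
    """Build a tools/call request, checking args against the advertised schema."""
    missing = [k for k in tool["inputSchema"].get("required", []) if k not in args]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool["name"], "arguments": args},
    }

tool = discover_response["result"]["tools"][0]
call = build_call(tool, {"city": "Berlin"}, request_id=2)
print(json.dumps(call))
```

The point is that nothing in `build_call` is specific to weather: the agent constructs the invocation entirely from what the server advertised at runtime.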
AsyncAPI: The Event-Driven Side
MCP handles the request-response pattern well, but agents also need to react to events in real-time. This is where AsyncAPI comes in.
AsyncAPI is the industry standard for defining asynchronous, event-driven APIs. While REST (OpenAPI) and GraphQL handle synchronous request-response, AsyncAPI defines how systems communicate via message brokers, event streams, and pub/sub patterns.
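A minimal AsyncAPI 3.0-style document looks like this, shown here as JSON built in Python (real specs are usually written in YAML). The channel, message, and operation names are hypothetical examples for a deployment-events feed:

```python
import json

# Sketch of an AsyncAPI 3.0-style document: a channel (where messages flow)
# and an operation (how a consumer interacts with it).
spec = {
    "asyncapi": "3.0.0",
    "info": {"title": "Deployment Events", "version": "1.0.0"},
    "channels": {
        "deploymentStatus": {
            "address": "deployments.status",
            "messages": {
                "deploymentFailed": {
                    "payload": {
                        "type": "object",
                        "properties": {
                            "service": {"type": "string"},
                            "error": {"type": "string"},
                        },
                    }
                }
            },
        }
    },
    "operations": {
        "onDeploymentFailed": {
            "action": "receive",
            "channel": {"$ref": "#/channels/deploymentStatus"},
        }
    },
}

print(json.dumps(spec, indent=2))
```

Just as OpenAPI describes endpoints, this describes channels and the messages flowing over them, which is exactly the machine-readable surface an agent needs to subscribe intelligently.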
For AI agents, this matters because:
- Agents need to subscribe to events (“notify me when a deployment fails”)
- Agent-to-agent communication is inherently asynchronous
- Real-time workflows require event-driven triggers, not polling
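The subscription pattern in the first bullet can be sketched with a toy in-process broker. A real deployment would use Kafka, MQTT, or another broker described by an AsyncAPI document; the topic name and event payload here are made up for illustration:

```python
from collections import defaultdict
from typing import Callable

# Minimal in-process broker, just to show the shape of the pattern.
class Broker:
    def __init__(self):
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every registered handler for this topic.
        for handler in self._subs[topic]:
            handler(event)

broker = Broker()
alerts = []

# "Notify me when a deployment fails" as a subscription, not a polling loop.
broker.subscribe("deployment.failed", lambda e: alerts.append(e))

broker.publish("deployment.failed", {"service": "checkout", "error": "OOMKilled"})
print(alerts)
```

The agent's handler fires the moment the event arrives; no polling loop burns API quota while nothing is happening.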
The TM Forum recently adopted AsyncAPI for telecommunications, and IBM has been investing heavily in the specification. Confluent’s research shows that 80% of AI project challenges are integration problems, not AI problems - and event-driven architecture is the answer to scaling agent communication.
The Emerging Agent Infrastructure Stack
Three protocols are converging to form the infrastructure layer for agentic AI:
| Protocol | Purpose | Pattern |
|---|---|---|
| MCP | Tool discovery & invocation | Request-Response |
| AsyncAPI | Event-driven communication | Pub/Sub, Streaming |
| A2A (Google) | Agent-to-agent coordination | Peer-to-Peer |
With Apache Kafka as the event broker underneath, this stack enables truly decoupled, scalable, multi-agent systems. Instead of point-to-point API calls, agents communicate through events, discover tools dynamically, and coordinate with each other through standardized protocols.
Industry Adoption Is Real
This isn’t vaporware. MCP adoption has been remarkable:
- OpenAI adopted MCP in March 2025
- Google DeepMind announced official MCP support for Google services
- Block, Apollo, Replit, Sourcegraph integrated early
- In December 2025, Anthropic donated MCP to the Linux Foundation (Agentic AI Foundation)
- Tens of thousands of community-built MCP servers exist today
- SDKs available in Python, TypeScript, C#, and Java
A Forum Ventures survey found 48% of senior IT leaders are prepared to integrate AI agents into operations, with 33% saying they’re very prepared.
What This Means for API Designers
If you’re building APIs today, here’s what changes:
- Machine-readable capability discovery - Your API needs to describe what it can do, not just how to call it
- Semantic tool descriptions - Agents understand natural language; your API metadata should too
- Granular permissions - Agents need scoped access, not all-or-nothing API keys
- Rate limiting for agents - An agent can make 1000 calls in the time a human makes 1; plan accordingly
- Audit trails - When an agent takes an action, you need to trace the decision chain
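Three of these concerns (scoped access, agent-aware rate limiting, audit trails) can be combined into one credential object. This is a rough sketch using a token-bucket limiter; the class, scope strings, and rate numbers are all hypothetical, not from any real auth system:

```python
import time

# Agent-scoped credential: explicit scopes, a token-bucket rate limit,
# and an audit log of every authorization decision.
class AgentCredential:
    def __init__(self, agent_id: str, scopes: set[str], rate: float, burst: int):
        self.agent_id = agent_id
        self.scopes = scopes
        self.rate = rate              # tokens refilled per second
        self.burst = burst            # bucket capacity
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.audit_log: list[tuple[float, str, bool]] = []

    def allow(self, scope: str) -> bool:
        now = time.monotonic()
        # Refill the bucket based on elapsed time, capped at burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        ok = scope in self.scopes and self.tokens >= 1
        if ok:
            self.tokens -= 1
        # Audit trail: record every decision, allowed or denied.
        self.audit_log.append((now, scope, ok))
        return ok

cred = AgentCredential("deploy-bot", scopes={"deployments:read"}, rate=5.0, burst=2)
print(cred.allow("deployments:read"))   # in scope, budget available -> True
print(cred.allow("deployments:write"))  # out of scope -> False, but still logged
```

Denied requests are logged too, which is precisely what you want when tracing why an agent did (or could not do) something.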
The Uncomfortable Question
Here’s what keeps me up at night: AI agents will multiply API consumption, not replace it. Every agent interaction could trigger dozens of API calls across multiple services. Are your APIs ready for 10x the traffic from non-human consumers who don’t read error messages?
What’s your experience with MCP or agent-ready API design?
- Are you building MCP servers for your internal tools?
- How are you thinking about agent authentication and permissions?
- Is AsyncAPI on your radar for event-driven agent communication?