Agent Protocol Fragmentation: Designing for A2A, MCP, and What Comes Next
Most teams picking an agent protocol are actually making three separate decisions at once — and treating them as one is why so many integrations break the moment a second framework enters the picture.
The three decisions are: how your agent talks to tools and data (vertical integration), how your agent collaborates with other agents (horizontal coordination), and how your agent surfaces state to a human interface (interaction layer). Google's A2A, Anthropic's MCP, and OpenAPI-based REST solve for different layers of this stack. When engineers conflate them, they either over-engineer a single-agent setup with multi-agent machinery, or under-engineer a multi-agent workflow with single-agent tooling. Both failures are expensive to refactor once in production.
The Three Layers That Protocols Actually Solve
Before comparing protocols, you need a mental model of what the agent interface stack looks like.
The tool integration layer is where an agent reaches outward to databases, APIs, and external services. This is the domain of MCP. An agent needs to read from a vector store, call a weather API, or invoke a code interpreter — MCP defines the client-server contract for that. The architecture is fundamentally vertical: one agent, many tools.
The agent coordination layer is where agents reach sideways to other agents. This is the domain of A2A. A planner agent dispatches work to a specialized code-writing agent or a document-summarization agent, tracks task lifecycle, and collects results. The architecture is fundamentally horizontal: many agents, peer relationships.
The interaction layer is where an agent surfaces state to a user interface — streams intermediate results, renders structured components, handles back-and-forth clarification. This is the domain of newer projects like AG-UI, still early in standardization.
Getting confused about which problem you're solving sends you down the wrong protocol path. An engineer who treats A2A as a replacement for MCP will end up writing agent coordination code for what is essentially a single-agent tool invocation problem.
What MCP Actually Optimizes For
MCP, released by Anthropic in November 2024, reached 97 million monthly SDK downloads by early 2026 — a number that tells you something about how badly the ecosystem needed a tool integration standard.
The core bet MCP makes is that tool access control matters more than agent identity. Its trust model is centralized: the host application (the thing running the LLM) is the authority. When an agent wants to call a tool, the host mediates that call, enforces user consent, and controls what data the server can access. Servers are trusted only to the extent the host grants them.
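The host-as-authority model can be sketched in a few lines. This is an illustrative toy, not the MCP SDK; `ToyHost`, `grant`, and `_forward_to_server` are hypothetical names standing in for the real consent UI and JSON-RPC transport. The point it shows is structural: consent is enforced in the host, so a server never sees a call the user hasn't approved.

```python
from dataclasses import dataclass, field


@dataclass
class ToyHost:
    """Mediates every tool call: servers only see requests the user approved."""
    approved_tools: set = field(default_factory=set)

    def grant(self, tool_name: str) -> None:
        """Record user consent for one tool (stand-in for a real consent prompt)."""
        self.approved_tools.add(tool_name)

    def call_tool(self, tool_name: str, arguments: dict) -> dict:
        if tool_name not in self.approved_tools:
            # Consent is checked here, in the host -- the server is never reached.
            raise PermissionError(f"user has not approved tool: {tool_name}")
        return self._forward_to_server(tool_name, arguments)

    def _forward_to_server(self, tool_name: str, arguments: dict) -> dict:
        # Stand-in for the real transport (JSON-RPC over stdio or HTTP).
        return {"tool": tool_name, "result": "ok"}


host = ToyHost()
host.grant("read_file")
print(host.call_tool("read_file", {"path": "notes.txt"}))
```

Note that the server in this sketch has no say in authorization at all, which is exactly the property that gets awkward once multiple agents share one server.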
This design works extremely well for single-agent setups — a coding assistant that reads files and executes shell commands, a customer service bot that queries a CRM. It works less well when multiple agents need to share a tool server, because the trust model doesn't cleanly handle the question of which agent's permissions apply. Some teams work around this with per-agent server instances, which is functional but operationally heavy.
MCP's transport stack (JSON-RPC 2.0 over stdio, HTTP, or SSE) is deliberately simple. The November 2025 spec update added asynchronous operations and stateless server support, which extended its reach to higher-latency tool backends. The tradeoff is that MCP has no concept of task lifecycle — it handles discrete calls, not long-running delegated work.
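The wire format is plain JSON-RPC 2.0, which is part of why the transport stack stays simple. A `tools/call` request, serialized by hand rather than through an SDK, looks roughly like this (the `tools/call` method and `name`/`arguments` params follow the published spec; the helper itself is just a sketch):

```python
import json


def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


msg = make_tool_call(1, "get_weather", {"city": "Berlin"})
print(msg)
```

Notice what is absent: there is no task ID, no state field, no way to say "check back later." Each call is a discrete request/response pair, which is precisely the lifecycle gap the asynchronous-operations update began to address.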
What A2A Actually Optimizes For
Google launched A2A in April 2025 with 50+ partner organizations and reached v1.0 in early 2026, now governed by the Linux Foundation under the Agentic AI Foundation alongside MCP.
The core bet A2A makes is that agent identity matters more than tool access control. Its trust model is distributed: agents advertise their capabilities via JSON-based "Agent Cards," negotiate what protocols they support, and authenticate with each other without relying on a central host. This is peer-to-peer identity, not client-server authorization.
A2A is designed for scenarios where you can't see inside the other agent. When your planner dispatches a task to a third-party agent at a different company, you don't know what framework that agent runs on, what tools it has, or how it handles errors internally. A2A handles this opacity by design — it specifies task lifecycle states (submitted, working, completed, failed), capability discovery, and user experience negotiation, all without requiring the remote agent to expose internals.
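The lifecycle states named above form a small state machine. The sketch below encodes only the four states mentioned here (the full A2A spec defines additional states such as input-required); the transition table is an illustrative reading of the lifecycle, not a copy of the spec:

```python
from enum import Enum


class TaskState(Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    COMPLETED = "completed"
    FAILED = "failed"


# Plausible legal transitions; terminal states allow none.
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.FAILED},
    TaskState.WORKING: {TaskState.COMPLETED, TaskState.FAILED},
    TaskState.COMPLETED: set(),
    TaskState.FAILED: set(),
}


def advance(current: TaskState, nxt: TaskState) -> TaskState:
    """Move a task to its next state, rejecting illegal transitions."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {nxt.value}")
    return nxt
```

Tracking state this way is what lets a client poll or subscribe to a remote task without ever seeing how the remote agent does its work.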
IBM launched a competing REST-based protocol called ACP in March 2025, which merged into A2A under the Linux Foundation in August 2025. This merger is worth noting because it shows the industry converging toward A2A as the canonical answer for the horizontal coordination layer — the fragmentation that seemed inevitable in early 2025 didn't materialize at the protocol level.
OpenAPI: The Lingua Franca That Predates Both
OpenAPI (now at v3.2.0) was not designed for agents, but it has become load-bearing infrastructure for the protocol stack in ways neither A2A nor MCP anticipated.
MCP server implementations frequently use OpenAPI specs to describe what tools are available. A2A Agent Cards reference OpenAPI security schemes for authentication. Agents discover and invoke capabilities by parsing OpenAPI specs at runtime — the spec becomes the interface contract between natural language reasoning and structured software.
The practical implication is that if you're building a tool server that you want to expose to agents, writing an accurate OpenAPI spec is not optional. It's the lowest-common-denominator description layer that both MCP and A2A can consume. A poorly written OpenAPI spec — vague descriptions, missing parameter documentation, inconsistent error schemas — translates directly into agent failure modes that are extremely difficult to diagnose because the failure looks like model confusion rather than documentation rot.
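Because documentation gaps surface as runtime failures, it pays to lint specs for them mechanically. A minimal sketch of such a linter, operating on an OpenAPI document loaded as a Python dict (`lint_openapi_for_agents` is a hypothetical helper, and the checks shown are a small subset of what a real linter like Spectral covers):

```python
def lint_openapi_for_agents(spec: dict) -> list:
    """Flag the documentation gaps that surface as agent 'confusion' at runtime."""
    problems = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            # An operation with no summary or description gives the model
            # nothing to reason about when choosing a tool.
            if not op.get("description") and not op.get("summary"):
                problems.append(f"{method.upper()} {path}: no description")
            for param in op.get("parameters", []):
                if not param.get("description"):
                    problems.append(
                        f"{method.upper()} {path}: parameter "
                        f"'{param['name']}' undocumented"
                    )
    return problems


spec = {
    "paths": {
        "/weather": {
            "get": {
                "summary": "Current weather for a city",
                "parameters": [{"name": "city", "in": "query"}],
            }
        }
    }
}
print(lint_openapi_for_agents(spec))  # flags the undocumented 'city' parameter
```

Running checks like this in CI catches the "documentation rot" failure mode before an agent ever misinterprets the spec.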
The Portability-vs-Capability Tradeoff
Here is the architectural tension that protocol-agnostic design must resolve: the more protocol-neutral your agent interface, the more framework-specific capability you give up, and that capability is often what makes an agent significantly more effective.
LangGraph's streaming checkpointing, CrewAI's role assignment primitives, Semantic Kernel's planner abstractions — none of these are expressible through A2A or MCP. If you need them, you need to write framework-specific code. The question is whether the portability benefit justifies the capability cost.
The answer depends on your deployment topology:
- Single framework, internal deployment: Portability isn't worth the capability cost. Use your framework's native primitives. Adopt MCP for tool integration because the ROI is high and the capability cost is low.
- Multi-framework, internal deployment: Adopt MCP broadly. Consider A2A at the boundaries between teams running different frameworks. Don't try to make everything protocol-neutral — only the handoffs.
- Cross-organization deployment: A2A is necessary at the integration boundary. Your internal implementation can use any framework. The protocol-agnostic requirement applies to the surface you expose to partners, not your entire stack.
- Regulated industries: Both protocols are now under Linux Foundation governance, which matters for procurement and compliance. This significantly reduces the risk of backing a protocol that gets abandoned.
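The topology guidance above reduces to a small decision rule. The function below is only a mnemonic restatement of those bullets (the function name and return labels are invented for illustration), but it makes the branching explicit:

```python
def recommended_protocols(framework_count: int, cross_org: bool) -> set:
    """Rule-of-thumb protocol choices from deployment topology."""
    protocols = {"MCP"}  # tool integration pays off in every topology
    if cross_org:
        # Partner-facing surfaces need A2A regardless of internal stack.
        protocols.add("A2A (boundary)")
    elif framework_count > 1:
        # Internally, A2A belongs only at inter-framework handoffs.
        protocols.add("A2A (team handoffs)")
    return protocols
```

The asymmetry is the point: MCP appears on every branch, while A2A appears only when a boundary (organizational or framework) exists.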
The worst-case pattern is designing your entire internal agent architecture around protocol-agnostic interfaces — treating A2A as an internal design pattern rather than a boundary protocol. This produces maximum portability overhead for minimum portability benefit, since internal code can always be refactored when you switch frameworks.
How to Design Your Agent Interface Layer Today
The practical architecture that has emerged from production deployments layers the protocols rather than choosing between them.
At the tool layer: Deploy MCP servers for every external data source and API your agents need. MCP has the broadest support — every major AI provider (Anthropic, OpenAI, Google, Microsoft, Amazon) adopted it by early 2026, and the 75+ official connectors in the Claude directory mean you're often starting from working code rather than scratch. Design your MCP servers with clean OpenAPI specs and explicit consent boundaries.
At the coordination layer: Introduce A2A only when you have at least two distinct agents that need to collaborate, especially across organizational or framework boundaries. The overhead of A2A for single-agent or tightly coupled multi-agent setups is real — Agent Cards add discovery complexity that a direct function call doesn't have. The crossover point is typically when you need to delegate tasks to agents you don't own or when task lifecycle tracking becomes a reliability requirement.
At the interface boundary: Think of the protocols as contract specifications, not implementation frameworks. Your internal agent code can use whatever framework gives you the best capabilities. The protocol defines the surface you expose. When you upgrade your internal implementation, the protocol surface doesn't have to change.
On streaming: Both MCP and A2A support SSE for streaming. If you're building anything that needs to show incremental progress to a user, design your protocol surface to support streaming from the start. Retrofitting streaming onto synchronous protocol surfaces is painful.
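For reference, the SSE framing both protocols rely on is simple enough to parse by hand. This is a deliberately minimal sketch that handles only `event:` and `data:` fields with blank-line delimiters (a real client would also handle `id:`, `retry:`, comments, and reconnection):

```python
def parse_sse(stream_lines):
    """Minimal SSE parser: yields one event dict per blank-line-terminated block."""
    event = {"event": "message", "data": []}
    for line in stream_lines:
        if line == "":
            # Blank line terminates an event block.
            if event["data"]:
                yield {"event": event["event"], "data": "\n".join(event["data"])}
            event = {"event": "message", "data": []}
        elif line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            # Multiple data lines in one block are joined with newlines.
            event["data"].append(line[len("data:"):].strip())


raw = ["event: progress", 'data: {"step": 1}', "", "data: done", ""]
print(list(parse_sse(raw)))
```

Designing your protocol surface around event streams like this from day one is far cheaper than bolting them onto a request/response API later.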
What Comes Next
The formation of the Agentic AI Foundation (AAIF) under the Linux Foundation in December 2025 — co-founded by Anthropic, OpenAI, and Block, with Google, Microsoft, AWS, Cloudflare, and Bloomberg as platinum members — signals that the protocol fragmentation problem is largely solved at the specification level. The industry converged on A2A and MCP faster than most predicted.
The remaining fragmentation is at the implementation layer. Two systems both claiming to support MCP may implement different subsets of the spec, handle errors differently, have inconsistent capability negotiation behavior, and fail to interoperate in ways that only surface under load. The spec is stable; the implementations aren't. This is where most practical protocol pain lives in 2026, and it's where the engineering work is.
The emerging AG-UI project, which standardizes how agents communicate state to user interfaces, is the most likely area of new specification work. If MCP handles vertical tool integration and A2A handles horizontal agent coordination, AG-UI handles the third layer — how a human stays in the loop with a running agent. That layer has no stable standard yet, and the absence is visible in every production agent deployment that has users.
Protocol-agnostic design is worth the investment for the interface boundaries you expose to the outside world. For everything inside, use the framework that makes your agents most capable — and treat the protocol as the translation layer that makes it portable.
