A2A vs MCP vs ACP — Complete Protocol Comparison (2026)

Three protocols have emerged to define how AI agents talk to tools, to services, and to each other. They're not competing — but most teams confuse them. This is the complete breakdown: what each one does, how it works technically, and how to choose.

Quick Answer

TL;DR — Three protocols, three layers

MCP: Connects an LLM to tools, APIs, and data sources. Vertical layer. Created by Anthropic (Nov 2024). The standard for model-to-tool connectivity.
A2A: Connects autonomous agents to each other as peers. Horizontal layer. Created by Google (Apr 2025). Backed by 50+ enterprise partners.
ACP: Connects agents to agents via pure REST. Horizontal layer. Created by IBM/BeeAI (2025). Simpler design, smaller ecosystem.

The key insight: MCP and A2A solve different problems at different layers of the stack and are designed to be used together. ACP is a conceptually similar alternative to A2A, most relevant within the IBM BeeAI ecosystem.

The Problem They're Solving

As AI systems grow from single models into networks of specialized agents, a critical infrastructure question emerges: how do these agents communicate?

A single agent needs to connect to external tools — file systems, APIs, databases, search engines. That's one problem. A fleet of specialized agents needs to discover each other, delegate tasks, share context, and return results — reliably, across organizations and vendor boundaries. That's a different problem entirely. These two problems require different protocol designs.

Three protocols have stepped up to address this. They were created by different organizations, for different layers of the stack, and they complement each other far more than they compete.

Deep Dive: MCP (Model Context Protocol)

MCP by Anthropic · Nov 2024 · Open standard


MCP connects an LLM to external tools, data sources, and resources. Think of it as the bridge between a model and the world it can act on — file systems, APIs, databases, and beyond.

  • Client-server architecture (host app → MCP client → MCP server → tool)
  • Three primitives: Tools, Resources, and Prompts
  • Transport: stdio (local) or HTTP + SSE (remote)
  • Direction: vertical — model gains capabilities from the world
  • Widely adopted: Claude, VS Code, Cursor, Zed, Sourcegraph, JetBrains

How MCP works

The host application (e.g., Claude Desktop or Cursor) embeds an MCP client. When the user makes a request, the client queries registered MCP servers to discover available tools. The LLM decides which tool to invoke, the client sends the call to the corresponding MCP server, and the server executes the operation and returns a structured result. This entire flow is controlled within the host's trust boundary — users explicitly choose which MCP servers to connect to.

The three core primitives are: Tools (callable functions like run_sql, fetch_url, or write_file), Resources (readable data such as documents, schemas, and configs that the model can use as context), and Prompts (reusable, server-defined prompt templates). The spec also defines Sampling, which lets an MCP server request LLM completions from the client, making the protocol bidirectional.
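On the wire, these primitives travel as JSON-RPC 2.0 messages. A minimal sketch of tool discovery and invocation: the method names ("tools/list", "tools/call") follow the MCP spec, but the run_sql tool, its arguments, and the result values are made up for illustration.

```python
import json

# Client asks a server what it offers, then invokes one tool.
# "tools/list" and "tools/call" are MCP's spec-defined methods;
# the run_sql tool and its arguments are hypothetical.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "run_sql",  # hypothetical tool exposed by the server
        "arguments": {"query": "SELECT count(*) FROM users"},
    },
}

# A conforming server replies with structured content blocks:
call_result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {"content": [{"type": "text", "text": "42"}]},
}

print(json.dumps(call_request, indent=2))
```

The LLM never speaks this wire format directly; the MCP client in the host translates between the model's tool-use output and these messages.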

MCP has the largest ecosystem of any agent protocol today. Thousands of community-built MCP servers cover Slack, GitHub, Postgres, Notion, Google Drive, Stripe, and virtually every major API surface.

Deep Dive: A2A (Agent-to-Agent Protocol)

A2A by Google · Apr 2025 · Open protocol


A2A defines how autonomous AI agents communicate with each other as peers. One agent can discover another, delegate a task, and receive results — without knowing the internal implementation of the other agent.

  • Built on HTTP + JSON + Server-Sent Events (SSE)
  • Agent Cards for decentralized capability discovery
  • Task lifecycle: submitted → working → completed (or failed, cancelled)
  • Direction: horizontal — agents coordinate as peers across boundaries
  • Launch partners: Salesforce, SAP, Workday, ServiceNow, Deloitte, KPMG, and 40+ more

How A2A works

Every A2A-compliant agent publishes an Agent Card — a JSON document at the well-known URL /.well-known/agent.json. This card describes the agent's name, capabilities (called skills), supported input/output formats, and the HTTP endpoints to use for communication. No central registry is required; discovery is decentralized by design.
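A sketch of what such a card might contain. The top-level fields (name, url, capabilities, skills) follow the A2A schema as published at launch, but treat the exact shape as an assumption and verify it against the current spec; all values below are illustrative.

```python
import json

# Illustrative Agent Card for a hypothetical research agent.
# It would be served at https://agents.example.com/.well-known/agent.json
agent_card = {
    "name": "Research Agent",
    "description": "Finds and summarizes sources on a given topic.",
    "url": "https://agents.example.com/a2a",  # endpoint for Task requests
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {
            "id": "web-research",
            "name": "Web research",
            "description": "Searches the web and returns a cited summary.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```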

An orchestrator agent fetches a target agent's card, then sends a Task — a structured JSON request describing what work to perform. Long-running tasks stream intermediate updates back via Server-Sent Events. When complete, the agent returns Artifacts — the structured output of the task. For asynchronous workflows, agents can deliver results via push notifications (webhooks) rather than requiring the caller to stay connected.
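The lifecycle above can be modeled as a small state machine. The state names mirror the text; the transition table is our own simplification, not the normative spec.

```python
# Legal transitions for an A2A task, per the lifecycle described above.
# Terminal states (completed, failed, cancelled) allow no further moves.
TRANSITIONS = {
    "submitted": {"working", "cancelled"},
    "working": {"completed", "failed", "cancelled"},
    "completed": set(),
    "failed": set(),
    "cancelled": set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a task to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "submitted"
state = advance(state, "working")    # SSE updates stream while working
state = advance(state, "completed")  # agent returns its Artifacts
```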

A2A is explicitly designed to be framework-agnostic and vendor-neutral. An agent built with LangChain can communicate with one built on AutoGen or a proprietary enterprise platform, as long as both implement the A2A spec. This was the gap MCP does not fill — MCP governs a model's relationship with its tools, not how separate agent systems talk to each other.

Deep Dive: ACP (Agent Communication Protocol)

ACP by IBM / BeeAI · 2025 · Open protocol


ACP is a REST-based protocol focused on agent interoperability, primarily within the BeeAI framework. It emphasizes simplicity: pure REST, no custom transports, local-first design.

  • REST/HTTP native — no SSE required
  • Supports synchronous (/runs) and asynchronous (/runs/async) invocation
  • Registry-based agent discovery
  • Direction: horizontal — similar scope to A2A
  • Ecosystem: IBM enterprise, BeeAI framework; earlier adoption stage than A2A

How ACP works

Agents register with a local or federated registry, exposing their capabilities. Callers discover agents through this registry and invoke them via standard HTTP. Synchronous invocations use POST /runs and receive the result immediately. Asynchronous invocations use POST /runs/async, returning a run ID that the caller can use to poll GET /runs/{id}/status for results.
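The async pattern can be sketched as follows. The endpoint paths come from the text above; the payload shape and the run_id/status field names are assumptions, and the HTTP calls are injected as plain callables so any client (or a test stub) can be plugged in.

```python
import json
import time

def start_run(http_post, payload: dict) -> str:
    """POST /runs/async and return the run ID for later polling."""
    body = http_post("/runs/async", json.dumps(payload))
    return json.loads(body)["run_id"]

def wait_for_result(http_get, run_id: str, interval: float = 2.0) -> dict:
    """Poll GET /runs/{id}/status until the run reaches a terminal state."""
    while True:
        status = json.loads(http_get(f"/runs/{run_id}/status"))
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval)
```

In practice, http_post and http_get would wrap a real HTTP client pointed at the agent host's base URL.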

The design philosophy is intentional simplicity: if you can make a standard HTTP request, you can use ACP. There are no special transports, no Agent Cards to publish, and no SSE infrastructure to maintain. This makes ACP easier to integrate with existing REST API tooling — but means discovery is registry-dependent rather than decentralized. ACP is currently most mature within the IBM BeeAI open-source ecosystem and is less widely adopted outside of it.

Side-by-Side Comparison

Property                | MCP                              | A2A                                | ACP
Created by              | Anthropic                        | Google                             | IBM / BeeAI
Released                | Nov 2024                         | Apr 2025                           | 2025
Primary purpose         | LLM ↔ Tools / Data               | Agent ↔ Agent                      | Agent ↔ Agent
Communication direction | Vertical                         | Horizontal                         | Horizontal
Transport               | stdio / HTTP+SSE                 | HTTP + SSE                         | REST / HTTP
Discovery mechanism     | None built-in                    | Agent Cards (decentralized)        | Registry-based
Streaming               | Via SSE (remote)                 | Yes (SSE)                          | Polling-based
Long-running tasks      | Limited                          | Yes (full lifecycle)               | Yes (async pattern)
Auth model              | OAuth 2.0 / host-controlled      | Enterprise auth (pluggable)        | Standard HTTP auth
Spec maturity           | Stable (v1.x)                    | Active development                 | Early
Ecosystem / adoption    | Very wide — thousands of servers | Growing fast — 50+ launch partners | Early stage — BeeAI-centric
Best for                | Tool & data integration          | Enterprise multi-agent systems     | IBM / BeeAI stack

How They Fit Together

The most important architectural insight: MCP and A2A are not alternatives — they operate at different layers of the same system. A production multi-agent deployment typically uses both.

┌────────────────────────────────────────────────────┐
│ Orchestrator Agent                                 │
│   Decides what to delegate, to whom                │
└────────────┬───────────────────────────────────────┘
             A2A  ← agent-to-agent layer (horizontal)
┌────────────┴───────────────────────────────────────┐
│ Specialist Agent  (e.g., "Research Agent")         │
│   Executes a focused, scoped task                  │
└────────────┬───────────────────────────────────────┘
             MCP  ← model-to-tool layer (vertical)
┌────────────┴───────────────────────────────────────┐
│ Tools   Web search · Database · File system · APIs │
└────────────────────────────────────────────────────┘

A concrete example: a user asks an orchestrator agent to "research recent AI governance regulations and draft a summary." The orchestrator uses A2A to delegate to a specialized research agent. That research agent, internally, uses MCP to call a web search tool, fetch a PDF, and query a vector database. It returns an Artifact to the orchestrator via A2A. The orchestrator then uses MCP to write the final document to the user's file system.
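That flow can be sketched with stubbed protocol calls. Nothing here is a real SDK surface — a2a_delegate and mcp_call stand in for actual A2A and MCP clients, and exist only to show which layer each hop belongs to.

```python
def mcp_call(tool: str, **kwargs) -> str:
    """Stub: an MCP tool invocation made inside an agent."""
    return f"<{tool} output>"

def a2a_delegate(agent, task: str) -> dict:
    """Stub: send an A2A Task to another agent and await its Artifact."""
    return agent(task)

def research_agent(task: str) -> dict:
    """Specialist: uses MCP internally, returns an A2A Artifact."""
    sources = mcp_call("web_search", query=task)
    context = mcp_call("vector_db_query", query=task)
    return {"artifact": f"Summary of {task} based on {sources} and {context}"}

def orchestrator(user_request: str) -> str:
    result = a2a_delegate(research_agent, user_request)   # A2A layer
    mcp_call("write_file", path="summary.md",
             content=result["artifact"])                  # MCP layer
    return result["artifact"]

print(orchestrator("AI governance regulations"))
```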

ACP occupies the same horizontal layer as A2A. In practice, a system would use one or the other for agent coordination, not both. ACP is the appropriate choice specifically within IBM's BeeAI ecosystem; A2A is the better default elsewhere, given its broader industry adoption.

The practical architecture: Use MCP for every agent's internal tool connectivity. Use A2A for all agent-to-agent coordination. These two choices cover the full stack for most multi-agent systems.

Choosing the Right Protocol

Use MCP when:

Your AI model needs to call external tools, read files, query a database, or interact with any API. MCP is the de facto standard for model-to-tool connectivity, with the largest available ecosystem of pre-built servers.

Use A2A when:

You're orchestrating multiple specialized agents that need to coordinate across service or organizational boundaries. A2A's Agent Cards enable decentralized discovery across heterogeneous frameworks (LangChain, AutoGen, proprietary systems).

Use ACP when:

Your team is working within the IBM BeeAI ecosystem, or you need a pure REST design without SSE infrastructure. ACP offers simpler integration for straightforward agent coordination where decentralized discovery is not required.

Use MCP + A2A when:

You're building a production multi-agent system — this is the emerging standard architecture. MCP handles each agent's tool connectivity internally; A2A handles all inter-agent coordination. Most enterprise deployments will converge on this combination.

If you're starting a new project today: implement MCP immediately for tool access, then add A2A when your architecture grows to include more than one agent. ACP is worth evaluating only if you have a specific dependency on the BeeAI framework or IBM's enterprise stack.

Why This Matters Beyond the Technical

Protocol standardization is how ecosystems consolidate. The organization whose protocol becomes the default gains structural leverage — every tool, every agent, every enterprise deployment flows through their abstraction layer. This is not a technical curiosity; it is a question of platform power.

We are watching the same dynamics that played out with HTTP, SQL, and REST — but compressed from decades into years. The decisions being made now about which protocols to adopt will shape the architecture of AI systems for the next decade. Teams that understand the protocol stack will design systems that remain interoperable as the ecosystem evolves. Teams that don't will build brittle, vendor-locked integrations that break when the landscape shifts.

This is why protocol literacy matters — not just for engineers making implementation decisions, but for anyone responsible for AI infrastructure strategy.

Frequently Asked Questions

What is the difference between A2A and MCP?

MCP connects a single LLM to external tools and data sources — it is a vertical protocol that gives a model capabilities. A2A connects autonomous AI agents to each other as peers — it is a horizontal protocol for task delegation between agents across organizational or framework boundaries. They solve different problems: MCP handles what a model can do; A2A handles how separate agent systems collaborate. They are designed to be used together in the same architecture.

Is A2A replacing MCP?

No. A2A and MCP address fundamentally different problems and are not competing. Google explicitly designed A2A to complement MCP, and Anthropic continues to develop MCP independently. In the canonical multi-agent architecture, each agent uses MCP internally for tool access, and A2A governs all inter-agent communication. Neither protocol makes the other redundant.

Can A2A and MCP be used in the same system?

Yes — and this is the recommended pattern. A2A handles agent-to-agent coordination at the orchestration layer. Each individual agent uses MCP internally to connect to the tools and data it needs to complete its work. The two protocols operate at distinct layers and do not conflict. A typical flow: orchestrator delegates via A2A → specialist agent receives task → specialist calls tools via MCP → returns Artifact to orchestrator via A2A.

What is an Agent Card in A2A?

An Agent Card is a JSON document that an A2A-compliant agent publishes at the well-known URL /.well-known/agent.json on its HTTP server. It describes the agent's name, description, skills (capabilities), supported input and output formats, and the endpoints other agents should use to send tasks. Agent Cards enable decentralized agent discovery — no central registry is required. Any agent can find and communicate with another simply by fetching its Agent Card.
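Resolving the discovery URL from an agent's address is a one-liner. Only the well-known path is fixed by the protocol; the host below is illustrative.

```python
from urllib.parse import urljoin

def agent_card_url(agent_base_url: str) -> str:
    """Build the well-known Agent Card URL for a given agent host."""
    return urljoin(agent_base_url, "/.well-known/agent.json")

# Any path on the host resolves to the same well-known location:
print(agent_card_url("https://agents.example.com/research"))
```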

What transports does MCP support?

MCP supports two transports. stdio is used for local processes: the host application and MCP server communicate via standard input/output, suitable for running MCP servers as local processes. HTTP + SSE is used for remote servers: the client sends requests over HTTP and receives streamed responses via Server-Sent Events. (Later spec revisions consolidate the remote option into a single "Streamable HTTP" transport, which still uses SSE for streamed responses.) Remote MCP servers are appropriate when the tool or data source is hosted as a web service rather than running locally.
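For the stdio transport, each JSON-RPC message is one newline-delimited line of JSON written to the server process's stdin (and read back from its stdout). A minimal framing sketch; ping is one of MCP's spec-defined utility methods.

```python
import json

def frame(message: dict) -> bytes:
    """Encode one MCP message for the stdio transport: one JSON line."""
    return (json.dumps(message) + "\n").encode("utf-8")

def unframe(line: bytes) -> dict:
    """Decode one newline-delimited JSON-RPC message."""
    return json.loads(line.decode("utf-8"))

ping = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
assert unframe(frame(ping)) == ping
```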

How does ACP differ from A2A technically?

Both protocols enable horizontal agent-to-agent communication, but differ in design philosophy. A2A uses HTTP + JSON + SSE and introduces Agent Cards for decentralized, registry-free discovery. ACP is pure REST — no SSE required — with a registry-based discovery model and a polling pattern for async results. A2A has broader industry adoption and enterprise backing. ACP offers a simpler integration surface for teams that find SSE infrastructure burdensome or that work within IBM's BeeAI ecosystem.

Who created MCP, A2A, and ACP?

MCP was created by Anthropic, released as an open standard in November 2024. A2A was created by Google, released as an open protocol in April 2025 with over 50 technology partners including Salesforce, SAP, Workday, and ServiceNow. ACP was created by IBM as part of the BeeAI open-source framework in 2025. All three are open protocols with public specifications and open-source SDKs available.

Which AI agent protocol will become the standard?

MCP has already achieved de facto standard status for model-to-tool connectivity — its ecosystem is vastly larger than any alternative. In the horizontal agent-to-agent layer, A2A has significantly stronger industry momentum than ACP, backed by Google's infrastructure and 50+ enterprise partners at launch. The most likely outcome is MCP + A2A emerging as the canonical two-layer stack for multi-agent systems, with ACP persisting as a relevant option within the IBM/BeeAI ecosystem. This mirrors how HTTP became universal while SOAP remained relevant in specific enterprise contexts.

Does A2A require a central registry?

No. A2A uses a decentralized discovery model based on Agent Cards — each agent publishes its own capabilities at /.well-known/agent.json. An orchestrator discovers an agent simply by fetching that URL. No central directory or registry is needed. This is a deliberate design choice that makes A2A resilient to single points of failure and easier to deploy across organizational boundaries where a shared registry would require coordination. ACP, by contrast, uses a registry-based model.

Is MCP safe to use in production?

MCP is designed with security in mind: the host application controls which MCP servers are registered, users must explicitly authorize connections, and OAuth 2.0 is supported for authentication with external services. The main security consideration is supply chain risk — as with any third-party integration, community-built MCP servers should be reviewed before use. The MCP specification includes guidance on trust levels and permission scoping for both local and remote servers.

