Commit e685daf

split proxy + MCP-over-ACP RFDs (#364)
* docs(rfd): Split MCP-over-ACP into separate RFD

  Extract the MCP transport mechanism from the proxy-chains RFD into a standalone mcp-over-acp.mdx. This separation reflects that MCP-over-ACP is useful for any ACP component, not just proxies.

  New mcp-over-acp.mdx covers:
  - ACP as MCP transport type (transport: acp with UUID)
  - Capability advertising via mcpCapabilities.acp
  - mcp/connect, mcp/message, mcp/disconnect protocol
  - Bridging for agents without native support

  proxy-chains.mdx now references mcp-over-acp for MCP-specific details and focuses on the proxy architecture and conductor design.

* docs(rfd): Refine MCP-over-ACP RFD
  - Rewrite status quo to clarify ACP/MCP relationship (front/behind framing)
  - Add capability advertisement flow to 'How it works' section
  - Add 'Bridging and compatibility' subsection early in the doc
  - Rename uuid field to id (no format restriction)
  - Rename acpUrl to acpId (simpler, no acp: prefix)
  - Link mcpCapabilities to schema docs

* docs(rfd): Refine proxy-chains RFD
  - Rename to 'Agent Extensions via ACP Proxies'
  - Lead with agent extensions concept, proxies as mechanism
  - Tighten status quo section (~50% shorter)
  - Move MCP-over-ACP content to FAQ with bridging explanation

* docs(rfd): Clarify component roles and conductor in proxy-chains
  - Add diagrams showing proxy chain structure and conductor abstraction
  - Define terminal vs non-terminal roles upfront
  - Explain conductor's role and reference canonical Rust implementation
  - Minor wording improvements throughout

* docs: Add mcp-over-acp RFD to navigation

* format

Co-authored-by: Ben Brandt <[email protected]>
1 parent 0120e43 commit e685daf

File tree

3 files changed

+402
-245
lines changed


docs/docs.json

Lines changed: 1 addition & 0 deletions
@@ -113,6 +113,7 @@
     "rfds/session-info-update",
     "rfds/agent-telemetry-export",
     "rfds/proxy-chains",
+    "rfds/mcp-over-acp",
     "rfds/session-usage",
     "rfds/acp-agent-registry"
   ]

docs/rfds/mcp-over-acp.mdx

Lines changed: 281 additions & 0 deletions
@@ -0,0 +1,281 @@
---
title: "MCP-over-ACP: MCP Transport via ACP Channels"
---

Author(s): [nikomatsakis](https://github.com/nikomatsakis)

## Elevator pitch

> What are you proposing to change?

Add support for MCP servers that communicate over ACP channels instead of stdio or HTTP. This enables any ACP component to provide MCP tools and handle callbacks through the existing ACP connection, without spawning separate processes or managing additional transports.

## Status quo

> How do things work today and what problems does this cause? Why would we change things?

ACP and MCP each solve one half of the problem of interacting with an agent. ACP stands "in front" of the agent, managing sessions, sending prompts, and receiving responses. MCP stands "behind" the agent, providing tools that the agent can use to do its work.

Many applications would benefit from being both "in front" of the agent and "behind" it. This would allow a client, for example, to create custom MCP tools that are tailored to a specific request and that live in the client's address space.

The only way to combine ACP and MCP today is through some sort of "backdoor", such as opening an HTTP port for the agent to connect to or providing a binary that communicates over IPC. This is not only inconvenient to implement but also means that clients cannot be properly abstracted and sandboxed, since some of the communication with the agent goes through side channels. Imagine trying to host an ACP component (client, agent, or [agent extension](./proxy-chains.mdx)) that runs in a WASM sandbox or even on another machine: for that to work, the ACP protocol has to encompass all of the relevant interactions so that messages can be transmitted properly.

## What we propose to do about it

> What are you proposing to improve the situation?

We propose adding `"acp"` as a new MCP transport type. When an ACP component (client or proxy) adds an MCP server with ACP transport to a session, tool invocations for that server are routed back through the ACP channel to the component that provided it.

This enables patterns like:

- A **client** that injects project-aware tools into every session and handles callbacks directly
- An **[agent extension](./proxy-chains.mdx)** that adds context-aware tools based on the conversation state
- A **bridge** that translates ACP-transport MCP servers to stdio for agents that don't support native ACP transport

### How it works

When the client connects, the agent advertises MCP-over-ACP support via `mcpCapabilities.acp` in its `InitializeResponse`. If supported, the client can add MCP servers to a `session/new` request with `"transport": "acp"` and an `id` that identifies the server:

```json
{
  "tools": {
    "mcpServers": {
      "project-tools": {
        "transport": "acp",
        "id": "550e8400-e29b-41d4-a716-446655440000"
      }
    }
  }
}
```

The `id` is generated by the component providing the MCP server.

When the agent connects to the MCP server, an `mcp/connect` message is sent with the MCP server's `id`. This returns a fresh `connectionId`. MCP messages are then exchanged via `mcp/message` requests. Finally, `mcp/disconnect` signals that the connection is closing.
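As an illustrative sketch, a component might assemble the `session/new` declaration like this. The helper name `acp_mcp_server` is ours, not part of the proposal; only the `transport` and `id` fields come from the example above. The RFD places no format restriction on `id`, but a UUID is an easy way to guarantee uniqueness:

```python
import json
import uuid

def acp_mcp_server() -> tuple[str, dict]:
    """Build an ACP-transport MCP server declaration for session/new.

    The providing component generates the id itself; no particular
    format is required, but a UUID guarantees uniqueness.
    """
    server_id = str(uuid.uuid4())
    declaration = {"transport": "acp", "id": server_id}
    return server_id, declaration

# Assemble the tools payload from the example above.
server_id, declaration = acp_mcp_server()
request = {"tools": {"mcpServers": {"project-tools": declaration}}}
print(json.dumps(request, indent=2))
```

The component keeps `server_id` around so it can later match the `acpId` in incoming `mcp/connect` messages back to this declaration.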

### Bridging and compatibility

Existing agents don't support ACP transport for MCP servers. To bridge this gap, a wrapper component can translate between ACP-transport MCP servers and the stdio/HTTP transports that agents already support. The wrapper spawns shim processes or HTTP servers that the agent connects to normally, then relays messages to and from the ACP channel.

We've implemented this bridging as part of the conductor described in the [Proxy Chains RFD](./proxy-chains). The conductor always advertises `mcpCapabilities.acp: true` to its clients, handling the translation transparently regardless of whether the downstream agent supports native ACP transport.

### Message flow example

```mermaid
sequenceDiagram
    participant Client
    participant Agent

    Client->>Agent: session/new (with ACP-transport MCP server)
    Agent-->>Client: session created

    Client->>Agent: prompt ("analyze this codebase")

    Note over Agent: Agent decides to use the tool
    Agent->>Client: mcp/connect (acpId: "<id>")
    Client-->>Agent: connectionId: "conn-1"

    Agent->>Client: mcp/message (list_files tool call)
    Client-->>Agent: file listing results

    Agent-->>Client: response using tool results

    Agent->>Client: mcp/disconnect (connectionId: "conn-1")
```

## Shiny future

> How will things play out once this feature exists?

### Seamless tool injection

Components can provide tools without any process management. A Rust development environment could inject cargo-aware tools, a cloud IDE could inject deployment tools, and a security scanner could inject vulnerability checks - all through the same ACP connection they're already using.

### WebAssembly-based tooling

Components running in sandboxed environments (like WASM) can provide MCP tools without needing filesystem or process-spawning capabilities. The ACP channel is their only interface, and that's sufficient.

### Transparent bridging

For agents that don't natively support ACP transport, intermediaries can transparently bridge: accepting MCP-over-ACP from clients and spawning stdio- or HTTP-based MCP servers that the agent can use normally. This provides backwards compatibility while allowing the ecosystem to adopt ACP transport incrementally.

## Implementation details and plan

> Tell me more about your implementation. What is your detailed implementation plan?

### Capability advertising

Agents advertise MCP-over-ACP support via the [`mcpCapabilities`](/protocol/schema#mcpcapabilities) field in their `InitializeResponse`. We propose adding an `acp` field to this existing structure:

```json
{
  "capabilities": {
    "mcpCapabilities": {
      "http": false,
      "sse": false,
      "acp": true
    }
  }
}
```

When `mcpCapabilities.acp` is `true`, the agent can handle MCP servers declared with `"transport": "acp"` natively - it will send `mcp/connect`, `mcp/message`, and `mcp/disconnect` messages through the ACP channel.

Clients don't need to advertise anything - they simply check the agent's capabilities to determine whether bridging is needed.
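The client-side check can be sketched as follows (the function name is ours; the field paths match the `InitializeResponse` example above, and an absent field is treated as unsupported):

```python
def needs_bridging(initialize_response: dict) -> bool:
    """Return True when ACP-transport MCP servers must be bridged.

    Reads mcpCapabilities.acp from the agent's InitializeResponse;
    missing capability fields are treated as unsupported.
    """
    caps = initialize_response.get("capabilities", {})
    mcp_caps = caps.get("mcpCapabilities", {})
    return not mcp_caps.get("acp", False)

# An agent advertising acp support needs no bridge.
native = {"capabilities": {"mcpCapabilities": {"http": False, "sse": False, "acp": True}}}
legacy = {"capabilities": {"mcpCapabilities": {"http": True}}}
print(needs_bridging(native))  # False
print(needs_bridging(legacy))  # True
```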

**Bridging intermediaries**: An intermediary that provides bridging can present `mcpCapabilities.acp: true` to its clients regardless of whether the downstream agent supports it, handling bridging transparently (see [Bridging](#bridging-for-agents-without-native-support) below).

### MCP transport schema extension

We extend the MCP JSON schema to include ACP as a transport option:

```json
{
  "type": "object",
  "properties": {
    "transport": {
      "type": "string",
      "enum": ["stdio", "http", "acp"]
    }
  },
  "allOf": [
    {
      "if": { "properties": { "transport": { "const": "acp" } } },
      "then": {
        "properties": {
          "id": {
            "type": "string"
          }
        },
        "required": ["id"]
      }
    }
  ]
}
```
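The conditional rule the schema encodes - `"transport": "acp"` implies a required string `id` - can be spelled out directly. This is a sketch only; a real implementation would validate against the full JSON Schema rather than hand-rolling the checks:

```python
def validate_mcp_server(decl: dict) -> list[str]:
    """Check an mcpServers entry against the schema rules above."""
    errors = []
    transport = decl.get("transport")
    if transport not in ("stdio", "http", "acp"):
        errors.append(f"unknown transport: {transport!r}")
    if transport == "acp" and not isinstance(decl.get("id"), str):
        # Mirrors the if/then clause: acp transport requires an id.
        errors.append("acp transport requires a string 'id'")
    return errors

print(validate_mcp_server({"transport": "acp"}))
print(validate_mcp_server({"transport": "acp", "id": "550e8400"}))  # []
```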

### Message reference

**Connection lifecycle:**

```json
// Establish MCP connection
{
  "method": "mcp/connect",
  "params": {
    "acpId": "550e8400-e29b-41d4-a716-446655440000",
    "meta": { ... }
  }
}
// Response:
{
  "connectionId": "conn-123",
  "meta": { ... }
}

// Close MCP connection
{
  "method": "mcp/disconnect",
  "params": {
    "connectionId": "conn-123",
    "meta": { ... }
  }
}
```

**MCP message exchange:**

```json
// Send MCP message (bidirectional - works agent→client or client→agent)
{
  "method": "mcp/message",
  "params": {
    "connectionId": "conn-123",
    "method": "<MCP_METHOD>",
    "params": { ... },
    "meta": { ... }
  }
}
```

The inner MCP message fields (`method`, `params`) are flattened into the params object. Whether the wrapped message is a request or a notification is determined by the presence of an `id` field in the outer JSON-RPC envelope, following JSON-RPC conventions.
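The flattening described above can be sketched as a small wrapper (the helper name is ours; the envelope fields follow the message reference, and the outer JSON-RPC `id` is included only for requests):

```python
import itertools

_next_id = itertools.count(1)

def wrap_mcp_message(connection_id: str, mcp_method: str, mcp_params: dict,
                     *, request: bool = True) -> dict:
    """Wrap an MCP message in an mcp/message envelope.

    The inner method/params are flattened into the outer params; per
    JSON-RPC conventions, an outer `id` is present only for requests,
    which is how requests are told apart from notifications.
    """
    envelope = {
        "jsonrpc": "2.0",
        "method": "mcp/message",
        "params": {
            "connectionId": connection_id,
            "method": mcp_method,
            "params": mcp_params,
        },
    }
    if request:
        envelope["id"] = next(_next_id)
    return envelope

msg = wrap_mcp_message("conn-123", "tools/list", {})
print("id" in msg)  # True: a request carries an outer JSON-RPC id
```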

### Routing by ID

The `acpId` in `mcp/connect` matches the `id` that was provided by the component when it declared the MCP server in `session/new`. The receiving side uses this `id` to route messages to the correct handler.

When a component provides multiple MCP servers in a single session, each gets a unique `id`, enabling proper message routing.

### Connection multiplexing

Multiple connections to the same MCP server are supported - each `mcp/connect` returns a unique `connectionId`. This allows scenarios where an agent opens multiple concurrent connections to the same tool server.
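Putting routing and multiplexing together, the receiving side's bookkeeping might look like this sketch (class and method names are ours, not part of the protocol):

```python
import itertools

class McpRouter:
    """Route mcp/connect and mcp/message by id on the receiving side."""

    def __init__(self):
        self._handlers = {}      # acpId -> handler callable
        self._connections = {}   # connectionId -> acpId
        self._counter = itertools.count(1)

    def register(self, acp_id: str, handler) -> None:
        # Called when the component declares the server in session/new.
        self._handlers[acp_id] = handler

    def connect(self, acp_id: str) -> str:
        # Each mcp/connect yields a fresh connectionId, so several
        # concurrent connections to one server can coexist.
        connection_id = f"conn-{next(self._counter)}"
        self._connections[connection_id] = acp_id
        return connection_id

    def dispatch(self, connection_id: str, method: str, params: dict):
        # mcp/message carries only the connectionId; map it back to
        # the server's handler via the acpId recorded at connect time.
        acp_id = self._connections[connection_id]
        return self._handlers[acp_id](method, params)

router = McpRouter()
router.register("server-a", lambda method, params: f"handled {method}")
c1 = router.connect("server-a")
c2 = router.connect("server-a")
print(c1 != c2)                                # True
print(router.dispatch(c1, "tools/list", {}))   # handled tools/list
```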

### Bridging for agents without native support

Not all agents will support MCP-over-ACP natively. To maintain compatibility, it is possible to write a bridge that translates ACP-transport MCP servers to transports the agent does support.

**Bridging approaches:**

- **Stdio shim**: Spawn a small shim process that the agent connects to via stdio. The shim relays MCP messages to/from the ACP channel. This is the most compatible approach, since all MCP-capable agents support stdio.

- **HTTP bridge**: Run a local HTTP server that the agent connects to. MCP messages are relayed to/from the ACP channel. This works for agents that prefer HTTP transport.

**How bridging works:**

When a client provides an MCP server with `"transport": "acp"` and the agent doesn't advertise `mcpCapabilities.acp: true`, a bridge can:

1. Rewrite the MCP server declaration in `session/new` to use stdio or HTTP transport
2. Spawn the appropriate shim process or HTTP server
3. Relay messages between the shim and the ACP channel

From the agent's perspective, it's talking to a normal stdio/HTTP MCP server. From the client's perspective, it's handling MCP-over-ACP messages. The bridge handles the translation transparently.
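Step 1 of that list - rewriting the declarations - can be sketched as below. The `shim_command`, the `--acp-id` flag, and the stdio `command`/`args` field names are assumptions for illustration, not part of this proposal:

```python
def rewrite_for_stdio(mcp_servers: dict, shim_command: str) -> tuple[dict, dict]:
    """Rewrite acp-transport declarations to stdio before forwarding
    session/new to an agent without native support (sketch only)."""
    rewritten, bridged = {}, {}
    for name, decl in mcp_servers.items():
        if decl.get("transport") == "acp":
            # Remember which acp id backs this server so the bridge
            # can relay mcp/message traffic for it later.
            bridged[decl["id"]] = name
            # The agent sees an ordinary stdio server; the shim process
            # relays its stdio traffic onto the ACP channel.
            rewritten[name] = {
                "transport": "stdio",
                "command": shim_command,
                "args": ["--acp-id", decl["id"]],
            }
        else:
            rewritten[name] = decl  # non-acp servers pass through untouched
    return rewritten, bridged

servers = {"project-tools": {"transport": "acp", "id": "550e8400"}}
new_servers, bridged = rewrite_for_stdio(servers, "sacp-shim")
print(new_servers["project-tools"]["transport"])  # stdio
```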

```mermaid
sequenceDiagram
    participant Client
    participant Bridge
    participant Shim as Stdio Shim
    participant Agent

    Note over Bridge: Agent doesn't support mcpCapabilities.acp
    Client->>Bridge: session/new (MCP server with acp transport)
    Bridge->>Agent: session/new (MCP server with stdio transport)
    Note over Bridge: Spawns shim for bridging

    Agent->>Shim: MCP tool call (stdio)
    Shim->>Bridge: relay
    Bridge->>Client: mcp/message
    Client-->>Bridge: tool result
    Bridge-->>Shim: relay
    Shim-->>Agent: MCP response (stdio)
```

A first implementation of this bridging exists in the `sacp-conductor` crate, part of the proposed new version of the [ACP Rust SDK](https://github.com/anthropics/rust-sdk).

## Frequently asked questions

> What questions have arisen over the course of authoring this document or during subsequent discussions?

### Why use a separate `id` instead of server names?

Server names in `mcpServers` are chosen by whoever adds them to the session, and could collide if multiple components add servers. A component-generated `id` provides guaranteed uniqueness and allows the providing component to correlate incoming messages back to the correct session context.

This also avoids a potential deadlock: some agents don't return the session ID until after MCP servers have been initialized. Using a component-generated `id` avoids any dependency on agent-provided identifiers.

### How does this relate to proxy chains?

MCP-over-ACP is a transport mechanism that works independently of proxy chains. However, proxy chains are a natural use case: a proxy can inject MCP servers into sessions it forwards, handle the tool callbacks, and use the results to enhance its transformations.

See the [Proxy Chains RFD](./proxy-chains) for details on how MCP-over-ACP enables context-aware tooling.

### What if the agent doesn't support ACP transport?

See the [Bridging for agents without native support](#bridging-for-agents-without-native-support) section above. A bridge can transparently translate ACP-transport MCP servers to stdio or HTTP for agents that don't advertise `mcpCapabilities.acp` support.

### What about security?

MCP-over-ACP has the same trust model as regular MCP: you're allowing a component to handle tool invocations. The difference is transport, not trust. Components should only add MCP servers from sources they trust, just as with stdio or HTTP transport.

## Revision history

Split from proxy-chains RFD to enable independent use of MCP-over-ACP transport by any ACP component, not just proxies.