_**NOTE:** This is a very early draft. Feel free to discuss changes, requirements, etc._
{{< callout type="info" >}}
**Protocol Revision**: 2024-11-05 (Final)
{{< /callout >}}
# Goal

The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need, in a pluggable way: it separates the concern of providing context from the LLM loop and its usage within. This specification defines the authoritative protocol requirements based on the TypeScript schema in [schema.ts](https://github.com/modelcontextprotocol/specification/blob/main/schema/schema.ts). For implementation guides and examples, visit [modelcontextprotocol.io](https://modelcontextprotocol.io).

This makes it **much** easier for anyone to script LLM applications to accomplish their custom workflows, without the application needing to directly offer a large number of integrations.
## Overview

MCP provides a standardized way for applications to:

- Share contextual information with language models
- Expose tools and capabilities to AI systems
- Build composable integrations and workflows

The protocol uses JSON-RPC 2.0 messages to establish communication between:

- **Clients**: applications that integrate with language models
- **Servers**: services that provide context and capabilities
- **Hosts**: processes that manage client connections

# Terminology

The Model Context Protocol is inspired by Microsoft's [Language Server Protocol](https://microsoft.github.io/language-server-protocol/), with similar concepts:

* **Server**: a process or service providing context via MCP.
* **Client**: the initiator of, and connection to, a single MCP server. A message sent through a client is always directed to its one corresponding server.
* **Host**: a process or service which runs any number of MCP clients. [For example, your editor might be a host, claude.ai might be a host, etc.](#example-hosts)
* **Session**: a stateful session established between one client and one server.
* **Message**: one of the following types of [JSON-RPC](https://www.jsonrpc.org/) object:
  * **Request**: includes a `method` and `params`, and can be sent by either the server or the client, asking the other for some information or to perform some operation.
  * **Response**: includes a `result` or an `error`, and is sent *back* after a request, once processing has finished (successfully or unsuccessfully).
  * **Notification**: a special type of request that does not expect a response; notifications are emitted by either the server or client to unilaterally inform the other of an event or state change.
* **Capability**: a feature that the client or server supports. When an MCP connection is initiated, the client and server negotiate the capabilities that they both support, which affects the rest of the interaction.

## Primitives

On top of the base protocol, MCP introduces these unique primitives:

* **Resources**: anything that can be loaded as context for an LLM. *Servers* expose a list of resources, identified by [URIs](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier), which the *client* can choose to read or (if supported) subscribe to. Resources can be text or binary data—there are no restrictions on their content.
* **Prompts**: prompts or prompt templates that the *server* can provide to the *client*, which the client can easily surface in the UI (e.g., as some sort of slash command).
* **Tools**: functionality that the *client* can invoke on the *server* to perform effectful operations. The client can choose to [expose these tools directly to the LLM](https://docs.anthropic.com/en/docs/build-with-claude/tool-use) too, allowing it to decide when and how to use them.
* **Sampling**: *servers* can ask the *client* to sample from the LLM, which allows servers to implement agentic behaviors without having to implement sampling themselves. This also allows the client to combine the sampling request with *all of the other context it has*, making it much more intelligent—while avoiding needlessly exfiltrating information to servers.
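For orientation, the primitives can be sketched as simplified TypeScript shapes. These are illustrative only, not the authoritative definitions in [schema.ts](https://github.com/modelcontextprotocol/specification/blob/main/schema/schema.ts); the `exampleTool` object is hypothetical.

```typescript
// Simplified, illustrative shapes for the primitives. These are NOT the
// authoritative definitions -- see schema.ts for the real protocol types.

interface Resource {
  uri: string;        // e.g. "file:///home/user/notes.txt"
  name: string;
  mimeType?: string;  // resources may be text or binary; content is unrestricted
}

interface Prompt {
  name: string;       // surfaced in the client UI, e.g. as a slash command
  arguments?: { name: string; required?: boolean }[];
}

interface Tool {
  name: string;
  description?: string;
  inputSchema: object;  // JSON Schema describing the tool's parameters
}

// A hypothetical tool a Git integration server might expose:
const exampleTool: Tool = {
  name: "commit_changes",
  description: "Commit staged changes to the local Git repository",
  inputSchema: { type: "object", properties: { message: { type: "string" } } },
};
```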
Each primitive can be summarized in a control hierarchy describing who controls it. In addition to these basic primitives, MCP offers a set of control flow messages:

* **Logging**: anything related to how the server processes logs.
* **Completion**: supports completion of server arguments on the client side.

## Learn More

Explore the detailed specification for each protocol component in the sections below.
## Error Codes

MCP uses standard JSON-RPC error codes as well as protocol-specific codes:

Standard JSON-RPC error codes:

- `-32700`: Parse error
- `-32600`: Invalid request
- `-32601`: Method not found
- `-32602`: Invalid params
- `-32603`: Internal error

All error responses MUST include:

- A numeric error code
- A human-readable message
- Optional additional error data

Example error response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Required parameter missing",
    "data": {
      "parameter": "uri"
    }
  }
}
```
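The rules above can be captured in a small helper. This is a sketch, not part of the protocol; `makeErrorResponse` and the `JsonRpcError` type are hypothetical names.

```typescript
// Hypothetical helper illustrating the error-response rules above; the
// function and type names are not part of the MCP specification itself.

interface JsonRpcError {
  jsonrpc: "2.0";
  id: number | string;
  error: {
    code: number;     // MUST be a numeric error code
    message: string;  // MUST be a human-readable message
    data?: unknown;   // optional additional error data
  };
}

function makeErrorResponse(
  id: number | string,
  code: number,
  message: string,
  data?: unknown,
): JsonRpcError {
  const error: JsonRpcError["error"] = { code, message };
  if (data !== undefined) error.data = data;  // omit "data" entirely when absent
  return { jsonrpc: "2.0", id, error };
}

// Reproduces the example response from the text:
const resp = makeErrorResponse(1, -32602, "Required parameter missing", {
  parameter: "uri",
});
```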
# Use cases

Most use cases are around enabling people to build their own specific workflows and integrations. MCP enables engineers and teams to **tailor AI to their needs.**

The beauty of the Model Context Protocol is that it's **extremely composable**. You can imagine mixing and matching *any number* of the example servers below with any one of the hosts. Each individual server can be quite simple and limited, but *composed together*, you can get a super-powered AI!
## Example servers

* **File watcher**: read entire local directories, exposed as resources, and subscribe to changes. The server can provide a tool to write changes back to disk too!
* **Screen watcher**: follow along with the user, taking screenshots automatically, and exposing those as resources. The host can use this to automatically attach screen captures to LLM context.
* **Git integration**: could expose context like Git commit history, but probably *most* useful as a source of tools, like: "commit these changes," "merge this and resolve conflicts," etc.
* **GitHub integration**: read and expose GitHub resources: files, commits, pull requests, issues, etc. Could also expose one or more tools to modify GitHub resources, like "create a PR."
* **Asana integration**: similarly to GitHub—read/write Asana projects, tasks, etc.
* **Slack integration**: read context from Slack channels. Could also look for specially tagged messages, or invocations of [shortcuts](https://api.slack.com/interactivity/shortcuts), as sources of context. Could expose tools to post messages to Slack.
* **Google Workspace integration**: read and write emails, docs, etc.
* **IDEs and editors**: IDEs and editors can be [servers](#example-hosts-clients) as well as hosts! As servers, they can be a rich source of context like: output/status of tests, [ASTs](https://en.wikipedia.org/wiki/Abstract_syntax_tree) and parse trees, and "which files are currently open and being edited?"

A key design principle of MCP is that it should be *as simple as possible* to implement a server. We want anyone to be able to write, e.g., a local Python script of 100 or fewer lines and get a fully functioning server, with capabilities comparable to any of the above.
## Example hosts

* **IDEs and editors**: An MCP host inside an IDE or editor could support attaching any number of servers, which can be used to populate an in-editor LLM chat interface, as well as (e.g.) contextualize refactoring. In future, we could also imagine populating editors' command palettes with all of the tools that MCP servers have made available.
* **claude.ai**: [Claude.ai](https://claude.ai) can become an MCP host, allowing users to connect any number of MCP servers. Resources from those servers could be automatically made available for attaching to any Project or Chat. Claude could also make use of the tools exposed by MCP servers to implement agentic behaviors, saving artifacts to disk or to web services, etc.!
* **Slack**: [Claude in Slack](https://www.anthropic.com/claude-in-slack) on steroids! Building an MCP host into Slack would open the door to much more complex interactions with LLMs via the platform—both in being able to read context from any number of places (for example, all the servers posited above), as well as being able to *take actions*, from Slack, via the tools that MCP servers expose.
# Protocol

## Initialization

MCP [sessions](lifecycle) begin with an initialization phase, where the client and server identify each other, and exchange information about their respective [capabilities](lifecycle#capability-descriptions).

The client can only begin requesting resources and invoking tools on the server, and the server can only begin requesting LLM sampling, after the client has issued the `initialized` notification:
```mermaid
sequenceDiagram
    participant client as Client
    participant server as Server

    activate client
    client -->>+ server: (connect over transport)
    client -->> server: initialize
    server -->> client: initialize_response
    client --) server: initialized (notification)

    loop while connected
        alt client to server
            client -->> server: request
            server -->> client: response
        else
            client --) server: notification
        else server to client
            server -->> client: request
            client -->> server: response
        else
            server --) client: notification
        end
    end

    deactivate server
    deactivate client
```
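The handshake in the diagram amounts to three JSON-RPC messages. The sketch below is illustrative; exact field names and capability keys are defined authoritatively in the TypeScript schema, and the client/server names shown are hypothetical.

```typescript
// Illustrative sketch of the initialization handshake messages. The exact
// field names and capability keys are defined authoritatively in schema.ts.

const initializeRequest = {
  jsonrpc: "2.0" as const,
  id: 0,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: { sampling: {} },  // capabilities this client supports
    clientInfo: { name: "example-client", version: "1.0.0" },
  },
};

const initializeResponse = {
  jsonrpc: "2.0" as const,
  id: 0,
  result: {
    protocolVersion: "2024-11-05",
    capabilities: { resources: { subscribe: true }, tools: {} },
    serverInfo: { name: "example-server", version: "1.0.0" },
  },
};

// Only after this notification may normal requests flow in either direction.
// As a notification, it carries no "id" and expects no response.
const initializedNotification = {
  jsonrpc: "2.0" as const,
  method: "notifications/initialized",
  params: {},
};
```

The negotiated capabilities from this exchange govern the rest of the session: neither side should send messages for a feature the other did not advertise.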
## Transports

An MCP server or client must implement one of the following transports. Different transports require different clients (but each can run within the same *host*).

### stdio

The client spawns the server process, manages its lifetime, and writes messages to the server on its stdin. The server writes messages back to the client on its stdout.

Individual JSON-RPC messages are sent as newline-terminated JSON over the interface.

Anything the server writes to stderr MAY be captured as logging, but the client is also allowed to ignore it completely.
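The newline-terminated framing can be sketched with a pair of small helpers. The helper names are hypothetical; any newline-delimited JSON codec works.

```typescript
// Minimal sketch of stdio framing: one JSON-RPC message per line. The helper
// names are hypothetical; the framing rule is simply newline-terminated JSON.

function encodeMessage(msg: object): string {
  // JSON.stringify never emits raw newlines, so one message occupies one line.
  return JSON.stringify(msg) + "\n";
}

function* decodeMessages(buffer: string): Generator<object> {
  for (const line of buffer.split("\n")) {
    if (line.trim().length === 0) continue;  // ignore blank lines
    yield JSON.parse(line);
  }
}

// Round-trip two messages through a single stdin-style buffer:
const wire =
  encodeMessage({ jsonrpc: "2.0", id: 1, method: "ping" }) +
  encodeMessage({ jsonrpc: "2.0", id: 1, result: {} });
const decoded = [...decodeMessages(wire)];
```

A real client would feed chunks from the child process's stdout into such a decoder, buffering until each newline arrives.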
### SSE

A client can open a [Server-Sent Events](https://en.wikipedia.org/wiki/Server-sent_events) connection to a server, which the server will use to push all of its requests and responses to the client.

Upon connection, the server MUST issue an `endpoint` event (which is specific to MCP, not a default SSE event). The `data` associated with an `endpoint` event MUST be a URI for the client to use. The endpoint can be a relative or an absolute URI, but MUST always point to the same server origin. Cross-origin endpoints are not allowed, for security.

The client MUST issue individual JSON-RPC messages through the endpoint identified by the server, using HTTP POST requests—this allows the server to link these out-of-band messages with the ongoing SSE stream.

In turn, `message` events on the SSE stream will contain individual JSON-RPC messages from the server. The server MUST NOT send a `message` event until after the `endpoint` event has been issued.

This sequence diagram shows the MCP initialization flow over SSE, followed by open-ended communication between client and server, until ultimately the client disconnects:
```mermaid
sequenceDiagram
    participant client as MCP Client
    participant server as MCP Server

    activate client
    client ->>+ server: new EventSource("https://server/mcp")
    server --) client: event: endpoint<br />data: https://server/session?id=…
    client --) server: POST https://server/session?id=…<br />{InitializeRequest}
    server --) client: event: message<br />data: {InitializeResult}
    client --) server: POST https://server/session?id=…<br />{InitializedNotification}

    loop client requests and responses
        client --) server: POST https://server/session?id=…<br />{…}
    end

    loop server requests and responses
        server -) client: event: message<br />data: {…}
    end

    client -x server: EventSource.close()

    deactivate server
    deactivate client
```
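The same-origin requirement on the `endpoint` event can be enforced with a small client-side check. This is a sketch using the standard WHATWG `URL` API; the helper name is hypothetical.

```typescript
// Hypothetical client-side check for the endpoint event's same-origin rule.
// Resolves relative endpoints against the SSE connection URL, then rejects
// any endpoint whose origin differs from the server's.

function resolveEndpoint(sseUrl: string, endpointData: string): string {
  const base = new URL(sseUrl);
  const endpoint = new URL(endpointData, base);  // handles relative URIs
  if (endpoint.origin !== base.origin) {
    throw new Error(`Cross-origin endpoint rejected: ${endpoint.origin}`);
  }
  return endpoint.href;
}

// A relative endpoint resolves against the SSE connection's origin:
const ok = resolveEndpoint("https://server/mcp", "/session?id=123");
```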
## Security and T&S considerations

This model, while making meaningful changes to productivity and product experience, is effectively a form of arbitrary data access and arbitrary code execution.

**Every interaction between MCP host and server will need informed user consent.** For example:

* Servers must only expose user data as [resources](resources) with the user's explicit consent. Hosts must not transmit that data elsewhere without the user's explicit consent.
* Hosts must not invoke tools on servers without the user's explicit consent and an understanding of what the tool will do.
* When a server initiates [sampling](sampling) via a host, the user must have control over:
  * *Whether* sampling even occurs. (They may not want to be charged!)
  * What the prompt that will actually be sampled is.
  * *What the server sees* of the completion when sampling finishes.

This latter point is why the sampling primitives do not permit MCP servers to see the whole prompt—instead, the host remains in control, and can censor or modify it at will.