Commit d06853c: Initial import

0 parents, 23 files changed: +4531 −0 lines

.github/workflows/main.yml

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
on:
  push:
    branches:
      - main

  pull_request:

jobs:
  validate:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v2
        with:
          node-version: 20
          cache: yarn

      - run: yarn install --immutable

      - run: yarn validate:schema

      - run: yarn generate:json
      - name: Verify that `yarn generate:json` did not change outputs (if it did, please re-run it and re-commit!)
        run: git diff --exit-code

      - run: yarn validate:examples

.github/workflows/site.yml

Lines changed: 79 additions & 0 deletions
@@ -0,0 +1,79 @@
# Sample workflow for building and deploying a Hugo site to GitHub Pages
name: Deploy Hugo site to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches:
      - main

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

# Default to bash
defaults:
  run:
    shell: bash

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    env:
      HUGO_VERSION: 0.134.2
    steps:
      - name: Install Hugo CLI
        run: |
          wget -O ${{ runner.temp }}/hugo.deb https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.deb \
          && sudo dpkg -i ${{ runner.temp }}/hugo.deb
      - name: Install Dart Sass
        run: sudo snap install dart-sass
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Setup Pages
        id: pages
        uses: actions/configure-pages@v5
      - name: Install Node.js dependencies
        run: "[[ -f site/package-lock.json || -f site/npm-shrinkwrap.json ]] && npm ci --prefix site || true"
      - name: Build with Hugo
        env:
          HUGO_CACHEDIR: ${{ runner.temp }}/hugo_cache
          HUGO_ENVIRONMENT: production
          TZ: America/Los_Angeles
        run: |
          hugo \
            --gc \
            --minify \
            --baseURL "${{ steps.pages.outputs.base_url }}/" \
            -s site
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./site/public

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
node_modules/

.nvmrc

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
v20.16.0

README.md

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
# Model Context Protocol spec

This repo contains the specification and protocol schema for the Model Context Protocol.

The schema is [defined in TypeScript](schema/schema.ts) first, but [made available as JSON Schema](schema/schema.json) as well, for wider compatibility.

## Prerequisites

The following software is required to work on the spec:

* TypeScript
* Node.js 20 or above
* Yarn
* TypeScript JSON Schema (for generating the JSON schema)
* Hugo (optionally, for serving the documentation)
* Go (optionally, for serving the documentation)
* nvm (optionally, for managing Node versions)

## Validating and building the spec
The following commands install the dependencies, validate the schema, and generate the JSON schema:

```bash
$ nvm install # install the correct node version
$ yarn install # install dependencies
$ yarn validate:schema # validate the schema
$ yarn validate:examples # validate the examples
$ yarn generate:json # generate the json schema
```

## Serving the documentation
The documentation lives in the `docs` folder. To serve the documentation, run the following command:

```bash
$ yarn serve:docs # serve the documentation
```

Note that this requires Hugo and Go to be installed.

docs/guide/_index.md

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
---
title: Guide
cascade:
  type: docs
---

docs/spec/_index.md

Lines changed: 171 additions & 0 deletions
@@ -0,0 +1,171 @@
---
title: Model Context Protocol Specification
cascade:
  type: docs
---

_**NOTE:** This is a very early draft. Feel free to discuss changes, requirements, etc._

# Goal
The Model Context Protocol (MCP) is an attempt to allow implementors to provide context to various LLM surfaces such as editors/IDEs, [claude.ai](https://claude.ai), etc., in a pluggable way. It separates the concerns of providing context from the LLM loop and its usage within.

This makes it **much** easier for anyone to script LLM applications for accomplishing their custom workflows, without the application needing to directly offer a large number of integrations.

# Terminology
The Model Context Protocol is inspired by Microsoft's [Language Server Protocol](https://microsoft.github.io/language-server-protocol/), with similar concepts:

* **Server**: a process or service providing context via MCP.
* **Client**: the initiator of, and connection to, a single MCP server. A message sent through a client is always directed to its one corresponding server.
* **Host**: a process or service which runs any number of MCP clients. [For example, your editor might be a host, claude.ai might be a host, etc.](#example-hosts)
* **Session**: a stateful session established between one client and server.
* **Message**: one of the following types of [JSON-RPC](https://www.jsonrpc.org/) object:
  * **Request**: includes a `method` and `params`, and can be sent by either the server or the client, asking the other for some information or to perform some operation.
  * **Response**: includes a `result` or an `error`, and is sent *back* after a request, once processing has finished (successfully or unsuccessfully).
  * **Notification**: a special type of request that does not expect a response. Notifications are emitted by either the server or client to unilaterally inform the other of an event or state change.
* **Capability**: a feature that the client or server supports. When an MCP connection is initiated, the client and server negotiate the capabilities that they both support, which affects the rest of the interaction.
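
These three message shapes can be sketched as TypeScript types. This is a sketch only: the authoritative definitions live in [schema/schema.ts](schema/schema.ts), and the example method name is illustrative.

```typescript
// Minimal JSON-RPC 2.0 message shapes as MCP uses them. A sketch only;
// the authoritative definitions are in schema/schema.ts.
type RequestId = string | number;

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: RequestId; // the presence of an id means a response is expected
  method: string;
  params?: object;
}

interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: RequestId; // matches the id of the request being answered
  result?: object; // exactly one of result/error is present
  error?: { code: number; message: string; data?: unknown };
}

// A notification is a request without an id: no response is expected.
interface JsonRpcNotification {
  jsonrpc: "2.0";
  method: string;
  params?: object;
}

// Example: the notification that completes initialization.
const initialized: JsonRpcNotification = {
  jsonrpc: "2.0",
  method: "initialized",
};
```
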

## Primitives
On top of the base protocol, MCP introduces these unique primitives:

* **Resources**: anything that can be loaded as context for an LLM. *Servers* expose a list of resources, identified with [URIs](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier), which the *client* can choose to read or (if supported) subscribe to. Resources can be text or binary data—there are no restrictions on their content.
* **Prompts**: prompts or prompt templates that the *server* can provide to the *client*, which the client can easily surface in the UI (e.g., as some sort of slash command).
* **Tools**: functionality that the *client* can invoke on the *server*, to perform effectful operations. The client can choose to [expose these tools directly to the LLM](https://docs.anthropic.com/en/docs/build-with-claude/tool-use) too, allowing it to decide when and how to use them.
* **Sampling**: *servers* can ask the *client* to sample from the LLM, which allows servers to implement agentic behaviors without having to implement sampling themselves. This also allows the client to combine the sampling request with *all of the other context it has*, making it much more intelligent—while avoiding needlessly exfiltrating information to servers.

```mermaid
flowchart LR
    server["Server\n<i>Script on local machine, web service, etc.</i>"]
    host["Host (client)\n<i>Editors, claude.ai, etc.</i>"]

    host -->|"resource queries"| server
    host -->|"invoke tools"| server
    server -->|"sampling"| host
    server -.->|"prompt templates"| host
    server -.->|"resource contents"| host
```
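
For illustration, the primitives might carry shapes like the following. The field names here are assumptions for the sake of example, not the schema's actual definitions; see [schema/schema.ts](schema/schema.ts) for those.

```typescript
// Hypothetical shapes for the four primitives; field names are
// illustrative assumptions, not the schema's definitions.
interface Resource {
  uri: string; // e.g. "file:///home/user/notes.md"
  name: string;
  mimeType?: string; // text or binary; content is unrestricted
}

interface Prompt {
  name: string; // surfaced in the host UI, e.g. as a slash command
  description?: string;
}

interface Tool {
  name: string; // invoked by the client to perform an effectful operation
  description?: string;
  inputSchema: object; // JSON Schema describing the tool's arguments
}

// Sampling flows the other way: the *server* asks the *client* to sample.
interface SamplingRequest {
  messages: Array<{ role: "user" | "assistant"; content: string }>;
  maxTokens?: number;
}

const exampleResource: Resource = {
  uri: "file:///home/user/notes.md",
  name: "notes.md",
  mimeType: "text/markdown",
};
```
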

## Control
In addition to these basic primitives, MCP offers a set of control-flow messages:

* **Logging**: anything related to how the server emits logs.
* **Completion**: supports completion of server argument values on the client side.

# Use cases
Most use cases are around enabling people to build their own specific workflows and integrations. MCP enables engineers and teams to **tailor AI to their needs.**

The beauty of the Model Context Protocol is that it's **extremely composable**. You can imagine mixing and matching *any number* of the example servers below with any one of the hosts. Each individual server can be quite simple and limited, but *composed together*, you can get a super-powered AI!

## Example servers
* **File watcher**: read entire local directories, exposed as resources, and subscribe to changes. The server can provide a tool to write changes back to disk too!
* **Screen watcher**: follow along with the user, taking screenshots automatically, and exposing those as resources. The host can use this to automatically attach screen captures to LLM context.
* **Git integration**: could expose context like Git commit history, but probably *most* useful as a source of tools, like: "commit these changes," "merge this and resolve conflicts," etc.
* **GitHub integration**: read and expose GitHub resources: files, commits, pull requests, issues, etc. Could also expose one or more tools to modify GitHub resources, like "create a PR."
* **Asana integration**: similar to GitHub—read/write Asana projects, tasks, etc.
* **Slack integration**: read context from Slack channels. Could also look for specially tagged messages, or invocations of [shortcuts](https://api.slack.com/interactivity/shortcuts), as sources of context. Could expose tools to post messages to Slack.
* **Google Workspace integration**: read and write emails, docs, etc.
* **IDEs and editors**: IDEs and editors can be [servers](#example-hosts-clients) as well as hosts! As servers, they can be a rich source of context like: output/status of tests, [ASTs](https://en.wikipedia.org/wiki/Abstract_syntax_tree) and parse trees, and "which files are currently open and being edited?"

A key design principle of MCP is that it should be *as simple as possible* to implement a server. We want anyone to be able to write, e.g., a local Python script of 100 or fewer lines and get a fully functioning server, with capabilities comparable to any of the above.

## Example hosts
* **IDEs and editors**: An MCP host inside an IDE or editor could support attaching any number of servers, which can be used to populate an in-editor LLM chat interface, as well as (e.g.) contextualize refactoring. In the future, we could also imagine populating editors' command palettes with all of the tools that MCP servers have made available.
* **claude.ai**: [Claude.ai](https://claude.ai) can become an MCP host, allowing users to connect any number of MCP servers. Resources from those servers could be automatically made available for attaching to any Project or Chat. Claude could also make use of the tools exposed by MCP servers to implement agentic behaviors, saving artifacts to disk or to web services, etc.!
* **Slack**: [Claude in Slack](https://www.anthropic.com/claude-in-slack) on steroids! Building an MCP host into Slack would open the door to much more complex interactions with LLMs via the platform—both in being able to read context from any number of places (for example, all the servers posited above), as well as being able to *take actions*, from Slack, via the tools that MCP servers expose.

# Protocol
## Initialization
MCP [sessions](lifecycle) begin with an initialization phase, where the client and server identify each other, and exchange information about their respective [capabilities](lifecycle#capability-descriptions).

The client can only begin requesting resources and invoking tools on the server, and the server can only begin requesting LLM sampling, after the client has issued the `initialized` notification:

```mermaid
sequenceDiagram
    participant client as Client
    participant server as Server

    activate client
    client -->>+ server: (connect over transport)
    client -->> server: initialize
    server -->> client: initialize_response
    client --) server: initialized (notification)

    loop while connected
        alt client to server
            client -->> server: request
            server -->> client: response
        else
            client --) server: notification
        else server to client
            server -->> client: request
            client -->> server: response
        else
            server --) client: notification
        end
    end

    deactivate server
    deactivate client
```
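
Spelled out as JSON-RPC messages, the handshake might look like the following. The `params` and `result` contents are illustrative placeholders; the schema defines the real capability descriptions.

```typescript
// A hypothetical initialization exchange; field contents are
// illustrative, not normative.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    capabilities: { sampling: {} }, // assumed shape of client capabilities
  },
};

const initializeResponse = {
  jsonrpc: "2.0",
  id: 1, // responses echo the id of the request they answer
  result: {
    capabilities: { resources: {}, tools: {} }, // assumed server capabilities
  },
};

// Only after this notification may resource requests, tool invocations,
// and sampling requests begin to flow.
const initializedNotification = {
  jsonrpc: "2.0",
  method: "initialized",
};
```
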

## Transports
An MCP server or client must implement one of the following transports. Different transports require different clients (but each can run within the same *host*).

### stdio
The client spawns the server process and manages its lifetime, and writes messages to it on the server's stdin. The server writes messages back to the client on its stdout.

Individual JSON-RPC messages are sent as newline-terminated JSON over the interface.

Anything the server writes to stderr MAY be captured as logging, but the client is also allowed to ignore it completely.
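
A minimal sketch of this framing in Node.js/TypeScript. A tiny inline echo script stands in for a real MCP server; the message shapes are illustrative, not the schema's definitions.

```typescript
// Sketch of the stdio transport from the client side. As a stand-in
// for a real MCP server, spawn a small Node script that answers each
// newline-terminated request with an empty result.
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";

const serverScript = `
  const rl = require("node:readline").createInterface({ input: process.stdin });
  rl.on("line", (line) => {
    const req = JSON.parse(line);
    process.stdout.write(JSON.stringify({ jsonrpc: "2.0", id: req.id, result: {} }) + "\\n");
  });
`;
const server = spawn(process.execPath, ["-e", serverScript]);

// Each line on the server's stdout is one complete JSON-RPC message.
const lines = createInterface({ input: server.stdout });
lines.on("line", (line) => {
  const message = JSON.parse(line);
  console.log("received:", message);
  server.stdin.end(); // done for this demo; lets the child exit
});

// Sending a message: serialize to a single line, terminated by "\n".
function send(message: object): void {
  server.stdin.write(JSON.stringify(message) + "\n");
}

send({ jsonrpc: "2.0", id: 1, method: "initialize", params: {} });
```

A real client would also watch the child's stderr (optionally, as logging) and handle process exit.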

### SSE
A client can open a [Server-Sent Events](https://en.wikipedia.org/wiki/Server-sent_events) connection to a server, which the server will use to push all of its requests and responses to the client.

Upon connection, the server MUST issue an `endpoint` event (which is specific to MCP, not a default SSE event). The `data` associated with an `endpoint` event MUST be a URI for the client to use. The endpoint can be a relative or an absolute URI, but MUST always point to the same server origin. Cross-origin endpoints are not allowed, for security.

The client MUST issue individual JSON-RPC messages through the endpoint identified by the server, using HTTP POST requests—this allows the server to link these out-of-band messages with the ongoing SSE stream.

In turn, `message` events on the SSE stream will contain individual JSON-RPC messages from the server. The server MUST NOT send a `message` event until after the `endpoint` event has been issued.

This sequence diagram shows the MCP initialization flow over SSE, followed by open-ended communication between client and server, until ultimately the client disconnects:

```mermaid
sequenceDiagram
    participant client as MCP Client
    participant server as MCP Server

    activate client
    client->>+server: new EventSource("https://server/mcp")
    server->>client: event: endpoint<br />data: /session?id=8cfc516c-…

    client-->>server: POST https://server/session?id=…<br />{InitializeRequest}
    server->>client: event: message<br />data: {InitializeResponse}

    client--)server: POST https://server/session?id=…<br />{InitializedNotification}

    loop client requests and responses
        client--)server: POST https://server/session?id=…<br />{…}
    end

    loop server requests and responses
        server-)client: event: message<br />data: {…}
    end

    client -x server: EventSource.close()

    deactivate server
    deactivate client
```
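
A client-side sketch of this flow in TypeScript, assuming a browser-style `EventSource` and `fetch`; the URL is hypothetical, and the message contents are illustrative.

```typescript
// Client-side sketch of MCP over SSE. EventSource is assumed to be
// provided by the browser (or a polyfill); declared here for typing.
declare const EventSource: new (url: string) => {
  addEventListener(type: string, handler: (event: { data: string }) => void): void;
};

// The endpoint may be relative or absolute, but MUST stay on the same
// origin as the server; cross-origin endpoints are rejected.
function resolveEndpoint(sseUrl: string, data: string): string {
  const resolved = new URL(data, sseUrl);
  if (resolved.origin !== new URL(sseUrl).origin) {
    throw new Error("cross-origin endpoint rejected");
  }
  return resolved.href;
}

function connect(sseUrl: string): void {
  const source = new EventSource(sseUrl);
  let endpoint: string | undefined;

  // The server's first event MUST be `endpoint`, carrying the POST URI.
  source.addEventListener("endpoint", (event) => {
    endpoint = resolveEndpoint(sseUrl, event.data);
    void post({ jsonrpc: "2.0", id: 1, method: "initialize", params: {} });
  });

  // `message` events carry individual JSON-RPC messages from the server.
  source.addEventListener("message", (event) => {
    console.log("from server:", JSON.parse(event.data));
  });

  // All client-to-server messages go out-of-band via HTTP POST, so the
  // server can link them to the ongoing SSE stream.
  async function post(message: object): Promise<void> {
    if (!endpoint) throw new Error("no endpoint event received yet");
    await fetch(endpoint, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(message),
    });
  }
}

// connect("https://server/mcp"); // not run here: requires a live server
```

The origin check in `resolveEndpoint` enforces the same-origin requirement stated above.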

## Security and T&S considerations
This model, while making meaningful changes to productivity and product experience, is effectively a form of arbitrary data access and arbitrary code execution.

**Every interaction between MCP host and server will need informed user consent.** For example:

* Servers must only expose user data as [resources](resources) with the user's explicit consent. Hosts must not transmit that data elsewhere without the user's explicit consent.
* Hosts must not invoke tools on servers without the user's explicit consent, and understanding of what the tool will do.
* When a server initiates [sampling](sampling) via a host, the user must have control over:
  * *Whether* sampling even occurs. (They may not want to be charged!)
  * What the prompt that will actually be sampled is.
  * *What the server sees* of the completion when sampling finishes.

This latter point is why the sampling primitives do not permit MCP servers to see the whole prompt—instead, the host remains in control, and can censor or modify it at will.
