
Commit 84fc46a

improved MCP documentation (#1171)
Co-authored-by: David Montague <[email protected]>
1 parent 4c0f384 commit 84fc46a

File tree

11 files changed: +353 -129 lines

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -15,3 +15,4 @@ examples/pydantic_ai_examples/.chat_app_messages.sqlite
 .vscode/
 /question_graph_history.json
 /docs-site/.wrangler/
+/CLAUDE.md
Binary file added: 332 KB (not shown)

docs/mcp/client.md

Lines changed: 103 additions & 0 deletions
# Client

PydanticAI can act as an [MCP client](https://modelcontextprotocol.io/quickstart/client), connecting to MCP servers
to use their tools.

## Install

You need to either install [`pydantic-ai`](../install.md), or [`pydantic-ai-slim`](../install.md#slim-install) with the `mcp` optional group:

```bash
pip/uv-add 'pydantic-ai-slim[mcp]'
```

!!! note
    MCP integration requires Python 3.10 or higher.

## Usage

PydanticAI comes with two ways to connect to MCP servers:

- [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] which connects to an MCP server using the [HTTP SSE](https://modelcontextprotocol.io/docs/concepts/transports#server-sent-events-sse) transport
- [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] which runs the server as a subprocess and connects to it using the [stdio](https://modelcontextprotocol.io/docs/concepts/transports#standard-input%2Foutput-stdio) transport

Examples of both are shown below; [mcp-run-python](run-python.md) is used as the MCP server in both examples.

### SSE Client

[`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] connects over HTTP using the [HTTP + Server Sent Events transport](https://modelcontextprotocol.io/docs/concepts/transports#server-sent-events-sse) to a server.

!!! note
    [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] requires an MCP server to be running and accepting HTTP connections before calling [`agent.run_mcp_servers()`][pydantic_ai.Agent.run_mcp_servers]. Running the server is not managed by PydanticAI.

Before creating the SSE client, we need to run the server (docs [here](run-python.md)):

```bash {title="run_sse_server.sh"}
npx @pydantic/mcp-run-python sse
```

```python {title="mcp_sse_client.py" py="3.10"}
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerSSE

server = MCPServerSSE(url='http://localhost:3001/sse')  # (1)!
agent = Agent('openai:gpt-4o', mcp_servers=[server])  # (2)!


async def main():
    async with agent.run_mcp_servers():  # (3)!
        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
    print(result.data)
    #> There are 9,208 days between January 1, 2000, and March 18, 2025.
```

1. Define the MCP server with the URL used to connect.
2. Create an agent with the MCP server attached.
3. Create a client session to connect to the server.

_(This example is complete, it can be run "as is" with Python 3.10+ — you'll need to add `asyncio.run(main())` to run `main`)_

**What's happening here?**

* The model receives the prompt "How many days between 2000-01-01 and 2025-03-18?"
* The model decides "Oh, I've got this `run_python_code` tool, that will be a good way to answer this question", and writes some Python code to calculate the answer.
* The model returns a tool call.
* PydanticAI sends the tool call to the MCP server using the SSE transport.
* The model is called again with the return value of running the code.
* The model returns the final answer.
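
If you'd rather inspect this exchange programmatically than visually, one option is to print the message history from the run result. Here's a minimal sketch building on `mcp_sse_client.py` above (the file name is hypothetical, and exactly which message classes are printed depends on your PydanticAI version):

```python {title="mcp_sse_client_messages.py" py="3.10" test="skip"}
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerSSE

server = MCPServerSSE(url='http://localhost:3001/sse')
agent = Agent('openai:gpt-4o', mcp_servers=[server])


async def main():
    async with agent.run_mcp_servers():
        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
    # The history contains the user prompt, the model's `run_python_code` tool
    # call, the tool return from the MCP server, and the final model response.
    for message in result.all_messages():
        print(type(message).__name__)
```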

You can visualise this clearly, and even see the code that's run, by adding three lines of code to instrument the example with [Logfire](https://logfire.pydantic.dev/docs):

```python {title="mcp_sse_client_logfire.py" test="skip"}
import logfire

logfire.configure()
logfire.instrument_pydantic_ai()
```

This will display as follows:

![Logfire run python code](../img/logfire-run-python-code.png)

### MCP "stdio" Server

The other transport offered by MCP is the [stdio transport](https://modelcontextprotocol.io/docs/concepts/transports#standard-input%2Foutput-stdio) where the server is run as a subprocess and communicates with the client over `stdin` and `stdout`. In this case, you'd use the [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] class.

!!! note
    When using [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] servers, the [`agent.run_mcp_servers()`][pydantic_ai.Agent.run_mcp_servers] context manager is responsible for starting and stopping the server.

```python {title="mcp_stdio_client.py" py="3.10"}
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

server = MCPServerStdio('npx', ['-y', '@pydantic/mcp-run-python', 'stdio'])
agent = Agent('openai:gpt-4o', mcp_servers=[server])


async def main():
    async with agent.run_mcp_servers():
        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
    print(result.data)
    #> There are 9,208 days between January 1, 2000, and March 18, 2025.
```

docs/mcp/index.md

Lines changed: 29 additions & 0 deletions

# Model Context Protocol (MCP)

PydanticAI supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io) in three ways:

1. [Agents](../agents.md) act as an MCP Client, connecting to MCP servers to use their tools, [learn more …](client.md)
2. Agents can be used within MCP servers, [learn more …](server.md)
3. As part of PydanticAI, we're building a number of MCP servers, [see below](#mcp-servers)

## What is MCP?

The Model Context Protocol is a standardized protocol that allows AI applications (including programmatic agents like PydanticAI, coding agents like [cursor](https://www.cursor.com/), and desktop applications like [Claude Desktop](https://claude.ai/download)) to connect to external tools and services using a common interface.

As with other protocols, the dream of MCP is that a wide range of applications can speak to each other without the need for specific integrations.

There is a great list of MCP servers at [github.com/modelcontextprotocol/servers](https://github.com/modelcontextprotocol/servers).

Some examples of what this means:

* PydanticAI could use a web search service implemented as an MCP server to build a deep research agent
* Cursor could connect to the [Pydantic Logfire](https://github.com/pydantic/logfire-mcp) MCP server to search logs, traces and metrics to gain context while fixing a bug
* PydanticAI, or any other MCP client, could connect to our [Run Python](run-python.md) MCP server to run arbitrary Python code in a sandboxed environment

## MCP Servers

To add functionality to PydanticAI while making it as widely usable as possible, we're implementing some functionality as MCP servers.

So far, we've only implemented one MCP server as part of PydanticAI:

* [Run Python](run-python.md): A sandboxed Python interpreter that can run arbitrary code, with a focus on security and safety.

docs/mcp/run-python.md

Lines changed: 146 additions & 0 deletions

# MCP Run Python

The **MCP Run Python** package is an MCP server that allows agents to execute Python code in a secure, sandboxed environment. It uses [Pyodide](https://pyodide.org/) to run Python code in a JavaScript environment, isolating execution from the host system.

## Features

* **Secure Execution**: Run Python code in a sandboxed WebAssembly environment
* **Package Management**: Automatically detects and installs required dependencies
* **Complete Results**: Captures standard output, standard error, and return values
* **Asynchronous Support**: Runs async code properly
* **Error Handling**: Provides detailed error reports for debugging

## Installation

The MCP Run Python server is distributed as an [NPM package](https://www.npmjs.com/package/@pydantic/mcp-run-python) and can be run directly using [`npx`](https://docs.npmjs.com/cli/v8/commands/npx):

```bash
npx @pydantic/mcp-run-python [stdio|sse]
```

Where:

* `stdio`: Runs the server with [stdin/stdout transport](https://modelcontextprotocol.io/docs/concepts/transports#standard-input%2Foutput-stdio) (for subprocess usage)
* `sse`: Runs the server with [HTTP Server-Sent Events transport](https://modelcontextprotocol.io/docs/concepts/transports#server-sent-events-sse) (for remote connections)

Usage of `@pydantic/mcp-run-python` with PydanticAI is described in the [client](client.md#mcp-stdio-server) documentation.

## Direct Usage

As well as being used with PydanticAI, this server can be connected to other MCP clients. For clarity, in this example we connect directly using the [Python MCP client](https://github.com/modelcontextprotocol/python-sdk).

```python {title="mcp_run_python.py" py="3.10"}
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

code = """
import numpy
a = numpy.array([1, 2, 3])
print(a)
a
"""


async def main():
    server_params = StdioServerParameters(
        command='npx', args=['-y', '@pydantic/mcp-run-python', 'stdio']
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print(len(tools.tools))
            #> 1
            print(repr(tools.tools[0].name))
            #> 'run_python_code'
            print(repr(tools.tools[0].inputSchema))
            """
            {'type': 'object', 'properties': {'python_code': {'type': 'string', 'description': 'Python code to run'}}, 'required': ['python_code'], 'additionalProperties': False, '$schema': 'http://json-schema.org/draft-07/schema#'}
            """
            result = await session.call_tool('run_python_code', {'python_code': code})
            print(result.content[0].text)
            """
            <status>success</status>
            <dependencies>["numpy"]</dependencies>
            <output>
            [1 2 3]
            </output>
            <return_value>
            [
              1,
              2,
              3
            ]
            </return_value>
            """
```

## Dependencies

Dependencies are installed when code is run.

Dependencies can be defined in one of two ways:

### Inferred from imports

If there's no metadata, dependencies are inferred from imports in the code, as shown in the example [above](#direct-usage).
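
For instance, a code string like the following (a hypothetical snippet; `pytz` is just an example of a pure-Python package Pyodide can install) would cause the server to detect and install `pytz` before execution:

```python
# Hypothetical code string for the `run_python_code` tool: there's no inline
# metadata, so the server infers the dependency `pytz` from the import.
code = """
import pytz
print(pytz.timezone('Europe/Paris'))
"""
```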

### Inline script metadata

As introduced in PEP 723, explained [here](https://packaging.python.org/en/latest/specifications/inline-script-metadata/#inline-script-metadata), and popularized by [uv](https://docs.astral.sh/uv/guides/scripts/#declaring-script-dependencies) — dependencies can be defined in a comment at the top of the file.

This allows use of dependencies that aren't imported in the code, and is more explicit.

```py {title="inline_script_metadata.py" py="3.10"}
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

code = """\
# /// script
# dependencies = ["pydantic", "email-validator"]
# ///
import pydantic

class Model(pydantic.BaseModel):
    email: pydantic.EmailStr

print(Model(email='hello@pydantic.dev'))
"""


async def main():
    server_params = StdioServerParameters(
        command='npx', args=['-y', '@pydantic/mcp-run-python', 'stdio']
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool('run_python_code', {'python_code': code})
            print(result.content[0].text)
            """
            <status>success</status>
            <dependencies>["pydantic","email-validator"]</dependencies>
            <output>
            email='hello@pydantic.dev'
            </output>
            """
```

It also allows versions to be pinned for non-binary packages (Pyodide only supports a single version for the binary packages it supports, like `pydantic` and `numpy`).

E.g. you could set the dependencies to:

```python
# /// script
# dependencies = ["rich<13"]
# ///
```

## Logging

MCP Run Python supports emitting stdout and stderr from the Python execution as [MCP logging messages](https://github.com/modelcontextprotocol/specification/blob/eb4abdf2bb91e0d5afd94510741eadd416982350/docs/specification/draft/server/utilities/logging.md?plain=1).

For logs to be emitted you must set the logging level when connecting to the server. By default, the log level is set to the highest level, `emergency`.

Currently, it's not possible to demonstrate this due to a bug in the Python MCP Client, see [modelcontextprotocol/python-sdk#201](https://github.com/modelcontextprotocol/python-sdk/issues/201#issuecomment-2727663121).
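
Once that bug is resolved, enabling log forwarding should look roughly like this sketch (untested for the reason above; the file name is hypothetical, and `set_logging_level` is the Python MCP client's method for the logging `setLevel` request):

```python {title="mcp_run_python_logging.py" py="3.10" test="skip"}
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

code = 'print("hello from the sandbox")'


async def main():
    server_params = StdioServerParameters(
        command='npx', args=['-y', '@pydantic/mcp-run-python', 'stdio']
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Lower the level from the default `emergency` so stdout/stderr from
            # the execution are forwarded to this client as logging messages.
            await session.set_logging_level('debug')
            await session.call_tool('run_python_code', {'python_code': code})
```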

docs/mcp/server.md

Lines changed: 60 additions & 0 deletions

# Server

PydanticAI models can also be used within MCP Servers.

Here's a simple example of a [Python MCP server](https://github.com/modelcontextprotocol/python-sdk) using PydanticAI within a tool call:

```py {title="mcp_server.py" py="3.10"}
from mcp.server.fastmcp import FastMCP

from pydantic_ai import Agent

server = FastMCP('PydanticAI Server')
server_agent = Agent(
    'anthropic:claude-3-5-haiku-latest', system_prompt='always reply in rhyme'
)


@server.tool()
async def poet(theme: str) -> str:
    """Poem generator"""
    r = await server_agent.run(f'write a poem about {theme}')
    return r.data


if __name__ == '__main__':
    server.run()
```

This server can be queried with any MCP client. Here is an example using a direct Python client:

```py {title="mcp_client.py" py="3.10"}
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def client():
    server_params = StdioServerParameters(
        command='uv', args=['run', 'mcp_server.py', 'server'], env=os.environ
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool('poet', {'theme': 'socks'})
            print(result.content[0].text)
            """
            Oh, socks, those garments soft and sweet,
            That nestle softly 'round our feet,
            From cotton, wool, or blended thread,
            They keep our toes from feeling dread.
            """


if __name__ == '__main__':
    asyncio.run(client())
```

Note: [sampling](https://modelcontextprotocol.io/docs/concepts/sampling#sampling), whereby servers may request LLM completions from the client, is not yet supported in PydanticAI.
