Merged
Changes from 3 commits
28 changes: 0 additions & 28 deletions .github/workflows/ci.yml
@@ -29,10 +29,6 @@ jobs:
       - name: Install dependencies
         run: uv sync --all-extras --all-packages --group lint

-      - uses: denoland/setup-deno@v2
-        with:
-          deno-version: v2.x
-
       - uses: pre-commit/[email protected]
         with:
           extra_args: --all-files --verbose
@@ -272,29 +268,6 @@ jobs:
           path: htmlcov
           include-hidden-files: true

-  test-mcp-run-python:
-    runs-on: ubuntu-latest
-    timeout-minutes: 5
-    env:
-      UV_PYTHON: "3.12"
-    steps:
-      - uses: actions/checkout@v4
-
-      - uses: astral-sh/setup-uv@v5
-        with:
-          enable-cache: true
-
-      - uses: denoland/setup-deno@v2
-        with:
-          deno-version: v2.x
-
-      - run: make lint-js
-
-      - run: uv run --package mcp-run-python pytest mcp-run-python -v --durations=100
-
-      - run: deno task dev warmup
-        working-directory: mcp-run-python
-
   # https://github.com/marketplace/actions/alls-green#why used for branch protection checks
   check:
     if: always()
@@ -307,7 +280,6 @@ jobs:
       - test-lowest-versions
       - test-examples
       - coverage
-      - test-mcp-run-python
     runs-on: ubuntu-latest

     steps:
8 changes: 0 additions & 8 deletions .pre-commit-config.yaml
@@ -44,14 +44,6 @@ repos:
         types: [python]
         language: system
         pass_filenames: false
-      - id: lint-js
-        name: Lint JS
-        entry: make
-        args: [lint-js]
-        language: system
-        types_or: [javascript, ts, json]
-        files: "^mcp-run-python/"
-        pass_filenames: false
       - id: clai-help
         name: clai help output
         entry: uv
1 change: 0 additions & 1 deletion CLAUDE.md
@@ -83,7 +83,6 @@ This is a uv workspace with multiple packages:
 - **`pydantic_graph/`**: Graph execution engine
 - **`examples/`**: Example applications
 - **`clai/`**: CLI tool
-- **`mcp-run-python/`**: MCP server implementation (Deno/TypeScript)
 
 ## Testing Strategy
 
15 changes: 1 addition & 14 deletions Makefile
@@ -8,12 +8,8 @@
 .pre-commit: ## Check that pre-commit is installed
 	@pre-commit -V || echo 'Please install pre-commit: https://pre-commit.com/'
 
-.PHONY: .deno
-.deno: ## Check that deno is installed
-	@deno --version > /dev/null 2>&1 || (printf "\033[0;31m✖ Error: deno is not installed, but is needed for mcp-run-python\033[0m\n Please install deno: https://docs.deno.com/runtime/getting_started/installation/\n" && exit 1)
-
 .PHONY: install
-install: .uv .pre-commit .deno ## Install the package, dependencies, and pre-commit for local development
+install: .uv .pre-commit ## Install the package, dependencies, and pre-commit for local development
 	uv sync --frozen --all-extras --all-packages --group lint --group docs
 	pre-commit install --install-hooks
 
@@ -38,10 +34,6 @@ lint: ## Lint the code
 	uv run ruff format --check
 	uv run ruff check
 
-.PHONY: lint-js
-lint-js: ## Lint JS and TS code
-	cd mcp-run-python && deno task lint-format
-
 .PHONY: typecheck-pyright
 typecheck-pyright:
 	@# PYRIGHT_PYTHON_IGNORE_WARNINGS avoids the overhead of making a request to github on every invocation
@@ -77,11 +69,6 @@ testcov: test ## Run tests and generate an HTML coverage report
 	@echo "building coverage html"
 	@uv run coverage html
 
-.PHONY: test-mrp
-test-mrp: ## Build and tests of mcp-run-python
-	cd mcp-run-python && deno task build
-	uv run --package mcp-run-python pytest mcp-run-python -v
-
 .PHONY: update-examples
 update-examples: ## Update documentation examples
 	uv run -m pytest --update-examples tests/test_examples.py
50 changes: 25 additions & 25 deletions docs/mcp/client.md
@@ -19,7 +19,7 @@ Pydantic AI comes with two ways to connect to MCP servers:
 - [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] which connects to an MCP server using the [HTTP SSE](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#http-with-sse) transport
 - [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] which runs the server as a subprocess and connects to it using the [stdio](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#stdio) transport
 
-Examples of all three are shown below; [mcp-run-python](run-python.md) is used as the MCP server in all examples.
+Examples of all three are shown below.
 
 Each MCP server instance is a [toolset](../toolsets.md) and can be registered with an [`Agent`][pydantic_ai.Agent] using the `toolsets` argument.
 
@@ -59,9 +59,9 @@ agent = Agent('openai:gpt-4o', toolsets=[server]) # (2)!
 
 async def main():
     async with agent:  # (3)!
-        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
+        result = await agent.run('What is 7 plus 5?')
         print(result.output)
-        #> There are 9,208 days between January 1, 2000, and March 18, 2025.
+        #> The answer is 12.
 ```
 
 1. Define the MCP server with the URL used to connect.
@@ -97,19 +97,26 @@ Will display as follows:
 [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] connects over HTTP using the [HTTP + Server Sent Events transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#http-with-sse) to a server.
 
 !!! note
-    [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] requires an MCP server to be running and accepting HTTP connections before running the agent. Running the server is not managed by Pydantic AI.
+    The SSE transport in MCP is deprecated, you should use Streamable HTTP instead.
 
+Before creating the SSE client, we need to run a server that supports the SSE transport.
 
-    The name "HTTP" is used since this implementation will be adapted in future to use the new
-    [Streamable HTTP](https://github.com/modelcontextprotocol/specification/pull/206) currently in development.
+```python {title="sse_server.py" dunder_name="not_main"}
+from mcp.server.fastmcp import FastMCP
 
+app = FastMCP()
 
-Before creating the SSE client, we need to run the server (docs [here](run-python.md)):
[email protected]()
+def add(a: int, b: int) -> int:
+    return a + b
 
-```bash {title="terminal (run sse server)"}
-deno run \
-     -N -R=node_modules -W=node_modules --node-modules-dir=auto \
-     jsr:@pydantic/mcp-run-python sse
+if __name__ == '__main__':
+    app.run(transport='sse')
 ```
 
+Then we can create the client:
 
 ```python {title="mcp_sse_client.py"}
 from pydantic_ai import Agent
 from pydantic_ai.mcp import MCPServerSSE
@@ -120,9 +127,9 @@ agent = Agent('openai:gpt-4o', toolsets=[server]) # (2)!
 
 async def main():
     async with agent:  # (3)!
-        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
+        result = await agent.run('What is 7 plus 5?')
         print(result.output)
-        #> There are 9,208 days between January 1, 2000, and March 18, 2025.
+        #> The answer is 12.
 ```
 
 1. Define the MCP server with the URL used to connect.
@@ -133,23 +140,16 @@ _(This example is complete, it can be run "as is" — you'll need to add `asynci
 
 ### MCP "stdio" Server
 
-The other transport offered by MCP is the [stdio transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#stdio) where the server is run as a subprocess and communicates with the client over `stdin` and `stdout`. In this case, you'd use the [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] class.
+MCP also offers [stdio transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#stdio) where the server is run as a subprocess and communicates with the client over `stdin` and `stdout`. In this case, you'd use the [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] class.
 
+In this example [mcp-run-python](https://github.com/pydantic/mcp-run-python) is used as the MCP server.
 
 ```python {title="mcp_stdio_client.py"}
 from pydantic_ai import Agent
 from pydantic_ai.mcp import MCPServerStdio
 
 server = MCPServerStdio(  # (1)!
-    'deno',
-    args=[
-        'run',
-        '-N',
-        '-R=node_modules',
-        '-W=node_modules',
-        '--node-modules-dir=auto',
-        'jsr:@pydantic/mcp-run-python',
-        'stdio',
-    ]
+    'uv', args=['run', 'mcp-run-python', 'stdio'],
 )
 agent = Agent('openai:gpt-4o', toolsets=[server])
 
@@ -161,7 +161,7 @@ async def main():
         #> There are 9,208 days between January 1, 2000, and March 18, 2025.
 ```
 
-1. See [MCP Run Python](run-python.md) for more information.
+1. See [MCP Run Python](https://github.com/pydantic/mcp-run-python) for more information.
 
 ## Tool call customisation
 
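As a review aid (not part of the diff): the `add` tool introduced in the new `sse_server.py` example is a plain function that FastMCP registers via the `@app.tool()` decorator, so its contract can be sanity-checked without running a server or a model. A minimal standalone sketch:

```python
# Stand-in for the `add` tool from the revised sse_server.py example.
# The decorator is omitted here; the tool's logic is just this function.
def add(a: int, b: int) -> int:
    return a + b


# The updated docs example asks the agent "What is 7 plus 5?"
# and shows the output "The answer is 12."
print(add(7, 5))  # prints 12
```

This matches the expected output hard-coded in the updated `mcp_sse_client.py` snippet (`#> The answer is 12.`).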
13 changes: 2 additions & 11 deletions docs/mcp/overview.md
@@ -1,10 +1,9 @@
 # Model Context Protocol (MCP)
 
-Pydantic AI supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io) in three ways:
+Pydantic AI supports [Model Context Protocol (MCP)](https://modelcontextprotocol.io) in two ways:
 
 1. [Agents](../agents.md) act as an MCP Client, connecting to MCP servers to use their tools, [learn more …](client.md)
 2. Agents can be used within MCP servers, [learn more …](server.md)
-3. As part of Pydantic AI, we're building a number of MCP servers, [see below](#mcp-servers)
 
 ## What is MCP?
 
@@ -18,12 +17,4 @@ Some examples of what this means:
 
 - Pydantic AI could use a web search service implemented as an MCP server to implement a deep research agent
 - Cursor could connect to the [Pydantic Logfire](https://github.com/pydantic/logfire-mcp) MCP server to search logs, traces and metrics to gain context while fixing a bug
-- Pydantic AI, or any other MCP client could connect to our [Run Python](run-python.md) MCP server to run arbitrary Python code in a sandboxed environment
-
-## MCP Servers
-
-To add functionality to Pydantic AI while making it as widely usable as possible, we're implementing some functionality as MCP servers.
-
-So far, we've only implemented one MCP server as part of Pydantic AI:
-
-- [Run Python](run-python.md): A sandboxed Python interpreter that can run arbitrary code, with a focus on security and safety.
+- Pydantic AI, or any other MCP client could connect to our [Run Python](https://github.com/pydantic/mcp-run-python) MCP server to run arbitrary Python code in a sandboxed environment