Commit ed061d9

instrument_mcp docs (#1412)
1 parent 1b48349 commit ed061d9

File tree

4 files changed: +68 -2 lines changed

docs/images/logfire-screenshot-mcp.png

328 KB (binary image file, not shown; referenced by the new docs page below)

docs/integrations/llms/mcp.md

Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
---
integration: logfire
---

# Model Context Protocol (MCP)

**Logfire** supports instrumenting the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk) with the [`logfire.instrument_mcp()`][logfire.Logfire.instrument_mcp] method. This works on both the client and the server side. If possible, call it in both the client and server processes to get nicely linked distributed traces.

Below is a simple example. For the client, we use [Pydantic AI](https://ai.pydantic.dev/mcp/client/) (though any MCP client will work) and OpenAI. To use a different LLM provider, replace `openai:gpt-4o` in the client script with another model name supported by Pydantic AI.

First, install the required dependencies:

```bash
pip install mcp 'pydantic-ai-slim[openai]'
```

Next, run the server script below:

```python title="server.py"
from mcp.server.fastmcp import FastMCP

import logfire

logfire.configure(service_name='server')
logfire.instrument_mcp()

app = FastMCP()


@app.tool()
def add(a: int, b: int) -> int:
    logfire.info(f'Calculating {a} + {b}')
    return a + b


app.run(transport='streamable-http')
```
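
You can then start the server from a terminal; the `streamable-http` transport serves on port 8000 here, matching the URL the client script below connects to:

```bash
python server.py
```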

Then run this client script in another terminal:

```python title="agent.py"
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

import logfire

logfire.configure(service_name='agent')
logfire.instrument_pydantic_ai()  # (1)!
logfire.instrument_mcp()

server = MCPServerStreamableHTTP('http://localhost:8000/mcp')
agent = Agent('openai:gpt-4o', toolsets=[server])
result = agent.run_sync('What is 7 plus 5?')
print(result.output)
```

1. Instrumenting Pydantic AI is optional, but adds more context to the trace.
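
Before running the client, make sure your OpenAI credentials are available; a minimal sketch, assuming the key is supplied via the standard environment variable:

```bash
export OPENAI_API_KEY='your-api-key'  # assumed: key provided via the standard env var
python agent.py
```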

You should see a trace like this in Logfire:

![Logfire MCP Trace](../../images/logfire-screenshot-mcp.png)

logfire/_internal/main.py

Lines changed: 6 additions & 2 deletions
@@ -913,10 +913,14 @@ def _warn_if_not_initialized_for_instrumentation(self):
         self.config.warn_if_not_initialized('Instrumentation will have no effect')

     def instrument_mcp(self, *, propagate_otel_context: bool = True) -> None:
-        """Instrument [MCP](https://modelcontextprotocol.io/) requests such as tool calls.
+        """Instrument the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk).
+
+        Instruments both the client and server side. If possible, calling this in both the client and server
+        processes is recommended for nice distributed traces.

         Args:
-            propagate_otel_context: Whether to enable propagation of the OpenTelemetry context.
+            propagate_otel_context: Whether to enable propagation of the OpenTelemetry context
+                for distributed tracing.
                 Set to False to prevent setting extra fields like `traceparent` on the metadata of requests.
         """
         from .integrations.mcp import instrument_mcp
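
For reference, a minimal sketch of calling the updated method with propagation disabled (the configure call and service name are illustrative, not part of this commit):

```python
import logfire

logfire.configure(service_name='server')
# Per the updated docstring, passing False keeps extra fields like
# `traceparent` off the metadata of MCP requests.
logfire.instrument_mcp(propagate_otel_context=False)
```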

mkdocs.yml

Lines changed: 1 addition & 0 deletions
@@ -117,6 +117,7 @@ nav:
       - Google Gen AI: integrations/llms/google-genai.md
       - Anthropic: integrations/llms/anthropic.md
       - LangChain: integrations/llms/langchain.md
+      - MCP: integrations/llms/mcp.md
       - LLamaIndex: integrations/llms/llamaindex.md
       - Mirascope: integrations/llms/mirascope.md
       - LiteLLM: integrations/llms/litellm.md
