**README.md** (+1 −1)
@@ -39,7 +39,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
[Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

2. **Model-agnostic**:
- Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).
+ Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius, OVHcloud. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).

3. **Seamless Observability**:
Tightly [integrates](https://ai.pydantic.dev/logfire) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](https://ai.pydantic.dev/logfire#alternative-observability-backends).
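To make the model-agnostic claim in the changed line concrete, a minimal sketch (not part of this diff; the model strings are taken from examples elsewhere on this page):

```py
from pydantic_ai import Agent

# The agent definition stays the same; switching providers is a one-string change.
agent = Agent('openai:gpt-4o', instructions='Be concise.')
# agent = Agent('anthropic:claude-sonnet-4-5', instructions='Be concise.')

result = agent.run_sync('What is Pydantic AI?')
print(result.output)
```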
**docs/builtin-tools.md** (+145 −1)
@@ -11,6 +11,7 @@ Pydantic AI supports the following built-in tools:
- **[`ImageGenerationTool`][pydantic_ai.builtin_tools.ImageGenerationTool]**: Enables agents to generate images
- **[`UrlContextTool`][pydantic_ai.builtin_tools.UrlContextTool]**: Enables agents to pull URL contents into their context
- **[`MemoryTool`][pydantic_ai.builtin_tools.MemoryTool]**: Enables agents to use memory
+ - **[`MCPServerTool`][pydantic_ai.builtin_tools.MCPServerTool]**: Enables agents to use remote MCP servers with communication handled by the model provider

These tools are passed to the agent via the `builtin_tools` parameter and are executed by the model provider's infrastructure.
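As a minimal illustration of that pattern (a sketch, not part of this diff, using only imports and model names that appear elsewhere on this page):

```py
from pydantic_ai import Agent, WebSearchTool

# Built-in tools are passed via `builtin_tools` and run on the provider's infrastructure.
agent = Agent(
    'anthropic:claude-sonnet-4-5',
    builtin_tools=[WebSearchTool()],
)

result = agent.run_sync('What changed in the latest Pydantic AI release?')
print(result.output)
```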
@@ -52,7 +53,7 @@ print(result.output)
_(This example is complete, it can be run "as is")_

- With OpenAI, you must use their responses API to access the web search tool.
+ With OpenAI, you must use their Responses API to access the web search tool.

```py {title="web_search_openai.py"}
from pydantic_ai import Agent, WebSearchTool
@@ -419,6 +420,149 @@ print(result.output)
_(This example is complete, it can be run "as is")_

+ ## MCP Server Tool
+
+ The [`MCPServerTool`][pydantic_ai.builtin_tools.MCPServerTool] allows your agent to use remote MCP servers with communication handled by the model provider.
+
+ This requires the MCP server to live at a public URL the provider can reach and does not support many of the advanced features of Pydantic AI's agent-side [MCP support](mcp/client.md), but can result in optimized context use and caching, and faster performance due to the lack of a round-trip back to Pydantic AI.
+
+ ### Provider Support
+
+ | Provider | Supported | Notes |
+ |----------|-----------|-------|
+ | OpenAI Responses | ✅ | Full feature support. [Connectors](https://platform.openai.com/docs/guides/tools-connectors-mcp#connectors) can be used by specifying a special `x-openai-connector:<connector_id>` URL. |
+ | Anthropic | ✅ | Full feature support |
+ | Google | ❌ | Not supported |
+ | Groq | ❌ | Not supported |
+ | OpenAI Chat Completions | ❌ | Not supported |
+ | Bedrock | ❌ | Not supported |
+ | Mistral | ❌ | Not supported |
+ | Cohere | ❌ | Not supported |
+ | HuggingFace | ❌ | Not supported |
+
+ ### Usage
+
+ ```py {title="mcp_server_anthropic.py"}
+ from pydantic_ai import Agent, MCPServerTool
+
+ agent = Agent(
+     'anthropic:claude-sonnet-4-5',
+     builtin_tools=[
+         MCPServerTool(
+             id='deepwiki',
+             url='https://mcp.deepwiki.com/mcp',  # (1)
+         )
+     ],
+ )
+
+ result = agent.run_sync('Tell me about the pydantic/pydantic-ai repo.')
+ print(result.output)
+ """
+ The pydantic/pydantic-ai repo is a Python agent framework for building Generative AI applications.
+ """
+ ```
+
+ 1. The [DeepWiki MCP server](https://docs.devin.ai/work-with-devin/deepwiki-mcp) does not require authorization.
+
+ _(This example is complete, it can be run "as is")_
+
+ With OpenAI, you must use their Responses API to access the MCP server tool:
+
+ ```py {title="mcp_server_openai.py"}
+ from pydantic_ai import Agent, MCPServerTool
+
+ agent = Agent(
+     'openai-responses:gpt-5',
+     builtin_tools=[
+         MCPServerTool(
+             id='deepwiki',
+             url='https://mcp.deepwiki.com/mcp',  # (1)
+         )
+     ],
+ )
+
+ result = agent.run_sync('Tell me about the pydantic/pydantic-ai repo.')
+ print(result.output)
+ """
+ The pydantic/pydantic-ai repo is a Python agent framework for building Generative AI applications.
+ """
+ ```
+
+ 1. The [DeepWiki MCP server](https://docs.devin.ai/work-with-devin/deepwiki-mcp) does not require authorization.
+
+ _(This example is complete, it can be run "as is")_
+
+ ### Configuration Options
+
+ The `MCPServerTool` supports several configuration parameters for custom MCP servers:

[…]

+ result = agent.run_sync('Tell me about the pydantic/pydantic-ai repo.')
+ print(result.output)
+ """
+ The pydantic/pydantic-ai repo is a Python agent framework for building Generative AI applications.
+ """
+ ```
+
+ 1. The [GitHub MCP server](https://github.com/github/github-mcp-server) requires an authorization token.
+
+ _(This example is complete, it can be run "as is")_
+
+ For OpenAI Responses, you can use a [connector](https://platform.openai.com/docs/guides/tools-connectors-mcp#connectors) by specifying a special `x-openai-connector:` URL:

[…]

+ result = agent.run_sync('What do I have on my calendar today?')
+ print(result.output)
+ #> You're going to spend all day playing with Pydantic AI.
+ ```
+
+ 1. OpenAI's Google Calendar connector requires an [authorization token](https://platform.openai.com/docs/guides/tools-connectors-mcp#authorizing-a-connector).
+
+ _(This example is complete, it can be run "as is")_
+
+ #### Provider Support
+
+ | Parameter | OpenAI | Anthropic |
+ |-----------------------|--------|-----------|
+ | `authorization_token` | ✅ | ✅ |
+ | `allowed_tools`       | ✅ | ✅ |
+ | `description`         | ✅ | ❌ |
+ | `headers`             | ✅ | ❌ |

## API Reference

For complete API documentation, see the [API Reference](api/builtin_tools.md).
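The configuration example itself is collapsed above, but the parameter table names four options. A rough sketch of how they might be combined (not part of this diff; it assumes the options are passed as keyword arguments of the same names, and the server URL, tool name, header, and token are placeholders):

```py
from pydantic_ai import Agent, MCPServerTool

# Sketch of the options from the parameter table:
# `authorization_token`, `allowed_tools`, `description`, and `headers`.
agent = Agent(
    'openai-responses:gpt-5',  # all four options are marked supported for OpenAI Responses
    builtin_tools=[
        MCPServerTool(
            id='github',
            url='https://example.com/mcp',  # placeholder MCP server URL
            authorization_token='<token>',  # placeholder; never hard-code real tokens
            allowed_tools=['search_repositories'],  # hypothetical tool name
            description='Query GitHub repositories',
            headers={'X-Example': 'value'},  # hypothetical extra header
        )
    ],
)
```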
**docs/durable_execution/prefect.md** (+12 −7)
@@ -255,20 +255,23 @@ from prefect import flow
from pydantic_ai import Agent
from pydantic_ai.durable_exec.prefect import PrefectAgent

- agent = Agent(
-     'openai:gpt-4o',
-     name='daily_report_agent',
-     instructions='Generate a daily summary report.',
- )
-
- prefect_agent = PrefectAgent(agent)

@flow
async def daily_report_flow(user_prompt: str):
    """Generate a daily report using the agent."""
+     agent = Agent(  # (1)!
+         'openai:gpt-4o',
+         name='daily_report_agent',
+         instructions='Generate a daily summary report.',
+     )
+
+     prefect_agent = PrefectAgent(agent)
+
    result = await prefect_agent.run(user_prompt)
    return result.output


# Serve the flow with a daily schedule
if __name__ == '__main__':
    daily_report_flow.serve(
@@ -279,6 +282,8 @@ if __name__ == '__main__':
    )
```

+
+ 1. Each flow run executes in an isolated process, and all inputs and dependencies must be serializable. Because `Agent` instances cannot be serialized, instantiate the agent inside the flow rather than at the module level.

The `serve()` method accepts scheduling options:

- **`cron`**: Cron schedule string (e.g., `'0 9 * * *'` for daily at 9am)
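To make the `cron` option concrete, a minimal sketch continuing the `daily_report_flow` example above (assumptions: Prefect's `Flow.serve()` accepts `name` and `cron` keyword arguments, and the deployment name is hypothetical):

```py
if __name__ == '__main__':
    daily_report_flow.serve(
        name='daily-report',  # hypothetical deployment name
        cron='0 9 * * *',  # run every day at 9am
    )
```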