
Commit db1630d

Merge branch 'main' into main
2 parents ab9d690 + faa3868

File tree: 86 files changed, +12262 −1392 lines


README.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
 [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

 2. **Model-agnostic**:
-   Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).
+   Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius, OVHcloud. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).

 3. **Seamless Observability**:
   Tightly [integrates](https://ai.pydantic.dev/logfire) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](https://ai.pydantic.dev/logfire#alternative-observability-backends).
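In practice, "model-agnostic" means the provider is selected by the model string alone; a minimal sketch (the model names and prompt are illustrative, using the `provider:model` format shown throughout the docs):

```py
from pydantic_ai import Agent

# Swapping providers is just a different model string; the agent code is unchanged.
agent = Agent('openai:gpt-4o', instructions='Be concise.')
# e.g. agent = Agent('anthropic:claude-sonnet-4-5', instructions='Be concise.')

result = agent.run_sync('What is Pydantic AI?')
print(result.output)
```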

docs/agents.md

Lines changed: 1 addition & 1 deletion
@@ -736,7 +736,7 @@ try:
 except UnexpectedModelBehavior as e:
     print(e)  # (1)!
     """
-    Safety settings triggered, body:
+    Content filter 'SAFETY' triggered, body:
     <safety settings details>
     """
 ```
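The hunk above shows only the `except` branch of the doc's example; a self-contained sketch of the surrounding pattern (the model string and prompt here are illustrative, not the original example's Gemini safety-settings setup):

```py
from pydantic_ai import Agent, UnexpectedModelBehavior

agent = Agent('google-gla:gemini-2.5-flash')

try:
    result = agent.run_sync('A prompt that trips the provider content filter.')
    print(result.output)
except UnexpectedModelBehavior as e:
    # Raised when the model response cannot be handled, e.g. a content filter fires.
    print(e)
```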

docs/api/providers.md

Lines changed: 2 additions & 0 deletions
@@ -43,3 +43,5 @@
 ::: pydantic_ai.providers.litellm.LiteLLMProvider

 ::: pydantic_ai.providers.nebius.NebiusProvider
+
+::: pydantic_ai.providers.ovhcloud.OVHcloudProvider
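The hunk only registers the new provider's API reference page; usage is not shown in the diff. Assuming `OVHcloudProvider` follows the same pattern as the other OpenAI-compatible providers (an `api_key` argument, paired with `OpenAIChatModel`), a sketch could look like this — treat the constructor argument and the model name as assumptions, not documented API:

```py
import os

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.ovhcloud import OVHcloudProvider

# Assumed: the provider takes an API key like its OpenAI-compatible siblings.
provider = OVHcloudProvider(api_key=os.getenv('OVHCLOUD_API_KEY', 'mock-api-key'))
# The model name is illustrative; use whatever OVHcloud AI Endpoints exposes.
model = OpenAIChatModel('Meta-Llama-3_3-70B-Instruct', provider=provider)

agent = Agent(model)
result = agent.run_sync('Hello!')
print(result.output)
```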

docs/builtin-tools.md

Lines changed: 145 additions & 1 deletion
@@ -11,6 +11,7 @@ Pydantic AI supports the following built-in tools:
 - **[`ImageGenerationTool`][pydantic_ai.builtin_tools.ImageGenerationTool]**: Enables agents to generate images
 - **[`UrlContextTool`][pydantic_ai.builtin_tools.UrlContextTool]**: Enables agents to pull URL contents into their context
 - **[`MemoryTool`][pydantic_ai.builtin_tools.MemoryTool]**: Enables agents to use memory
+- **[`MCPServerTool`][pydantic_ai.builtin_tools.MCPServerTool]**: Enables agents to use remote MCP servers with communication handled by the model provider

 These tools are passed to the agent via the `builtin_tools` parameter and are executed by the model provider's infrastructure.

@@ -52,7 +53,7 @@ print(result.output)

 _(This example is complete, it can be run "as is")_

-With OpenAI, you must use their responses API to access the web search tool.
+With OpenAI, you must use their Responses API to access the web search tool.

 ```py {title="web_search_openai.py"}
 from pydantic_ai import Agent, WebSearchTool

@@ -419,6 +420,149 @@ print(result.output)

_(This example is complete, it can be run "as is")_

## MCP Server Tool

The [`MCPServerTool`][pydantic_ai.builtin_tools.MCPServerTool] allows your agent to use remote MCP servers with communication handled by the model provider.

This requires the MCP server to live at a public URL the provider can reach and does not support many of the advanced features of Pydantic AI's agent-side [MCP support](mcp/client.md), but can result in optimized context use and caching, and faster performance due to the lack of a round-trip back to Pydantic AI.

### Provider Support

| Provider                | Supported | Notes |
|-------------------------|-----------|-------|
| OpenAI Responses        | ✅ | Full feature support. [Connectors](https://platform.openai.com/docs/guides/tools-connectors-mcp#connectors) can be used by specifying a special `x-openai-connector:<connector_id>` URL. |
| Anthropic               | ✅ | Full feature support |
| Google                  | ❌ | Not supported |
| Groq                    | ❌ | Not supported |
| OpenAI Chat Completions | ❌ | Not supported |
| Bedrock                 | ❌ | Not supported |
| Mistral                 | ❌ | Not supported |
| Cohere                  | ❌ | Not supported |
| HuggingFace             | ❌ | Not supported |

### Usage

```py {title="mcp_server_anthropic.py"}
from pydantic_ai import Agent, MCPServerTool

agent = Agent(
    'anthropic:claude-sonnet-4-5',
    builtin_tools=[
        MCPServerTool(
            id='deepwiki',
            url='https://mcp.deepwiki.com/mcp',  # (1)!
        )
    ]
)

result = agent.run_sync('Tell me about the pydantic/pydantic-ai repo.')
print(result.output)
"""
The pydantic/pydantic-ai repo is a Python agent framework for building Generative AI applications.
"""
```

1. The [DeepWiki MCP server](https://docs.devin.ai/work-with-devin/deepwiki-mcp) does not require authorization.

_(This example is complete, it can be run "as is")_

With OpenAI, you must use their Responses API to access the MCP server tool:

```py {title="mcp_server_openai.py"}
from pydantic_ai import Agent, MCPServerTool

agent = Agent(
    'openai-responses:gpt-5',
    builtin_tools=[
        MCPServerTool(
            id='deepwiki',
            url='https://mcp.deepwiki.com/mcp',  # (1)!
        )
    ]
)

result = agent.run_sync('Tell me about the pydantic/pydantic-ai repo.')
print(result.output)
"""
The pydantic/pydantic-ai repo is a Python agent framework for building Generative AI applications.
"""
```

1. The [DeepWiki MCP server](https://docs.devin.ai/work-with-devin/deepwiki-mcp) does not require authorization.

_(This example is complete, it can be run "as is")_

### Configuration Options

The `MCPServerTool` supports several configuration parameters for custom MCP servers:

```py {title="mcp_server_configured_url.py"}
import os

from pydantic_ai import Agent, MCPServerTool

agent = Agent(
    'openai-responses:gpt-5',
    builtin_tools=[
        MCPServerTool(
            id='github',
            url='https://api.githubcopilot.com/mcp/',
            authorization_token=os.getenv('GITHUB_ACCESS_TOKEN', 'mock-access-token'),  # (1)!
            allowed_tools=['search_repositories', 'list_commits'],
            description='GitHub MCP server',
            headers={'X-Custom-Header': 'custom-value'},
        )
    ]
)

result = agent.run_sync('Tell me about the pydantic/pydantic-ai repo.')
print(result.output)
"""
The pydantic/pydantic-ai repo is a Python agent framework for building Generative AI applications.
"""
```

1. The [GitHub MCP server](https://github.com/github/github-mcp-server) requires an authorization token.

_(This example is complete, it can be run "as is")_

For OpenAI Responses, you can use a [connector](https://platform.openai.com/docs/guides/tools-connectors-mcp#connectors) by specifying a special `x-openai-connector:` URL:

```py {title="mcp_server_configured_connector_id.py"}
import os

from pydantic_ai import Agent, MCPServerTool

agent = Agent(
    'openai-responses:gpt-5',
    builtin_tools=[
        MCPServerTool(
            id='google-calendar',
            url='x-openai-connector:connector_googlecalendar',
            authorization_token=os.getenv('GOOGLE_API_KEY', 'mock-api-key'),  # (1)!
        )
    ]
)

result = agent.run_sync('What do I have on my calendar today?')
print(result.output)
#> You're going to spend all day playing with Pydantic AI.
```

1. OpenAI's Google Calendar connector requires an [authorization token](https://platform.openai.com/docs/guides/tools-connectors-mcp#authorizing-a-connector).

_(This example is complete, it can be run "as is")_

#### Provider Support

| Parameter             | OpenAI | Anthropic |
|-----------------------|--------|-----------|
| `authorization_token` |        |           |
| `allowed_tools`       |        |           |
| `description`         |        |           |
| `headers`             |        |           |

## API Reference

For complete API documentation, see the [API Reference](api/builtin_tools.md).
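The hunks above show each built-in tool on its own. As a quick illustration that `builtin_tools` accepts a list, here is a hedged sketch combining two of them on one agent (the combination, model choice, and prompt are illustrative, not taken from the diff):

```py
from pydantic_ai import Agent, MCPServerTool, WebSearchTool

# Illustrative: both tools run on the provider's infrastructure, not locally.
agent = Agent(
    'anthropic:claude-sonnet-4-5',
    builtin_tools=[
        WebSearchTool(),
        MCPServerTool(id='deepwiki', url='https://mcp.deepwiki.com/mcp'),
    ],
)

result = agent.run_sync('Summarize recent activity in the pydantic/pydantic-ai repo.')
print(result.output)
```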

docs/durable_execution/prefect.md

Lines changed: 12 additions & 7 deletions
@@ -255,20 +255,23 @@ from prefect import flow
 from pydantic_ai import Agent
 from pydantic_ai.durable_exec.prefect import PrefectAgent

-agent = Agent(
-    'openai:gpt-4o',
-    name='daily_report_agent',
-    instructions='Generate a daily summary report.',
-)
-
-prefect_agent = PrefectAgent(agent)

 @flow
 async def daily_report_flow(user_prompt: str):
     """Generate a daily report using the agent."""
+    agent = Agent(  # (1)!
+        'openai:gpt-4o',
+        name='daily_report_agent',
+        instructions='Generate a daily summary report.',
+    )
+
+    prefect_agent = PrefectAgent(agent)
+
     result = await prefect_agent.run(user_prompt)
     return result.output

+
+
 # Serve the flow with a daily schedule
 if __name__ == '__main__':
     daily_report_flow.serve(
@@ -279,6 +282,8 @@ if __name__ == '__main__':
     )
 ```

+1. Each flow run executes in an isolated process, and all inputs and dependencies must be serializable. Because Agent instances cannot be serialized, instantiate the agent inside the flow rather than at the module level.
+
 The `serve()` method accepts scheduling options:

 - **`cron`**: Cron schedule string (e.g., `'0 9 * * *'` for daily at 9am)
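The hunk cuts off partway through the scheduling-options list. As a usage sketch of the `cron` option (the flow body, deployment name, and parameters below are illustrative; `Flow.serve` is Prefect's standard serving API):

```py
from prefect import flow


@flow
async def daily_report_flow(user_prompt: str):
    ...  # agent logic as in the example above


if __name__ == '__main__':
    # Illustrative: serve the flow as a deployment that runs daily at 9am.
    daily_report_flow.serve(
        name='daily-report',
        cron='0 9 * * *',
        parameters={'user_prompt': 'Summarize yesterday.'},
    )
```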

0 commit comments
