
Commit a564ac6

Merge remote-tracking branch 'origin/main' into investigate-3207-multi-agent-instructions

# Conflicts:
#	pydantic_ai_slim/pydantic_ai/_agent_graph.py

2 parents: 5f87b58 + f5a5b73


55 files changed: +1643 −535 lines

README.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
    [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:
 
 2. **Model-agnostic**:
-   Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).
+   Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius, OVHcloud. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).
 
 3. **Seamless Observability**:
    Tightly [integrates](https://ai.pydantic.dev/logfire) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](https://ai.pydantic.dev/logfire#alternative-observability-backends).

docs/api/providers.md

Lines changed: 2 additions & 0 deletions
@@ -43,3 +43,5 @@
 ::: pydantic_ai.providers.litellm.LiteLLMProvider
 
 ::: pydantic_ai.providers.nebius.NebiusProvider
+
+::: pydantic_ai.providers.ovhcloud.OVHcloudProvider
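The `:::` lines in this file are mkdocstrings identifiers: each one tells the docs build to render the API reference for the named object at that point in the page. A minimal sketch of the directive syntax (the `options` block shown is illustrative, not what this repo necessarily uses):

```markdown
::: pydantic_ai.providers.ovhcloud.OVHcloudProvider
    options:
      members: true
      show_source: false
```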

docs/durable_execution/prefect.md

Lines changed: 12 additions & 7 deletions
@@ -255,20 +255,23 @@ from prefect import flow
 from pydantic_ai import Agent
 from pydantic_ai.durable_exec.prefect import PrefectAgent
 
-agent = Agent(
-    'openai:gpt-4o',
-    name='daily_report_agent',
-    instructions='Generate a daily summary report.',
-)
-
-prefect_agent = PrefectAgent(agent)
 
 @flow
 async def daily_report_flow(user_prompt: str):
     """Generate a daily report using the agent."""
+    agent = Agent(  # (1)!
+        'openai:gpt-4o',
+        name='daily_report_agent',
+        instructions='Generate a daily summary report.',
+    )
+
+    prefect_agent = PrefectAgent(agent)
+
     result = await prefect_agent.run(user_prompt)
     return result.output
 
+
+
 # Serve the flow with a daily schedule
 if __name__ == '__main__':
     daily_report_flow.serve(
@@ -279,6 +282,8 @@ if __name__ == '__main__':
     )
 ```
 
+1. Each flow run executes in an isolated process, and all inputs and dependencies must be serializable. Because `Agent` instances cannot be serialized, instantiate the agent inside the flow rather than at the module level.
+
 The `serve()` method accepts scheduling options:
 
 - **`cron`**: Cron schedule string (e.g., `'0 9 * * *'` for daily at 9am)
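The serialization constraint from note (1) above is easy to demonstrate: objects that hold OS-level resource handles, as an agent's HTTP clients do, cannot be pickled for transfer to a separate flow-run process. A minimal sketch (`AgentLike` is a hypothetical stand-in, not the real `Agent`):

```python
import pickle
import threading

class AgentLike:
    """Hypothetical stand-in: like a real agent, it holds a handle
    to an OS-level resource that pickle cannot serialize."""
    def __init__(self):
        self._lock = threading.Lock()  # e.g. part of an HTTP client pool

serializable = True
try:
    pickle.dumps(AgentLike())
except TypeError:
    serializable = False

print(serializable)
```

Because the module-level instance cannot cross the process boundary, the diff moves the `Agent(...)` construction inside `daily_report_flow`.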

docs/index.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
    [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:
 
 2. **Model-agnostic**:
-   Supports virtually every [model](models/overview.md) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](models/overview.md#custom-models).
+   Supports virtually every [model](models/overview.md) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius, OVHcloud. If your favorite model or provider is not listed, you can easily implement a [custom model](models/overview.md#custom-models).
 
 3. **Seamless Observability**:
    Tightly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](logfire.md#alternative-observability-backends).

docs/mcp/client.md

Lines changed: 10 additions & 20 deletions
@@ -58,15 +58,13 @@ server = MCPServerStreamableHTTP('http://localhost:8000/mcp')  # (1)!
 agent = Agent('openai:gpt-4o', toolsets=[server])  # (2)!
 
 async def main():
-    async with agent:  # (3)!
-        result = await agent.run('What is 7 plus 5?')
+    result = await agent.run('What is 7 plus 5?')
     print(result.output)
     #> The answer is 12.
 ```
 
 1. Define the MCP server with the URL used to connect.
 2. Create an agent with the MCP server attached.
-3. Create a client session to connect to the server.
 
 _(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_

@@ -122,15 +120,13 @@ agent = Agent('openai:gpt-4o', toolsets=[server])  # (2)!
 
 
 async def main():
-    async with agent:  # (3)!
-        result = await agent.run('What is 7 plus 5?')
+    result = await agent.run('What is 7 plus 5?')
     print(result.output)
     #> The answer is 12.
 ```
 
 1. Define the MCP server with the URL used to connect.
 2. Create an agent with the MCP server attached.
-3. Create a client session to connect to the server.
 
 _(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_

@@ -151,8 +147,7 @@ agent = Agent('openai:gpt-4o', toolsets=[server])
 
 
 async def main():
-    async with agent:
-        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
+    result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
     print(result.output)
     #> There are 9,208 days between January 1, 2000, and March 18, 2025.
 ```

@@ -205,8 +200,7 @@ servers = load_mcp_servers('mcp_config.json')
 agent = Agent('openai:gpt-5', toolsets=servers)
 
 async def main():
-    async with agent:
-        result = await agent.run('What is 7 plus 5?')
+    result = await agent.run('What is 7 plus 5?')
     print(result.output)
 ```

@@ -247,8 +241,7 @@ agent = Agent(
 
 
 async def main():
-    async with agent:
-        result = await agent.run('Echo with deps set to 42', deps=42)
+    result = await agent.run('Echo with deps set to 42', deps=42)
     print(result.output)
     #> {"echo_deps":{"echo":"This is an echo message","deps":42}}
 ```

@@ -356,8 +349,7 @@ server = MCPServerSSE(
 agent = Agent('openai:gpt-4o', toolsets=[server])
 
 async def main():
-    async with agent:
-        result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
+    result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
     print(result.output)
     #> There are 9,208 days between January 1, 2000, and March 18, 2025.
 ```

@@ -454,9 +446,8 @@ agent = Agent('openai:gpt-4o', toolsets=[server])
 
 
 async def main():
-    async with agent:
-        agent.set_mcp_sampling_model()
-        result = await agent.run('Create an image of a robot in a punk style.')
+    agent.set_mcp_sampling_model()
+    result = await agent.run('Create an image of a robot in a punk style.')
     print(result.output)
     #> Image file written to robot_punk.svg.
 ```

@@ -598,9 +589,8 @@ agent = Agent('openai:gpt-4o', toolsets=[restaurant_server])
 
 async def main():
     """Run the agent to book a restaurant table."""
-    async with agent:
-        result = await agent.run('Book me a table')
-        print(f'\nResult: {result.output}')
+    result = await agent.run('Book me a table')
+    print(f'\nResult: {result.output}')
 
 
 if __name__ == '__main__':
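The pattern removed throughout this file, wrapping every call in `async with agent:`, becomes unnecessary when `run()` enters and exits the toolset connections itself. A minimal sketch of that design (all names here are hypothetical illustrations, not the real Pydantic AI API):

```python
import asyncio
from contextlib import AsyncExitStack

class FakeToolset:
    """Hypothetical stand-in for an MCP server connection."""
    def __init__(self):
        self.connected = False

    async def __aenter__(self):
        self.connected = True
        return self

    async def __aexit__(self, *exc):
        self.connected = False

class AgentSketch:
    """Illustrative only: run() opens and closes toolset connections
    on demand, so callers need no explicit `async with agent:`."""
    def __init__(self, toolsets):
        self.toolsets = toolsets

    async def run(self, prompt):
        async with AsyncExitStack() as stack:
            # connect every toolset for the duration of this run only
            for ts in self.toolsets:
                await stack.enter_async_context(ts)
            # ... the model call would happen here, with tools connected ...
            return f'ran: {prompt!r} with {sum(ts.connected for ts in self.toolsets)} toolset(s)'

toolset = FakeToolset()
result = asyncio.run(AgentSketch([toolset]).run('What is 7 plus 5?'))
print(result)
print(toolset.connected)  # connection is closed again after run()
```

An explicit `async with agent:` can still be useful to hold connections open across several `run()` calls, but a single call no longer requires it.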

docs/models/cohere.md

Lines changed: 4 additions & 4 deletions
@@ -27,7 +27,7 @@ You can then use `CohereModel` by name:
 ```python
 from pydantic_ai import Agent
 
-agent = Agent('cohere:command')
+agent = Agent('cohere:command-r7b-12-2024')
 ...
 ```

@@ -37,7 +37,7 @@ Or initialise the model directly with just the model name:
 from pydantic_ai import Agent
 from pydantic_ai.models.cohere import CohereModel
 
-model = CohereModel('command')
+model = CohereModel('command-r7b-12-2024')
 agent = Agent(model)
 ...
 ```

@@ -51,7 +51,7 @@ from pydantic_ai import Agent
 from pydantic_ai.models.cohere import CohereModel
 from pydantic_ai.providers.cohere import CohereProvider
 
-model = CohereModel('command', provider=CohereProvider(api_key='your-api-key'))
+model = CohereModel('command-r7b-12-2024', provider=CohereProvider(api_key='your-api-key'))
 agent = Agent(model)
 ...
 ```

@@ -67,7 +67,7 @@ from pydantic_ai.providers.cohere import CohereProvider
 
 custom_http_client = AsyncClient(timeout=30)
 model = CohereModel(
-    'command',
+    'command-r7b-12-2024',
     provider=CohereProvider(api_key='your-api-key', http_client=custom_http_client),
 )
 agent = Agent(model)

0 commit comments