**docs/agents.md** (+4 -3)

@@ -904,14 +904,15 @@ You should use:
In general, we recommend using `instructions` instead of `system_prompt` unless you have a specific reason to use `system_prompt`.

- Instructions, like system prompts, fall into two categories:
+ Instructions, like system prompts, can be specified at different times:

1. **Static instructions**: These are known when writing the code and can be defined via the `instructions` parameter of the [`Agent` constructor][pydantic_ai.Agent.__init__].
2. **Dynamic instructions**: These rely on context that is only available at runtime and should be defined using functions decorated with [`@agent.instructions`][pydantic_ai.Agent.instructions]. Unlike dynamic system prompts, which may be reused when `message_history` is present, dynamic instructions are always reevaluated.
+ 3. **Runtime instructions**: These are additional instructions for a specific run that can be passed to one of the [run methods](#running-agents) using the `instructions` argument.

- Both static and dynamic instructions can be added to a single agent, and they are appended in the order they are defined at runtime.
+ All three types of instructions can be added to a single agent, and they are appended in the order they are defined at runtime.

- Here's an example using both types of instructions:
+ Here's an example using a static instruction as well as dynamic instructions:
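
For reference, a minimal sketch of how the three kinds of instructions can be combined. The model name and instruction texts are illustrative, and passing `instructions` to `run_sync` assumes the new runtime-instructions argument described above:

```python
from datetime import date

from pydantic_ai import Agent, RunContext

agent = Agent(
    'openai:gpt-4o',
    deps_type=str,
    # Static instructions, known when the code is written.
    instructions="Use the customer's name while replying to them.",
)


# Dynamic instructions, re-evaluated on every run.
@agent.instructions
def add_the_users_name(ctx: RunContext[str]) -> str:
    return f"The user's name is {ctx.deps}."


@agent.instructions
def add_the_date() -> str:
    return f'The date is {date.today()}.'


# Runtime instructions, passed only for this specific run.
result = agent.run_sync(
    'What is the date?',
    deps='Frank',
    instructions='Reply in one short sentence.',
)
print(result.output)
```
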

**docs/builtin-tools.md** (+3 -3)

@@ -31,7 +31,7 @@ making it ideal for queries that require up-to-date data.
|----------|-----------|-------|
| OpenAI Responses | ✅ | Full feature support. To include search results on the [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] that's available via [`ModelResponse.builtin_tool_calls`][pydantic_ai.messages.ModelResponse.builtin_tool_calls], enable the [`OpenAIResponsesModelSettings.openai_include_web_search_sources`][pydantic_ai.models.openai.OpenAIResponsesModelSettings.openai_include_web_search_sources] [model setting](agents.md#model-run-settings). |
| Anthropic | ✅ | Full feature support |
- | Google | ✅ | No parameter support. No [`BuiltinToolCallPart`][pydantic_ai.messages.BuiltinToolCallPart] or [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] is generated when streaming. Using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
+ | Google | ✅ | No parameter support. No [`BuiltinToolCallPart`][pydantic_ai.messages.BuiltinToolCallPart] or [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] is generated when streaming. Using built-in tools and function tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
| Groq | ✅ | Limited parameter support. To use web search capabilities with Groq, you need to use the [compound models](https://console.groq.com/docs/compound). |
| OpenAI Chat Completions | ❌ | Not supported |
| Bedrock | ❌ | Not supported |
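
As a rough sketch of the OpenAI Responses row above, enabling web search sources on the returned part might look like the following (the model name is illustrative):

```python
from pydantic_ai import Agent, WebSearchTool
from pydantic_ai.models.openai import OpenAIResponsesModel, OpenAIResponsesModelSettings

agent = Agent(
    OpenAIResponsesModel('gpt-4o'),
    builtin_tools=[WebSearchTool()],
    # Include the search results on the BuiltinToolReturnPart of the response.
    model_settings=OpenAIResponsesModelSettings(openai_include_web_search_sources=True),
)

result = agent.run_sync('What was the top news story today?')
print(result.output)
```
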
@@ -123,7 +123,7 @@ in a secure environment, making it perfect for computational tasks, data analysi
| Provider | Supported | Notes |
|----------|-----------|-------|
| OpenAI | ✅ | To include code execution output on the [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] that's available via [`ModelResponse.builtin_tool_calls`][pydantic_ai.messages.ModelResponse.builtin_tool_calls], enable the [`OpenAIResponsesModelSettings.openai_include_code_execution_outputs`][pydantic_ai.models.openai.OpenAIResponsesModelSettings.openai_include_code_execution_outputs] [model setting](agents.md#model-run-settings). If the code execution generated images, like charts, they will be available on [`ModelResponse.images`][pydantic_ai.messages.ModelResponse.images] as [`BinaryImage`][pydantic_ai.messages.BinaryImage] objects. The generated image can also be used as [image output](output.md#image-output) for the agent run. |
- | Google | ✅ | Using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
+ | Google | ✅ | Using built-in tools and function tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
| Anthropic | ✅ ||
| Groq | ❌ ||
| Bedrock | ❌ ||
@@ -315,7 +315,7 @@ allowing it to pull up-to-date information from the web.
| Provider | Supported | Notes |
|----------|-----------|-------|
- | Google | ✅ | No [`BuiltinToolCallPart`][pydantic_ai.messages.BuiltinToolCallPart] or [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] is currently generated; please submit an issue if you need this. Using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
+ | Google | ✅ | No [`BuiltinToolCallPart`][pydantic_ai.messages.BuiltinToolCallPart] or [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] is currently generated; please submit an issue if you need this. Using built-in tools and function tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |

**docs/durable_execution/temporal.md** (+1 -1)

@@ -172,7 +172,7 @@ As workflows and activities run in separate processes, any values passed between
To account for these limitations, tool functions and the [event stream handler](#streaming) running inside activities receive a limited version of the agent's [`RunContext`][pydantic_ai.tools.RunContext], and it's your responsibility to make sure that the [dependencies](../dependencies.md) object provided to [`TemporalAgent.run()`][pydantic_ai.durable_exec.temporal.TemporalAgent.run] can be serialized using Pydantic.

- Specifically, only the `deps`, `retries`, `tool_call_id`, `tool_name`, `tool_call_approved`, `retry`, `max_retries` and `run_step` fields are available by default, and trying to access `model`, `usage`, `prompt`, `messages`, or `tracer` will raise an error.
+ Specifically, only the `deps`, `retries`, `tool_call_id`, `tool_name`, `tool_call_approved`, `retry`, `max_retries`, `run_step` and `partial_output` fields are available by default, and trying to access `model`, `usage`, `prompt`, `messages`, or `tracer` will raise an error.

If you need one or more of these attributes to be available inside activities, you can create a [`TemporalRunContext`][pydantic_ai.durable_exec.temporal.TemporalRunContext] subclass with custom `serialize_run_context` and `deserialize_run_context` class methods and pass it to [`TemporalAgent`][pydantic_ai.durable_exec.temporal.TemporalAgent] as `run_context_type`.
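
A rough sketch of that customization follows; the exact classmethod signatures are an assumption here, so check `TemporalRunContext` itself before relying on them:

```python
from typing import Any

from pydantic_ai import Agent
from pydantic_ai.durable_exec.temporal import TemporalAgent, TemporalRunContext
from pydantic_ai.tools import RunContext

agent = Agent('openai:gpt-4o', name='my_agent')


class RunContextWithPrompt(TemporalRunContext):
    """Also carry `prompt` across the activity boundary (signature assumed)."""

    @classmethod
    def serialize_run_context(cls, ctx: RunContext) -> dict[str, Any]:
        # Reuse the default serialization and add the extra field.
        return {**super().serialize_run_context(ctx), 'prompt': ctx.prompt}


temporal_agent = TemporalAgent(agent, run_context_type=RunContextWithPrompt)
```
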

**docs/models/overview.md** (+6 -0)

@@ -86,6 +86,12 @@ You can use [`FallbackModel`][pydantic_ai.models.fallback.FallbackModel] to atte
in sequence until one successfully returns a result. Under the hood, Pydantic AI automatically switches
from one model to the next if the current model returns a 4xx or 5xx status code.

+ !!! note
+     The provider SDKs on which Models are based (like OpenAI, Anthropic, etc.) often have built-in retry logic that can delay the `FallbackModel` from activating.
+
+     When using `FallbackModel`, it's recommended to disable provider SDK retries to ensure immediate fallback, for example by setting `max_retries=0` on a [custom OpenAI client](openai.md#custom-openai-client).
+
In the following example, the agent first makes a request to the OpenAI model (which fails due to an invalid API key),
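
The example that context line refers to isn't part of this diff. As a hedged sketch of the note's recommendation (class and model names follow current Pydantic AI conventions and are illustrative):

```python
from openai import AsyncOpenAI

from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.models.fallback import FallbackModel
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider

# Disable the OpenAI SDK's own retries so FallbackModel can switch models immediately.
client = AsyncOpenAI(max_retries=0)
openai_model = OpenAIChatModel('gpt-4o', provider=OpenAIProvider(openai_client=client))
fallback_model = FallbackModel(openai_model, AnthropicModel('claude-sonnet-4-0'))

agent = Agent(fallback_model)
result = agent.run_sync('What is the capital of France?')
print(result.output)
```
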

**docs/output.md** (+34 -0)

@@ -470,6 +470,40 @@ print(result.output)
_(This example is complete, it can be run "as is")_

+ #### Handling partial output in output validators {#partial-output}
+
+ You can use the `partial_output` field on `RunContext` to handle validation differently for partial outputs during streaming (e.g. skip validation altogether).
+
+     async with agent.run_stream('Write a long story about a cat') as result:
+         async for message in result.stream_text():
+             print(message)
+             #> Once upon a
+             #> Once upon a time, there was
+             #> Once upon a time, there was a curious cat
+             #> Once upon a time, there was a curious cat named Whiskers who
+             #> Once upon a time, there was a curious cat named Whiskers who loved to explore
+             #> Once upon a time, there was a curious cat named Whiskers who loved to explore the world around
+             #> Once upon a time, there was a curious cat named Whiskers who loved to explore the world around him...
+
+ _(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_
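
The top of the new example (the agent and the output validator that checks `ctx.partial_output`) isn't captured above. A minimal sketch of what it might look like, with an illustrative model name and validation rule:

```python
from pydantic_ai import Agent, ModelRetry, RunContext

agent = Agent('openai:gpt-4o')


@agent.output_validator
def validate_story(ctx: RunContext[None], output: str) -> str:
    if ctx.partial_output:
        # The output is still streaming in, so skip validation for now.
        return output
    if 'cat' not in output.lower():
        raise ModelRetry('The story must mention a cat.')
    return output


async def main():
    async with agent.run_stream('Write a long story about a cat') as result:
        async for message in result.stream_text():
            print(message)
```
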
## Image output
Some models can generate images as part of their response, for example those that support the [Image Generation built-in tool](builtin-tools.md#image-generation-tool) and OpenAI models using the [Code Execution built-in tool](builtin-tools.md#code-execution-tool) when told to generate a chart.

Although Bedrock Converse doesn't provide a unified API to enable thinking, you can still use the [`BedrockModelSettings.bedrock_additional_model_requests_fields`][pydantic_ai.models.bedrock.BedrockModelSettings.bedrock_additional_model_requests_fields] [model setting](agents.md#model-run-settings) to pass provider-specific configuration: