Commit ae410f1

coverage

1 parent 8df053c commit ae410f1

File tree

10 files changed: +92 −106 lines changed

docs/builtin-tools.md (3 additions, 3 deletions)

```diff
@@ -31,7 +31,7 @@ making it ideal for queries that require up-to-date data.
 |----------|-----------|-------|
 | OpenAI Responses || Full feature support. To include search results on the [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] that's available via [`ModelResponse.builtin_tool_calls`][pydantic_ai.messages.ModelResponse.builtin_tool_calls], enable the [`OpenAIResponsesModelSettings.openai_include_web_search_sources`][pydantic_ai.models.openai.OpenAIResponsesModelSettings.openai_include_web_search_sources] [model setting](agents.md#model-run-settings). |
 | Anthropic || Full feature support |
-| Google || No parameter support. No [`BuiltinToolCallPart`][pydantic_ai.messages.BuiltinToolCallPart] or [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] is generated when streaming. Using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
+| Google || No parameter support. No [`BuiltinToolCallPart`][pydantic_ai.messages.BuiltinToolCallPart] or [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] is generated when streaming. Using built-in tools and function tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
 | Groq || Limited parameter support. To use web search capabilities with Groq, you need to use the [compound models](https://console.groq.com/docs/compound). |
 | OpenAI Chat Completions || Not supported |
 | Bedrock || Not supported |
@@ -123,7 +123,7 @@ in a secure environment, making it perfect for computational tasks, data analysi
 | Provider | Supported | Notes |
 |----------|-----------|-------|
 | OpenAI || To include code execution output on the [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] that's available via [`ModelResponse.builtin_tool_calls`][pydantic_ai.messages.ModelResponse.builtin_tool_calls], enable the [`OpenAIResponsesModelSettings.openai_include_code_execution_outputs`][pydantic_ai.models.openai.OpenAIResponsesModelSettings.openai_include_code_execution_outputs] [model setting](agents.md#model-run-settings). If the code execution generated images, like charts, they will be available on [`ModelResponse.images`][pydantic_ai.messages.ModelResponse.images] as [`BinaryImage`][pydantic_ai.messages.BinaryImage] objects. The generated image can also be used as [image output](output.md#image-output) for the agent run. |
-| Google || Using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
+| Google || Using built-in tools and function tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
 | Anthropic || |
 | Groq || |
 | Bedrock || |
@@ -315,7 +315,7 @@ allowing it to pull up-to-date information from the web.
 
 | Provider | Supported | Notes |
 |----------|-----------|-------|
-| Google || No [`BuiltinToolCallPart`][pydantic_ai.messages.BuiltinToolCallPart] or [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] is currently generated; please submit an issue if you need this. Using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
+| Google || No [`BuiltinToolCallPart`][pydantic_ai.messages.BuiltinToolCallPart] or [`BuiltinToolReturnPart`][pydantic_ai.messages.BuiltinToolReturnPart] is currently generated; please submit an issue if you need this. Using built-in tools and function tools (including [output tools](output.md#tool-output)) at the same time is not supported; to use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
 | OpenAI || |
 | Anthropic || |
 | Groq || |
```

pydantic_ai_slim/pydantic_ai/_agent_graph.py (58 additions, 54 deletions)

```diff
@@ -567,63 +567,67 @@ async def _run_stream(  # noqa: C901
         output_schema = ctx.deps.output_schema
 
         async def _run_stream() -> AsyncIterator[_messages.HandleResponseEvent]:  # noqa: C901
-            try:
-                if not self.model_response.parts:
-                    # we got an empty response.
-                    # this sometimes happens with anthropic (and perhaps other models)
-                    # when the model has already returned text along side tool calls
-                    if text_processor := output_schema.text_processor:
-                        # in this scenario, if text responses are allowed, we return text from the most recent model
-                        # response, if any
-                        for message in reversed(ctx.state.message_history):
-                            if isinstance(message, _messages.ModelResponse):
-                                text = ''
-                                for part in message.parts:
-                                    if isinstance(part, _messages.TextPart):
-                                        text += part.content
-                                    elif isinstance(part, _messages.BuiltinToolCallPart):
-                                        # Text parts before a built-in tool call are essentially thoughts,
-                                        # not part of the final result output, so we reset the accumulated text
-                                        text = ''  # pragma: no cover
-                                if text:
+            if not self.model_response.parts:
+                # we got an empty response.
+                # this sometimes happens with anthropic (and perhaps other models)
+                # when the model has already returned text along side tool calls
+                if text_processor := output_schema.text_processor:  # pragma: no branch
+                    # in this scenario, if text responses are allowed, we return text from the most recent model
+                    # response, if any
+                    for message in reversed(ctx.state.message_history):
+                        if isinstance(message, _messages.ModelResponse):
+                            text = ''
+                            for part in message.parts:
+                                if isinstance(part, _messages.TextPart):
+                                    text += part.content
+                                elif isinstance(part, _messages.BuiltinToolCallPart):
+                                    # Text parts before a built-in tool call are essentially thoughts,
+                                    # not part of the final result output, so we reset the accumulated text
+                                    text = ''  # pragma: no cover
+                            if text:
+                                try:
                                     self._next_node = await self._handle_text_response(ctx, text, text_processor)
                                     return
+                                except ToolRetryError:
+                                    # If the text from the preview response was invalid, ignore it.
+                                    pass
+
+                # Go back to the model request node with an empty request, which means we'll essentially
+                # resubmit the most recent request that resulted in an empty response,
+                # as the empty response and request will not create any items in the API payload,
+                # in the hope the model will return a non-empty response this time.
+                ctx.state.increment_retries(ctx.deps.max_result_retries, model_settings=ctx.deps.model_settings)
+                run_context = build_run_context(ctx)
+                instructions = await ctx.deps.get_instructions(run_context)
+                self._next_node = ModelRequestNode[DepsT, NodeRunEndT](
+                    _messages.ModelRequest(parts=[], instructions=instructions)
+                )
+                return
+
+            text = ''
+            tool_calls: list[_messages.ToolCallPart] = []
+            files: list[_messages.BinaryContent] = []
+
+            for part in self.model_response.parts:
+                if isinstance(part, _messages.TextPart):
+                    text += part.content
+                elif isinstance(part, _messages.ToolCallPart):
+                    tool_calls.append(part)
+                elif isinstance(part, _messages.FilePart):
+                    files.append(part.content)
+                elif isinstance(part, _messages.BuiltinToolCallPart):
+                    # Text parts before a built-in tool call are essentially thoughts,
+                    # not part of the final result output, so we reset the accumulated text
+                    text = ''
+                    yield _messages.BuiltinToolCallEvent(part)  # pyright: ignore[reportDeprecated]
+                elif isinstance(part, _messages.BuiltinToolReturnPart):
+                    yield _messages.BuiltinToolResultEvent(part)  # pyright: ignore[reportDeprecated]
+                elif isinstance(part, _messages.ThinkingPart):
+                    pass
+                else:
+                    assert_never(part)
 
-                # Go back to the model request node with an empty request, which means we'll essentially
-                # resubmit the most recent request that resulted in an empty response,
-                # as the empty response and request will not create any items in the API payload,
-                # in the hope the model will return a non-empty response this time.
-                ctx.state.increment_retries(ctx.deps.max_result_retries, model_settings=ctx.deps.model_settings)
-                run_context = build_run_context(ctx)
-                instructions = await ctx.deps.get_instructions(run_context)
-                self._next_node = ModelRequestNode[DepsT, NodeRunEndT](
-                    _messages.ModelRequest(parts=[], instructions=instructions)
-                )
-                return
-
-                text = ''
-                tool_calls: list[_messages.ToolCallPart] = []
-                files: list[_messages.BinaryContent] = []
-
-                for part in self.model_response.parts:
-                    if isinstance(part, _messages.TextPart):
-                        text += part.content
-                    elif isinstance(part, _messages.ToolCallPart):
-                        tool_calls.append(part)
-                    elif isinstance(part, _messages.FilePart):
-                        files.append(part.content)
-                    elif isinstance(part, _messages.BuiltinToolCallPart):
-                        # Text parts before a built-in tool call are essentially thoughts,
-                        # not part of the final result output, so we reset the accumulated text
-                        text = ''
-                        yield _messages.BuiltinToolCallEvent(part)  # pyright: ignore[reportDeprecated]
-                    elif isinstance(part, _messages.BuiltinToolReturnPart):
-                        yield _messages.BuiltinToolResultEvent(part)  # pyright: ignore[reportDeprecated]
-                    elif isinstance(part, _messages.ThinkingPart):
-                        pass
-                    else:
-                        assert_never(part)
-
+            try:
                 # At the moment, we prioritize at least executing tool calls if they are present.
                 # In the future, we'd consider making this configurable at the agent or run level.
                 # This accounts for cases like anthropic returns that might contain a text response
```

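The restructured stream handler above keeps the rule that text parts preceding a built-in tool call are treated as thoughts and discarded, so only text after the last built-in tool call counts toward the output. A standalone sketch of that accumulation rule, using simplified stand-in classes rather than the real pydantic_ai message types:

```python
from dataclasses import dataclass


@dataclass
class TextPart:
    # Stand-in for _messages.TextPart
    content: str


@dataclass
class BuiltinToolCallPart:
    # Stand-in for _messages.BuiltinToolCallPart
    tool_name: str


def accumulate_output_text(parts: list) -> str:
    """Concatenate text parts, resetting at each built-in tool call."""
    text = ''
    for part in parts:
        if isinstance(part, TextPart):
            text += part.content
        elif isinstance(part, BuiltinToolCallPart):
            # Text before a built-in tool call is essentially a thought,
            # not part of the final output, so reset the accumulator.
            text = ''
    return text


parts = [TextPart('let me search...'), BuiltinToolCallPart('web_search'), TextPart('Paris')]
print(accumulate_output_text(parts))  # -> Paris
```

The same reset also explains why the diff applies it in two places: once when salvaging text from an earlier response after an empty one, and once when processing the current response's parts.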
pydantic_ai_slim/pydantic_ai/_output.py (2 additions, 32 deletions)

```diff
@@ -35,7 +35,7 @@
 from .toolsets.abstract import AbstractToolset, ToolsetTool
 
 if TYPE_CHECKING:
-    from .profiles import ModelProfile
+    pass
 
 T = TypeVar('T')
 """An invariant TypeVar."""
@@ -384,12 +384,6 @@ def _build_processor(
 
         return UnionOutputProcessor(outputs=outputs, strict=strict, name=name, description=description)
 
-    def raise_if_unsupported(self, profile: ModelProfile) -> None:
-        """Raise an error if the mode is not supported by this model."""
-        # TODO (DouweM): Remove method?
-        if self.allows_image and not profile.supports_image_output:
-            raise UserError('Image output is not supported by this model.')
-
 
 @dataclass(init=False)
 class OutputSchemaWithoutMode(BaseOutputSchema[OutputDataT]):
@@ -439,10 +433,6 @@ def __init__(
     def mode(self) -> OutputMode | None:
         return 'text'
 
-    def raise_if_unsupported(self, profile: ModelProfile) -> None:
-        """Raise an error if the mode is not supported by this model."""
-        super().raise_if_unsupported(profile)
-
 
 class ImageOutputSchema(OutputSchema[OutputDataT]):
     def __init__(self, *, allows_deferred_tools: bool):
@@ -452,11 +442,6 @@ def __init__(self, *, allows_deferred_tools: bool):
     def mode(self) -> OutputMode | None:
         return 'image'
 
-    def raise_if_unsupported(self, profile: ModelProfile) -> None:
-        """Raise an error if the mode is not supported by this model."""
-        # This already raises if image output is not supported by this model.
-        super().raise_if_unsupported(profile)
-
 
 @dataclass(init=False)
 class StructuredTextOutputSchema(OutputSchema[OutputDataT], ABC):
@@ -479,11 +464,6 @@ class NativeOutputSchema(StructuredTextOutputSchema[OutputDataT]):
     def mode(self) -> OutputMode | None:
         return 'native'
 
-    def raise_if_unsupported(self, profile: ModelProfile) -> None:
-        """Raise an error if the mode is not supported by this model."""
-        if not profile.supports_json_schema_output:
-            raise UserError('Native structured output is not supported by this model.')
-
 
 @dataclass(init=False)
 class PromptedOutputSchema(StructuredTextOutputSchema[OutputDataT]):
@@ -522,11 +502,7 @@ def build_instructions(cls, template: str, object_def: OutputObjectDefinition) -
 
         return template.format(schema=json.dumps(schema))
 
-    def raise_if_unsupported(self, profile: ModelProfile) -> None:
-        """Raise an error if the mode is not supported by this model."""
-        super().raise_if_unsupported(profile)
-
-    def instructions(self, default_template: str) -> str:
+    def instructions(self, default_template: str) -> str:  # pragma: no cover
         """Get instructions to tell model to output JSON matching the schema."""
         template = self.template or default_template
         object_def = self.object_def
@@ -555,12 +531,6 @@ def __init__(
     def mode(self) -> OutputMode | None:
         return 'tool'
 
-    def raise_if_unsupported(self, profile: ModelProfile) -> None:
-        """Raise an error if the mode is not supported by this model."""
-        super().raise_if_unsupported(profile)
-        if not profile.supports_tools:
-            raise UserError('Tool output is not supported by this model.')
-
 
 class BaseOutputProcessor(ABC, Generic[OutputDataT]):
     @abstractmethod
```

pydantic_ai_slim/pydantic_ai/models/__init__.py (5 additions, 4 deletions)

```diff
@@ -437,10 +437,11 @@ def prepare_request(
 
         if model_request_parameters.output_mode in ('native', 'prompted'):
             if not model_request_parameters.output_object:
-                raise UserError('An `output_object` is required when using `NativeOutput` or `PromptedOutput`.')
+                raise UserError(  # pragma: no cover
+                    'An `output_object` is required when using `NativeOutput` or `PromptedOutput`.'
+                )
 
         if model_request_parameters.output_mode == 'native' and not self.profile.supports_json_schema_output:
-            # TODO (DouweM): Call `NativeOutputSchema.raise_if_unsupported(self.profile)`?
             raise UserError('Native structured output is not supported by this model.')
 
         if model_request_parameters.output_tools:
@@ -453,7 +454,7 @@ def prepare_request(
         else:
             if model_request_parameters.output_mode == 'tool':
                 if not model_request_parameters.output_tools and not model_request_parameters.function_tools:
-                    raise UserError('An `output_tools` list is required when using `ToolOutput`.')
+                    raise UserError('An `output_tools` list is required when using `ToolOutput`.')  # pragma: no cover
 
                 if not self.profile.supports_tools:
                     raise UserError('Tool output is not supported by this model.')
@@ -556,7 +557,7 @@ def _get_instructions(
             output_instructions = PromptedOutputSchema.build_instructions(
                 model_request_parameters.prompted_output_template, model_request_parameters.output_object
             )
-            if instructions is not None:
+            if instructions:
                 instructions = '\n\n'.join([instructions, output_instructions])
             else:
                 instructions = output_instructions
```

pydantic_ai_slim/pydantic_ai/models/anthropic.py (2 additions, 5 deletions)

```diff
@@ -247,17 +247,14 @@ def prepare_request(
         settings = merge_model_settings(self.settings, model_settings)
         if (
             model_request_parameters.output_tools
-            and (
-                model_request_parameters.output_mode is None
-                or (model_request_parameters.output_mode == 'tool' and not model_request_parameters.allow_text_output)
-            )
             and settings
             and (thinking := settings.get('anthropic_thinking'))
             and thinking.get('type') == 'enabled'
         ):
             if model_request_parameters.output_mode is None:
                 model_request_parameters = replace(model_request_parameters, output_mode='prompted')
-            else:
+            elif model_request_parameters.output_mode == 'tool' and not model_request_parameters.allow_text_output:
+                # This would result in `tool_choice=required`, which Anthropic does not support with thinking.
                 raise UserError(
                     'Anthropic does not support thinking and output tools at the same time. Use `output_type=PromptedOutput(...)` instead.'
                 )
```

pydantic_ai_slim/pydantic_ai/models/function.py (4 additions, 0 deletions)

```diff
@@ -135,6 +135,7 @@ async def request(
             allow_text_output=model_request_parameters.allow_text_output,
             output_tools=model_request_parameters.output_tools,
             model_settings=model_settings,
+            instructions=self._get_instructions(messages, model_request_parameters),
         )
 
         assert self.function is not None, 'FunctionModel must receive a `function` to support non-streamed requests'
@@ -168,6 +169,7 @@ async def request_stream(
             allow_text_output=model_request_parameters.allow_text_output,
             output_tools=model_request_parameters.output_tools,
             model_settings=model_settings,
+            instructions=self._get_instructions(messages, model_request_parameters),
         )
 
         assert self.stream_function is not None, (
@@ -216,6 +218,8 @@ class AgentInfo:
     """The tools that can called to produce the final output of the run."""
     model_settings: ModelSettings | None
     """The model settings passed to the run call."""
+    instructions: str | None
+    """The instructions passed to model."""
 
 
 @dataclass
```
