
Commit c63e2bf

Merge branch 'main' into qian/dbos-agent
2 parents 95fa7e2 + 7e8ebec commit c63e2bf

24 files changed: +3874 -1618 lines changed

docs/builtin-tools.md

Lines changed: 27 additions & 28 deletions
@@ -1,10 +1,10 @@
 # Builtin Tools
 
-Builtin tools are native tools provided by LLM providers that can be used to enhance your agent's capabilities. Unlike [common tools](common-tools.md), which are custom implementations that PydanticAI executes, builtin tools are executed directly by the model provider.
+Builtin tools are native tools provided by LLM providers that can be used to enhance your agent's capabilities. Unlike [common tools](common-tools.md), which are custom implementations that Pydantic AI executes, builtin tools are executed directly by the model provider.
 
 ## Overview
 
-PydanticAI supports the following builtin tools:
+Pydantic AI supports the following builtin tools:
 
 - **[`WebSearchTool`][pydantic_ai.builtin_tools.WebSearchTool]**: Allows agents to search the web
 - **[`CodeExecutionTool`][pydantic_ai.builtin_tools.CodeExecutionTool]**: Enables agents to execute code in a secure environment
@@ -13,7 +13,9 @@ PydanticAI supports the following builtin tools:
 These tools are passed to the agent via the `builtin_tools` parameter and are executed by the model provider's infrastructure.
 
 !!! warning "Provider Support"
-    Not all model providers support builtin tools. If you use a builtin tool with an unsupported provider, PydanticAI will raise a [`UserError`][pydantic_ai.exceptions.UserError] when you try to run the agent.
+    Not all model providers support builtin tools. If you use a builtin tool with an unsupported provider, Pydantic AI will raise a [`UserError`][pydantic_ai.exceptions.UserError] when you try to run the agent.
+
+    If a provider supports a built-in tool that is not currently supported by Pydantic AI, please file an issue.
 
 ## Web Search Tool
 
@@ -26,16 +28,13 @@ making it ideal for queries that require up-to-date data.
 |----------|-----------|-------|
 | OpenAI | ✅ | Full feature support |
 | Anthropic | ✅ | Full feature support |
-| Groq | ✅ | Limited parameter support |
-| Google | ✅ | No parameter support |
+| Groq | ✅ | Limited parameter support. To use web search capabilities with Groq, you need to use the [compound models](https://console.groq.com/docs/compound). |
+| Google | ✅ | No parameter support. Google does not support using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time. To use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
 | Bedrock | ❌ | Not supported |
 | Mistral | ❌ | Not supported |
 | Cohere | ❌ | Not supported |
 | HuggingFace | ❌ | Not supported |
 
-!!! note "Groq Support"
-    To use web search capabilities with Groq, you need to use the [compound models](https://console.groq.com/docs/compound).
-
 ### Usage
 
 ```py title="web_search_basic.py"
@@ -97,16 +96,16 @@ in a secure environment, making it perfect for computational tasks, data analysi
 
 ### Provider Support
 
-| Provider | Supported |
-|----------|-----------|
-| OpenAI | ✅ |
-| Anthropic | ✅ |
-| Google | ✅ |
-| Groq | ❌ |
-| Bedrock | ❌ |
-| Mistral | ❌ |
-| Cohere | ❌ |
-| HuggingFace | ❌ |
+| Provider | Supported | Notes |
+|----------|-----------|-------|
+| OpenAI | ✅ | |
+| Anthropic | ✅ | |
+| Google | ✅ | Google does not support using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time. To use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
+| Groq | ❌ | |
+| Bedrock | ❌ | |
+| Mistral | ❌ | |
+| Cohere | ❌ | |
+| HuggingFace | ❌ | |
 
 ### Usage
 
@@ -126,16 +125,16 @@ allowing it to pull up-to-date information from the web.
 
 ### Provider Support
 
-| Provider | Supported |
-|----------|-----------|
-| Google | ✅ |
-| OpenAI | ❌ |
-| Anthropic | ❌ |
-| Groq | ❌ |
-| Bedrock | ❌ |
-| Mistral | ❌ |
-| Cohere | ❌ |
-| HuggingFace | ❌ |
+| Provider | Supported | Notes |
+|----------|-----------|-------|
+| Google | ✅ | Google does not support using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time. To use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
+| OpenAI | ❌ | |
+| Anthropic | ❌ | |
+| Groq | ❌ | |
+| Bedrock | ❌ | |
+| Mistral | ❌ | |
+| Cohere | ❌ | |
+| HuggingFace | ❌ | |
 
 ### Usage
 
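For reference, a minimal sketch of the usage pattern these docs describe: a provider-executed tool attached via the `builtin_tools` parameter. The `pydantic_ai.builtin_tools` module path matches the doc's cross-references; the model string is illustrative and must belong to a provider marked as supported above.

```python
# Minimal sketch (not part of the diff): attach a provider-executed tool.
from pydantic_ai import Agent
from pydantic_ai.builtin_tools import WebSearchTool

# Model string is illustrative; web search is executed by the provider itself.
agent = Agent('anthropic:claude-sonnet-4-0', builtin_tools=[WebSearchTool()])

result = agent.run_sync('What was the biggest AI-related news story this week?')
print(result.output)
```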

pydantic_ai_slim/pydantic_ai/_parts_manager.py

Lines changed: 3 additions & 1 deletion
@@ -154,6 +154,7 @@ def handle_thinking_delta(
         *,
         vendor_part_id: Hashable | None,
         content: str | None = None,
+        id: str | None = None,
         signature: str | None = None,
     ) -> ModelResponseStreamEvent:
         """Handle incoming thinking content, creating or updating a ThinkingPart in the manager as appropriate.
@@ -167,6 +168,7 @@ def handle_thinking_delta(
                 of thinking. If None, a new part will be created unless the latest part is already
                 a ThinkingPart.
             content: The thinking content to append to the appropriate ThinkingPart.
+            id: An optional id for the thinking part.
             signature: An optional signature for the thinking content.
 
         Returns:
@@ -197,7 +199,7 @@ def handle_thinking_delta(
         if content is not None:
             # There is no existing thinking part that should be updated, so create a new one
             new_part_index = len(self._parts)
-            part = ThinkingPart(content=content, signature=signature)
+            part = ThinkingPart(content=content, id=id, signature=signature)
             if vendor_part_id is not None:  # pragma: no branch
                 self._vendor_id_to_part_index[vendor_part_id] = new_part_index
             self._parts.append(part)
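A hedged sketch of how the new `id` parameter reaches a `ThinkingPart`, assuming the manager class in this module is `ModelResponsePartsManager` and that it can be constructed without arguments:

```python
# Illustrative only: two deltas with the same vendor_part_id — the first
# creates a ThinkingPart that now carries the given id, the second appends
# content to that same part.
from pydantic_ai._parts_manager import ModelResponsePartsManager

manager = ModelResponsePartsManager()

manager.handle_thinking_delta(
    vendor_part_id='thinking-0',
    content='Let me work through this',
    id='thinking-0',  # stored on the new ThinkingPart
    signature=None,
)
manager.handle_thinking_delta(
    vendor_part_id='thinking-0',
    content=' step by step.',
)
```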

pydantic_ai_slim/pydantic_ai/models/anthropic.py

Lines changed: 4 additions & 11 deletions
@@ -536,20 +536,15 @@ def _map_tool_definition(f: ToolDefinition) -> BetaToolParam:
     }
 
 
-def _map_usage(message: BetaMessage | BetaRawMessageStreamEvent) -> usage.RequestUsage:
+def _map_usage(message: BetaMessage | BetaRawMessageStartEvent | BetaRawMessageDeltaEvent) -> usage.RequestUsage:
     if isinstance(message, BetaMessage):
         response_usage = message.usage
     elif isinstance(message, BetaRawMessageStartEvent):
         response_usage = message.message.usage
     elif isinstance(message, BetaRawMessageDeltaEvent):
         response_usage = message.usage
     else:
-        # No usage information provided in:
-        # - RawMessageStopEvent
-        # - RawContentBlockStartEvent
-        # - RawContentBlockDeltaEvent
-        # - RawContentBlockStopEvent
-        return usage.RequestUsage()
+        assert_never(message)
 
     # Store all integer-typed usage values in the details, except 'output_tokens' which is represented exactly by
     # `response_tokens`
@@ -586,10 +581,8 @@ async def _get_event_iterator(self) -> AsyncIterator[ModelResponseStreamEvent]:
         current_block: BetaContentBlock | None = None
 
         async for event in self._response:
-            self._usage += _map_usage(event)
-
             if isinstance(event, BetaRawMessageStartEvent):
-                pass
+                self._usage = _map_usage(event)
 
             elif isinstance(event, BetaRawContentBlockStartEvent):
                 current_block = event.content_block
@@ -652,7 +645,7 @@ async def _get_event_iterator(self) -> AsyncIterator[ModelResponseStreamEvent]:
                 pass
 
             elif isinstance(event, BetaRawMessageDeltaEvent):
-                pass
+                self._usage = _map_usage(event)
 
             elif isinstance(event, BetaRawContentBlockStopEvent | BetaRawMessageStopEvent):  # pragma: no branch
                 current_block = None
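A hedged sketch of the streaming-usage pattern this change adopts: only the `message_start` and `message_delta` events carry a usage payload, so the stream assigns the mapped value from those two event types instead of summing across every event. The `anthropic.types.beta` import path and the simplified loop below are assumptions for illustration, not the module's actual `_get_event_iterator` body.

```python
# Simplified illustration of the pattern the diff moves to: keep the usage
# reported by message_start / message_delta events, overwriting rather than
# accumulating, since each such event carries a usage snapshot.
from anthropic.types.beta import BetaRawMessageDeltaEvent, BetaRawMessageStartEvent


async def latest_usage(stream, map_usage):
    usage = None
    async for event in stream:
        if isinstance(event, (BetaRawMessageStartEvent, BetaRawMessageDeltaEvent)):
            usage = map_usage(event)  # assignment, not +=
    return usage
```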

pydantic_ai_slim/pydantic_ai/models/gemini.py

Lines changed: 3 additions & 1 deletion
@@ -211,7 +211,9 @@ async def _make_request(
         generation_config = _settings_to_generation_config(model_settings)
         if model_request_parameters.output_mode == 'native':
             if tools:
-                raise UserError('Gemini does not support structured output and tools at the same time.')
+                raise UserError(
+                    'Gemini does not support `NativeOutput` and tools at the same time. Use `output_type=ToolOutput(...)` instead.'
+                )
 
             generation_config['response_mime_type'] = 'application/json'
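A hedged sketch of the combination the reworded error points users toward: when function tools are present, request structured output as a tool call via `ToolOutput` rather than `NativeOutput`. The top-level `ToolOutput` import and the model string are assumptions.

```python
from pydantic import BaseModel

from pydantic_ai import Agent, ToolOutput


class CityInfo(BaseModel):
    city: str
    country: str


# ToolOutput delivers structured output as a tool call, which Gemini accepts
# alongside user tools; NativeOutput plus tools now raises the error above.
agent = Agent('google-gla:gemini-2.5-flash', output_type=ToolOutput(CityInfo))


@agent.tool_plain
def lookup_timezone(city: str) -> str:
    """Illustrative user tool running next to tool-based structured output."""
    return 'Europe/Madrid'
```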

pydantic_ai_slim/pydantic_ai/models/google.py

Lines changed: 11 additions & 1 deletion
@@ -264,6 +264,14 @@ async def request_stream(
             yield await self._process_streamed_response(response, model_request_parameters)  # type: ignore
 
     def _get_tools(self, model_request_parameters: ModelRequestParameters) -> list[ToolDict] | None:
+        if model_request_parameters.builtin_tools:
+            if model_request_parameters.output_tools:
+                raise UserError(
+                    'Gemini does not support output tools and built-in tools at the same time. Use `output_type=PromptedOutput(...)` instead.'
+                )
+            if model_request_parameters.function_tools:
+                raise UserError('Gemini does not support user tools and built-in tools at the same time.')
+
         tools: list[ToolDict] = [
             ToolDict(function_declarations=[_function_declaration_from_tool(t)])
             for t in model_request_parameters.tool_defs.values()
@@ -334,7 +342,9 @@ async def _build_content_and_config(
         response_schema = None
         if model_request_parameters.output_mode == 'native':
             if tools:
-                raise UserError('Gemini does not support structured output and tools at the same time.')
+                raise UserError(
+                    'Gemini does not support `NativeOutput` and tools at the same time. Use `output_type=ToolOutput(...)` instead.'
+                )
             response_mime_type = 'application/json'
             output_object = model_request_parameters.output_object
             assert output_object is not None
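A hedged sketch of the split the new `_get_tools` guard enforces: built-in tools plus output tools raise `UserError`, while `PromptedOutput` keeps structured output available alongside a built-in tool. The top-level `PromptedOutput` import and the model string are assumptions.

```python
from pydantic import BaseModel

from pydantic_ai import Agent, PromptedOutput
from pydantic_ai.builtin_tools import WebSearchTool


class Headline(BaseModel):
    title: str
    source: str


# With a built-in tool attached, tool-based (output-tool) structured output
# would hit the new UserError, so structured output is requested via the
# prompt instead.
agent = Agent(
    'google-gla:gemini-2.5-flash',
    builtin_tools=[WebSearchTool()],
    output_type=PromptedOutput(Headline),
)
```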
