55 changes: 27 additions & 28 deletions docs/builtin-tools.md
@@ -1,10 +1,10 @@
# Builtin Tools

Builtin tools are native tools provided by LLM providers that can be used to enhance your agent's capabilities. Unlike [common tools](common-tools.md), which are custom implementations that PydanticAI executes, builtin tools are executed directly by the model provider.
Builtin tools are native tools provided by LLM providers that can be used to enhance your agent's capabilities. Unlike [common tools](common-tools.md), which are custom implementations that Pydantic AI executes, builtin tools are executed directly by the model provider.

## Overview

PydanticAI supports the following builtin tools:
Pydantic AI supports the following builtin tools:

- **[`WebSearchTool`][pydantic_ai.builtin_tools.WebSearchTool]**: Allows agents to search the web
- **[`CodeExecutionTool`][pydantic_ai.builtin_tools.CodeExecutionTool]**: Enables agents to execute code in a secure environment
@@ -13,7 +13,9 @@ PydanticAI supports the following builtin tools:
These tools are passed to the agent via the `builtin_tools` parameter and are executed by the model provider's infrastructure.

!!! warning "Provider Support"
Not all model providers support builtin tools. If you use a builtin tool with an unsupported provider, PydanticAI will raise a [`UserError`][pydantic_ai.exceptions.UserError] when you try to run the agent.
Not all model providers support builtin tools. If you use a builtin tool with an unsupported provider, Pydantic AI will raise a [`UserError`][pydantic_ai.exceptions.UserError] when you try to run the agent.

If a provider supports a built-in tool that is not currently supported by Pydantic AI, please file an issue.
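
As a quick orientation, here is a minimal sketch of attaching a builtin tool to an agent. The model name and prompt are illustrative assumptions; pick any provider marked as supported in the tables below.

```py
from pydantic_ai import Agent
from pydantic_ai.builtin_tools import WebSearchTool

# The builtin tool is declared on the agent but executed by the model provider,
# not by Pydantic AI.
agent = Agent('anthropic:claude-3-5-sonnet-latest', builtin_tools=[WebSearchTool()])

result = agent.run_sync('What was the most recent LTS release of Ubuntu?')
print(result.output)
```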

## Web Search Tool

@@ -26,16 +28,13 @@ making it ideal for queries that require up-to-date data.
|----------|-----------|-------|
| OpenAI | ✅ | Full feature support |
| Anthropic | ✅ | Full feature support |
| Groq | ✅ | Limited parameter support |
| Google | ✅ | No parameter support |
| Groq | ✅ | Limited parameter support. To use web search capabilities with Groq, you need to use the [compound models](https://console.groq.com/docs/compound). |
| Google | ✅ | No parameter support. Google does not support using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time. To use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
| Bedrock | ❌ | Not supported |
| Mistral | ❌ | Not supported |
| Cohere | ❌ | Not supported |
| HuggingFace | ❌ | Not supported |

!!! note "Groq Support"
To use web search capabilities with Groq, you need to use the [compound models](https://console.groq.com/docs/compound).

### Usage

```py title="web_search_basic.py"
@@ -97,16 +96,16 @@ in a secure environment, making it perfect for computational tasks, data analysis

### Provider Support

| Provider | Supported |
|----------|-----------|
| OpenAI | ✅ |
| Anthropic | ✅ |
| Google | ✅ |
| Groq | ❌ |
| Bedrock | ❌ |
| Mistral | ❌ |
| Cohere | ❌ |
| HuggingFace | ❌ |
| Provider | Supported | Notes |
|----------|-----------|-------|
| OpenAI | ✅ | |
| Anthropic | ✅ | |
| Google | ✅ | Google does not support using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time. To use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
| Groq | ❌ | |
| Bedrock | ❌ | |
| Mistral | ❌ | |
| Cohere | ❌ | |
| HuggingFace | ❌ | |
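
A minimal sketch of provider-side code execution follows; the model name is an assumption, so check the table above for a combination your provider actually supports.

```py
from pydantic_ai import Agent
from pydantic_ai.builtin_tools import CodeExecutionTool

# The generated code runs in the provider's sandbox, not on the local machine.
agent = Agent('anthropic:claude-sonnet-4-0', builtin_tools=[CodeExecutionTool()])

result = agent.run_sync('Use code to compute the standard deviation of [3, 7, 8, 12, 14].')
print(result.output)
```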

### Usage

@@ -126,16 +125,16 @@ allowing it to pull up-to-date information from the web.

### Provider Support

| Provider | Supported |
|----------|-----------|
| Google | ✅ |
| OpenAI | ❌ |
| Anthropic | ❌ |
| Groq | ❌ |
| Bedrock | ❌ |
| Mistral | ❌ |
| Cohere | ❌ |
| HuggingFace | ❌ |
| Provider | Supported | Notes |
|----------|-----------|-------|
| Google | ✅ | Google does not support using built-in tools and user tools (including [output tools](output.md#tool-output)) at the same time. To use structured output, use [`PromptedOutput`](output.md#prompted-output) instead. |
| OpenAI | ❌ | |
| Anthropic | ❌ | |
| Groq | ❌ | |
| Bedrock | ❌ | |
| Mistral | ❌ | |
| Cohere | ❌ | |
| HuggingFace | ❌ | |
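
Because Gemini cannot combine built-in tools with output tools, an agent that wants both URL context and structured output has to opt into [`PromptedOutput`](output.md#prompted-output). A minimal sketch, mirroring the new test in this change (the `google-gla:` model string is an assumption):

```py
from pydantic import BaseModel

from pydantic_ai import Agent, PromptedOutput
from pydantic_ai.builtin_tools import UrlContextTool


class CityLocation(BaseModel):
    city: str
    country: str


# PromptedOutput relies on prompting rather than an output tool,
# so it can coexist with Gemini's built-in tools.
agent = Agent(
    'google-gla:gemini-2.5-flash',
    output_type=PromptedOutput(CityLocation),
    builtin_tools=[UrlContextTool()],
)

result = agent.run_sync('What is the largest city in Mexico?')
print(result.output)
#> city='Mexico City' country='Mexico'
```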

### Usage

4 changes: 3 additions & 1 deletion pydantic_ai_slim/pydantic_ai/models/gemini.py
@@ -211,7 +211,9 @@ async def _make_request(
        generation_config = _settings_to_generation_config(model_settings)
        if model_request_parameters.output_mode == 'native':
            if tools:
                raise UserError('Gemini does not support structured output and tools at the same time.')
                raise UserError(
                    'Gemini does not support `NativeOutput` and tools at the same time. Use `output_type=ToolOutput(...)` instead.'
                )

            generation_config['response_mime_type'] = 'application/json'

12 changes: 11 additions & 1 deletion pydantic_ai_slim/pydantic_ai/models/google.py
@@ -264,6 +264,14 @@ async def request_stream(
            yield await self._process_streamed_response(response, model_request_parameters)  # type: ignore

    def _get_tools(self, model_request_parameters: ModelRequestParameters) -> list[ToolDict] | None:
        if model_request_parameters.builtin_tools:
            if model_request_parameters.output_tools:
                raise UserError(
                    'Gemini does not support output tools and built-in tools at the same time. Use `output_type=PromptedOutput(...)` instead.'
                )
            if model_request_parameters.function_tools:
                raise UserError('Gemini does not support user tools and built-in tools at the same time.')

        tools: list[ToolDict] = [
            ToolDict(function_declarations=[_function_declaration_from_tool(t)])
            for t in model_request_parameters.tool_defs.values()
@@ -334,7 +342,9 @@ async def _build_content_and_config(
        response_schema = None
        if model_request_parameters.output_mode == 'native':
            if tools:
                raise UserError('Gemini does not support structured output and tools at the same time.')
                raise UserError(
                    'Gemini does not support `NativeOutput` and tools at the same time. Use `output_type=ToolOutput(...)` instead.'
                )
            response_mime_type = 'application/json'
            output_object = model_request_parameters.output_object
            assert output_object is not None
@@ -0,0 +1,73 @@
interactions:
- request:
    headers:
      accept:
      - '*/*'
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '526'
      content-type:
      - application/json
      host:
      - generativelanguage.googleapis.com
    method: POST
    parsed_body:
      contents:
      - parts:
        - text: What is the largest city in Mexico?
        role: user
      generationConfig: {}
      systemInstruction:
        parts:
        - text: |-
            Always respond with a JSON object that's compatible with this schema:

            {"properties": {"city": {"type": "string"}, "country": {"type": "string"}}, "required": ["city", "country"], "title": "CityLocation", "type": "object"}

            Don't include any text or Markdown fencing before or after.
        role: user
      tools:
      - urlContext: {}
    uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
  response:
    headers:
      alt-svc:
      - h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
      content-length:
      - '626'
      content-type:
      - application/json; charset=UTF-8
      server-timing:
      - gfet4t7; dur=780
      transfer-encoding:
      - chunked
      vary:
      - Origin
      - X-Origin
      - Referer
    parsed_body:
      candidates:
      - content:
          parts:
          - text: '{"city": "Mexico City", "country": "Mexico"}'
          role: model
        finishReason: STOP
        groundingMetadata: {}
        index: 0
      modelVersion: gemini-2.5-flash
      responseId: 6Xq3aPnXNtqKqtsP8ZuDyAc
      usageMetadata:
        candidatesTokenCount: 13
        promptTokenCount: 83
        promptTokensDetails:
        - modality: TEXT
          tokenCount: 83
        thoughtsTokenCount: 33
        totalTokenCount: 129
    status:
      code: 200
      message: OK
version: 1
8 changes: 7 additions & 1 deletion tests/models/test_gemini.py
@@ -4,6 +4,7 @@

import datetime
import json
import re
from collections.abc import AsyncIterator, Callable, Sequence
from dataclasses import dataclass
from datetime import timezone
@@ -1868,7 +1869,12 @@ class CityLocation(BaseModel):
    async def get_user_country() -> str:
        return 'Mexico'  # pragma: no cover

    with pytest.raises(UserError, match='Gemini does not support structured output and tools at the same time.'):
    with pytest.raises(
        UserError,
        match=re.escape(
            'Gemini does not support `NativeOutput` and tools at the same time. Use `output_type=ToolOutput(...)` instead.'
        ),
    ):
        await agent.run('What is the largest city in the user country?')


42 changes: 41 additions & 1 deletion tests/models/test_google.py
@@ -2,6 +2,7 @@

import datetime
import os
import re
from typing import Any

import pytest
@@ -1418,7 +1419,12 @@ class CityLocation(BaseModel):
    async def get_user_country() -> str:
        return 'Mexico'  # pragma: no cover

    with pytest.raises(UserError, match='Gemini does not support structured output and tools at the same time.'):
    with pytest.raises(
        UserError,
        match=re.escape(
            'Gemini does not support `NativeOutput` and tools at the same time. Use `output_type=ToolOutput(...)` instead.'
        ),
    ):
        await agent.run('What is the largest city in the user country?')


@@ -1787,3 +1793,37 @@ def test_map_usage():
},
)
)


async def test_google_builtin_tools_with_other_tools(allow_model_requests: None, google_provider: GoogleProvider):
    m = GoogleModel('gemini-2.5-flash', provider=google_provider)

    agent = Agent(m, builtin_tools=[UrlContextTool()])

    @agent.tool_plain
    async def get_user_country() -> str:
        return 'Mexico'  # pragma: no cover

    with pytest.raises(
        UserError,
        match=re.escape('Gemini does not support user tools and built-in tools at the same time.'),
    ):
        await agent.run('What is the largest city in the user country?')

    class CityLocation(BaseModel):
        city: str
        country: str

    agent = Agent(m, output_type=ToolOutput(CityLocation), builtin_tools=[UrlContextTool()])

    with pytest.raises(
        UserError,
        match=re.escape(
            'Gemini does not support output tools and built-in tools at the same time. Use `output_type=PromptedOutput(...)` instead.'
        ),
    ):
        await agent.run('What is the largest city in Mexico?')

    agent = Agent(m, output_type=PromptedOutput(CityLocation), builtin_tools=[UrlContextTool()])
    result = await agent.run('What is the largest city in Mexico?')
    assert result.output == snapshot(CityLocation(city='Mexico City', country='Mexico'))