diff --git a/docs/deferred-tools.md b/docs/deferred-tools.md new file mode 100644 index 0000000000..d3d7917ba2 --- /dev/null +++ b/docs/deferred-tools.md @@ -0,0 +1,324 @@ +# Deferred Tools + +There are a few scenarios where the model should be able to call a tool that should not or cannot be executed during the same agent run inside the same Python process: + +- it may need to be approved by the user first +- it may depend on an upstream service, frontend, or user to provide the result +- the result could take longer to generate than it's reasonable to keep the agent process running + +To support these use cases, Pydantic AI provides the concept of deferred tools, which come in two flavors documented below: + +- tools that [require approval](#human-in-the-loop-tool-approval) +- tools that are [executed externally](#external-tool-execution) + +When the model calls a deferred tool, the agent run will end with a [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object containing information about the deferred tool calls. Once the approvals and/or results are ready, a new agent run can then be started with the original run's [message history](message-history.md) plus a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object holding results for each tool call in `DeferredToolRequests`, which will continue the original run where it left off. + +Note that handling deferred tool calls requires `DeferredToolRequests` to be in the `Agent`'s [`output_type`](output.md#structured-output) so that the possible types of the agent run output are correctly inferred. If your agent can also be used in a context where no deferred tools are available and you don't want to deal with that type everywhere you use the agent, you can instead pass the `output_type` argument when you run the agent using [`agent.run()`][pydantic_ai.agent.AbstractAgent.run], [`agent.run_sync()`][pydantic_ai.agent.AbstractAgent.run_sync], [`agent.run_stream()`][pydantic_ai.agent.AbstractAgent.run_stream], or [`agent.iter()`][pydantic_ai.Agent.iter]. Note that the run-time `output_type` overrides the one specified at construction time (for type inference reasons), so you'll need to include the original output type explicitly. + +## Human-in-the-Loop Tool Approval + +If a tool function always requires approval, you can pass the `requires_approval=True` argument to the [`@agent.tool`][pydantic_ai.Agent.tool] decorator, [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator, [`Tool`][pydantic_ai.tools.Tool] class, [`FunctionToolset.tool`][pydantic_ai.toolsets.FunctionToolset.tool] decorator, or [`FunctionToolset.add_function()`][pydantic_ai.toolsets.FunctionToolset.add_function] method. Inside the function, you can then assume that the tool call has been approved. + +If whether a tool function requires approval depends on the tool call arguments or the agent [run context][pydantic_ai.tools.RunContext] (e.g. [dependencies](dependencies.md) or message history), you can raise the [`ApprovalRequired`][pydantic_ai.exceptions.ApprovalRequired] exception from the tool function. The [`RunContext.tool_call_approved`][pydantic_ai.tools.RunContext.tool_call_approved] property will be `True` if the tool call has already been approved. + +To require approval for calls to tools provided by a [toolset](toolsets.md) (like an [MCP server](mcp/client.md)), see the [`ApprovalRequiredToolset` documentation](toolsets.md#requiring-tool-approval). 
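+
+As noted above, `DeferredToolRequests` needs to be a possible output type for these approval requests to be surfaced. If you'd rather not set it on the agent itself, a minimal sketch of the run-time override described earlier (the model name and prompt are placeholders) looks like this:
+
+```python {test="skip"}
+from pydantic_ai import Agent, DeferredToolRequests
+
+agent = Agent('openai:gpt-5')  # construction-time output type defaults to str
+
+# The run-time output_type replaces the construction-time one entirely,
+# so the original `str` output type has to be restated alongside
+# DeferredToolRequests.
+result = agent.run_sync(
+    'Clear `.env`',
+    output_type=[str, DeferredToolRequests],
+)
+```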
+ +When the model calls a tool that requires approval, the agent run will end with a [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object with an `approvals` list holding [`ToolCallPart`s][pydantic_ai.messages.ToolCallPart] containing the tool name, validated arguments, and a unique tool call ID. + +Once you've gathered the user's approvals or denials, you can build a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object with an `approvals` dictionary that maps each tool call ID to a boolean, a [`ToolApproved`][pydantic_ai.tools.ToolApproved] object (with optional `override_args`), or a [`ToolDenied`][pydantic_ai.tools.ToolDenied] object (with an optional custom `message` to provide to the model). This `DeferredToolResults` object can then be provided to one of the agent run methods as `deferred_tool_results`, alongside the original run's [message history](message-history.md). + +Here's an example that shows how to require approval for all file deletions, and for updates of specific protected files: + +```python {title="tool_requires_approval.py"} +from pydantic_ai import ( + Agent, + ApprovalRequired, + DeferredToolRequests, + DeferredToolResults, + RunContext, + ToolDenied, +) + +agent = Agent('openai:gpt-5', output_type=[str, DeferredToolRequests]) + +PROTECTED_FILES = {'.env'} + + +@agent.tool +def update_file(ctx: RunContext, path: str, content: str) -> str: + if path in PROTECTED_FILES and not ctx.tool_call_approved: + raise ApprovalRequired + return f'File {path!r} updated: {content!r}' + + +@agent.tool_plain(requires_approval=True) +def delete_file(path: str) -> str: + return f'File {path!r} deleted' + + +result = agent.run_sync('Delete `__init__.py`, write `Hello, world!` to `README.md`, and clear `.env`') +messages = result.all_messages() + +assert isinstance(result.output, DeferredToolRequests) +requests = result.output +print(requests) +""" +DeferredToolRequests( + calls=[], + approvals=[ + ToolCallPart( + tool_name='update_file', + args={'path': '.env', 'content': ''}, + tool_call_id='update_file_dotenv', + ), + ToolCallPart( + tool_name='delete_file', + args={'path': '__init__.py'}, + tool_call_id='delete_file', + ), + ], +) +""" + +results = DeferredToolResults() +for call in requests.approvals: + result = False + if call.tool_name == 'update_file': + # Approve all updates + result = True + elif call.tool_name == 'delete_file': + # deny all deletes + result = ToolDenied('Deleting files is not allowed') + + results.approvals[call.tool_call_id] = result + +result = agent.run_sync(message_history=messages, deferred_tool_results=results) +print(result.output) +""" +I successfully updated `README.md` and cleared `.env`, but was not able to delete `__init__.py`. 
+""" +print(result.all_messages()) +""" +[ + ModelRequest( + parts=[ + UserPromptPart( + content='Delete `__init__.py`, write `Hello, world!` to `README.md`, and clear `.env`', + timestamp=datetime.datetime(...), + ) + ] + ), + ModelResponse( + parts=[ + ToolCallPart( + tool_name='delete_file', + args={'path': '__init__.py'}, + tool_call_id='delete_file', + ), + ToolCallPart( + tool_name='update_file', + args={'path': 'README.md', 'content': 'Hello, world!'}, + tool_call_id='update_file_readme', + ), + ToolCallPart( + tool_name='update_file', + args={'path': '.env', 'content': ''}, + tool_call_id='update_file_dotenv', + ), + ], + usage=RequestUsage(input_tokens=63, output_tokens=21), + model_name='gpt-5', + timestamp=datetime.datetime(...), + ), + ModelRequest( + parts=[ + ToolReturnPart( + tool_name='delete_file', + content='Deleting files is not allowed', + tool_call_id='delete_file', + timestamp=datetime.datetime(...), + ), + ToolReturnPart( + tool_name='update_file', + content="File 'README.md' updated: 'Hello, world!'", + tool_call_id='update_file_readme', + timestamp=datetime.datetime(...), + ), + ToolReturnPart( + tool_name='update_file', + content="File '.env' updated: ''", + tool_call_id='update_file_dotenv', + timestamp=datetime.datetime(...), + ), + ] + ), + ModelResponse( + parts=[ + TextPart( + content='I successfully updated `README.md` and cleared `.env`, but was not able to delete `__init__.py`.' + ) + ], + usage=RequestUsage(input_tokens=79, output_tokens=39), + model_name='gpt-5', + timestamp=datetime.datetime(...), + ), +] +""" +``` + +_(This example is complete, it can be run "as is")_ + +## External Tool Execution + +When the result of a tool call cannot be generated inside the same agent run in which it was called, the tool is considered to be external. +Examples of external tools are client-side tools implemented by a web or app frontend, and slow tasks that are passed off to a background worker or external service instead of keeping the agent process running. + +If whether a tool call should be executed externally depends on the tool call arguments, the agent [run context][pydantic_ai.tools.RunContext] (e.g. [dependencies](dependencies.md) or message history), or how long the task is expected to take, you can define a tool function and conditionally raise the [`CallDeferred`][pydantic_ai.exceptions.CallDeferred] exception. Before raising the exception, the tool function would typically schedule some background task and pass along the [`RunContext.tool_call_id`][pydantic_ai.tools.RunContext.tool_call_id] so that the result can be matched to the deferred tool call later. + +If a tool is always executed externally and its definition is provided to your code along with a JSON schema for its arguments, you can use an [`ExternalToolset`](toolsets.md#external-toolset). If the external tools are known up front and you don't have the arguments JSON schema handy, you can also define a tool function with the appropriate signature that does nothing but raise the [`CallDeferred`][pydantic_ai.exceptions.CallDeferred] exception. + +When the model calls an external tool, the agent run will end with a [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object with a `calls` list holding [`ToolCallPart`s][pydantic_ai.messages.ToolCallPart] containing the tool name, validated arguments, and a unique tool call ID. 
+ +Once the tool call results are ready, you can build a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object with a `calls` dictionary that maps each tool call ID to an arbitrary value to be returned to the model, a [`ToolReturn`](tools-advanced.md#advanced-tool-returns) object, or a [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] exception in case the tool call failed and the model should [try again](tools-advanced.md#tool-retries). This `DeferredToolResults` object can then be provided to one of the agent run methods as `deferred_tool_results`, alongside the original run's [message history](message-history.md). + +Here's an example that shows how to move a task that takes a while to complete to the background and return the result to the model once the task is complete: + +```python {title="external_tool.py"} +import asyncio +from dataclasses import dataclass +from typing import Any + +from pydantic_ai import ( + Agent, + CallDeferred, + DeferredToolRequests, + DeferredToolResults, + ModelRetry, + RunContext, +) + + +@dataclass +class TaskResult: + tool_call_id: str + result: Any + + +async def calculate_answer_task(tool_call_id: str, question: str) -> TaskResult: + await asyncio.sleep(1) + return TaskResult(tool_call_id=tool_call_id, result=42) + + +agent = Agent('openai:gpt-5', output_type=[str, DeferredToolRequests]) + +tasks: list[asyncio.Task[TaskResult]] = [] + + +@agent.tool +async def calculate_answer(ctx: RunContext, question: str) -> str: + assert ctx.tool_call_id is not None + + task = asyncio.create_task(calculate_answer_task(ctx.tool_call_id, question)) # (1)! + tasks.append(task) + + raise CallDeferred + + +async def main(): + result = await agent.run('Calculate the answer to the ultimate question of life, the universe, and everything') + messages = result.all_messages() + + assert isinstance(result.output, DeferredToolRequests) + requests = result.output + print(requests) + """ + DeferredToolRequests( + calls=[ + ToolCallPart( + tool_name='calculate_answer', + args={ + 'question': 'the ultimate question of life, the universe, and everything' + }, + tool_call_id='pyd_ai_tool_call_id', + ) + ], + approvals=[], + ) + """ + + done, _ = await asyncio.wait(tasks) # (2)! + task_results = [task.result() for task in done] + task_results_by_tool_call_id = {result.tool_call_id: result.result for result in task_results} + + results = DeferredToolResults() + for call in requests.calls: + try: + result = task_results_by_tool_call_id[call.tool_call_id] + except KeyError: + result = ModelRetry('No result for this tool call was found.') + + results.calls[call.tool_call_id] = result + + result = await agent.run(message_history=messages, deferred_tool_results=results) + print(result.output) + #> The answer to the ultimate question of life, the universe, and everything is 42. 
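+    # The full history now includes the deferred tool's return value (42) and the model's final response: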
+ print(result.all_messages()) + """ + [ + ModelRequest( + parts=[ + UserPromptPart( + content='Calculate the answer to the ultimate question of life, the universe, and everything', + timestamp=datetime.datetime(...), + ) + ] + ), + ModelResponse( + parts=[ + ToolCallPart( + tool_name='calculate_answer', + args={ + 'question': 'the ultimate question of life, the universe, and everything' + }, + tool_call_id='pyd_ai_tool_call_id', + ) + ], + usage=RequestUsage(input_tokens=63, output_tokens=13), + model_name='gpt-5', + timestamp=datetime.datetime(...), + ), + ModelRequest( + parts=[ + ToolReturnPart( + tool_name='calculate_answer', + content=42, + tool_call_id='pyd_ai_tool_call_id', + timestamp=datetime.datetime(...), + ) + ] + ), + ModelResponse( + parts=[ + TextPart( + content='The answer to the ultimate question of life, the universe, and everything is 42.' + ) + ], + usage=RequestUsage(input_tokens=64, output_tokens=28), + model_name='gpt-5', + timestamp=datetime.datetime(...), + ), + ] + """ +``` + +1. In reality, you'd likely use Celery or a similar task queue to run the task in the background. +2. In reality, this would typically happen in a separate process that polls for the task status or is notified when all pending tasks are complete. + +_(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_ + +## See Also + +- [Function Tools](tools.md) - Basic tool concepts and registration +- [Advanced Tool Features](tools-advanced.md) - Custom schemas, dynamic tools, and execution details +- [Toolsets](toolsets.md) - Managing collections of tools, including `ExternalToolset` for external tools +- [Message History](message-history.md) - Understanding how to work with message history for deferred tools diff --git a/docs/output.md b/docs/output.md index 57f0479e22..cddc326870 100644 --- a/docs/output.md +++ b/docs/output.md @@ -272,7 +272,7 @@ In the default Tool Output mode, the output JSON schema of each output type (or If you'd like to change the name of the output tool, pass a custom description to aid the model, or turn on or off strict mode, you can wrap the type(s) in the [`ToolOutput`][pydantic_ai.output.ToolOutput] marker class and provide the appropriate arguments. Note that by default, the description is taken from the docstring specified on a Pydantic model or output function, so specifying it using the marker class is typically not necessary. -To dynamically modify or filter the available output tools during an agent run, you can define an agent-wide `prepare_output_tools` function that will be called ahead of each step of a run. This function should be of type [`ToolsPrepareFunc`][pydantic_ai.tools.ToolsPrepareFunc], which takes the [`RunContext`][pydantic_ai.tools.RunContext] and a list of [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and returns a new list of tool definitions (or `None` to disable all tools for that step). This is analogous to the [`prepare_tools` function](tools.md#prepare-tools) for non-output tools. +To dynamically modify or filter the available output tools during an agent run, you can define an agent-wide `prepare_output_tools` function that will be called ahead of each step of a run. This function should be of type [`ToolsPrepareFunc`][pydantic_ai.tools.ToolsPrepareFunc], which takes the [`RunContext`][pydantic_ai.tools.RunContext] and a list of [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and returns a new list of tool definitions (or `None` to disable all tools for that step). 
This is analogous to the [`prepare_tools` function](tools-advanced.md#prepare-tools) for non-output tools. ```python {title="tool_output.py"} from pydantic import BaseModel diff --git a/docs/third-party-tools.md b/docs/third-party-tools.md new file mode 100644 index 0000000000..8c7c3616ed --- /dev/null +++ b/docs/third-party-tools.md @@ -0,0 +1,109 @@ +# Third-Party Tools + +Pydantic AI supports integration with various third-party tool libraries, allowing you to leverage existing tool ecosystems in your agents. + +## MCP Tools {#mcp-tools} + +See the [MCP Client](./mcp/client.md) documentation for how to use MCP servers with Pydantic AI as [toolsets](toolsets.md). + +## LangChain Tools {#langchain-tools} + +If you'd like to use a tool from LangChain's [community tool library](https://python.langchain.com/docs/integrations/tools/) with Pydantic AI, you can use the [`tool_from_langchain`][pydantic_ai.ext.langchain.tool_from_langchain] convenience method. Note that Pydantic AI will not validate the arguments in this case -- it's up to the model to provide arguments matching the schema specified by the LangChain tool, and up to the LangChain tool to raise an error if the arguments are invalid. + +You will need to install the `langchain-community` package and any others required by the tool in question. + +Here is how you can use the LangChain `DuckDuckGoSearchRun` tool, which requires the `ddgs` package: + +```python {test="skip"} +from langchain_community.tools import DuckDuckGoSearchRun + +from pydantic_ai import Agent +from pydantic_ai.ext.langchain import tool_from_langchain + +search = DuckDuckGoSearchRun() +search_tool = tool_from_langchain(search) + +agent = Agent( + 'google-gla:gemini-2.0-flash', + tools=[search_tool], +) + +result = agent.run_sync('What is the release date of Elden Ring Nightreign?') # (1)! +print(result.output) +#> Elden Ring Nightreign is planned to be released on May 30, 2025. +``` + +1. The release date of this game is the 30th of May 2025, which is after the knowledge cutoff for Gemini 2.0 (August 2024). + +If you'd like to use multiple LangChain tools or a LangChain [toolkit](https://python.langchain.com/docs/concepts/tools/#toolkits), you can use the [`LangChainToolset`][pydantic_ai.ext.langchain.LangChainToolset] [toolset](toolsets.md) which takes a list of LangChain tools: + +```python {test="skip"} +from langchain_community.agent_toolkits import SlackToolkit + +from pydantic_ai import Agent +from pydantic_ai.ext.langchain import LangChainToolset + +toolkit = SlackToolkit() +toolset = LangChainToolset(toolkit.get_tools()) + +agent = Agent('openai:gpt-4o', toolsets=[toolset]) +# ... +``` + +## ACI.dev Tools {#aci-tools} + +If you'd like to use a tool from the [ACI.dev tool library](https://www.aci.dev/tools) with Pydantic AI, you can use the [`tool_from_aci`][pydantic_ai.ext.aci.tool_from_aci] convenience method. Note that Pydantic AI will not validate the arguments in this case -- it's up to the model to provide arguments matching the schema specified by the ACI tool, and up to the ACI tool to raise an error if the arguments are invalid. + +You will need to install the `aci-sdk` package, set your ACI API key in the `ACI_API_KEY` environment variable, and pass your ACI "linked account owner ID" to the function. 
+ +Here is how you can use the ACI.dev `TAVILY__SEARCH` tool: + +```python {test="skip"} +import os + +from pydantic_ai import Agent +from pydantic_ai.ext.aci import tool_from_aci + +tavily_search = tool_from_aci( + 'TAVILY__SEARCH', + linked_account_owner_id=os.getenv('LINKED_ACCOUNT_OWNER_ID'), +) + +agent = Agent( + 'google-gla:gemini-2.0-flash', + tools=[tavily_search], +) + +result = agent.run_sync('What is the release date of Elden Ring Nightreign?') # (1)! +print(result.output) +#> Elden Ring Nightreign is planned to be released on May 30, 2025. +``` + +1. The release date of this game is the 30th of May 2025, which is after the knowledge cutoff for Gemini 2.0 (August 2024). + +If you'd like to use multiple ACI.dev tools, you can use the [`ACIToolset`][pydantic_ai.ext.aci.ACIToolset] [toolset](toolsets.md) which takes a list of ACI tool names as well as the `linked_account_owner_id`: + +```python {test="skip"} +import os + +from pydantic_ai import Agent +from pydantic_ai.ext.aci import ACIToolset + +toolset = ACIToolset( + [ + 'OPEN_WEATHER_MAP__CURRENT_WEATHER', + 'OPEN_WEATHER_MAP__FORECAST', + ], + linked_account_owner_id=os.getenv('LINKED_ACCOUNT_OWNER_ID'), +) + +agent = Agent('openai:gpt-4o', toolsets=[toolset]) +``` + +## See Also + +- [Function Tools](tools.md) - Basic tool concepts and registration +- [Toolsets](toolsets.md) - Managing collections of tools +- [MCP Client](mcp/client.md) - Using MCP servers with Pydantic AI +- [LangChain Toolsets](toolsets.md#langchain-tools) - Using LangChain toolsets +- [ACI.dev Toolsets](toolsets.md#aci-tools) - Using ACI.dev toolsets diff --git a/docs/tools-advanced.md b/docs/tools-advanced.md new file mode 100644 index 0000000000..ba993a37a3 --- /dev/null +++ b/docs/tools-advanced.md @@ -0,0 +1,385 @@ +# Advanced Tool Features + +This page covers advanced features for function tools in Pydantic AI. For basic tool usage, see the [Function Tools](tools.md) documentation. + +## Tool Output {#function-tool-output} + +Tools can return anything that Pydantic can serialize to JSON, as well as audio, video, image or document content depending on the types of [multi-modal input](input.md) the model supports: + +```python {title="function_tool_output.py"} +from datetime import datetime + +from pydantic import BaseModel + +from pydantic_ai import Agent, DocumentUrl, ImageUrl +from pydantic_ai.models.openai import OpenAIResponsesModel + + +class User(BaseModel): + name: str + age: int + + +agent = Agent(model=OpenAIResponsesModel('gpt-4o')) + + +@agent.tool_plain +def get_current_time() -> datetime: + return datetime.now() + + +@agent.tool_plain +def get_user() -> User: + return User(name='John', age=30) + + +@agent.tool_plain +def get_company_logo() -> ImageUrl: + return ImageUrl(url='https://iili.io/3Hs4FMg.png') + + +@agent.tool_plain +def get_document() -> DocumentUrl: + return DocumentUrl(url='https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf') + + +result = agent.run_sync('What time is it?') +print(result.output) +#> The current time is 10:45 PM on April 17, 2025. + +result = agent.run_sync('What is the user name?') +print(result.output) +#> The user's name is John. + +result = agent.run_sync('What is the company name in the logo?') +print(result.output) +#> The company name in the logo is "Pydantic." + +result = agent.run_sync('What is the main content of the document?') +print(result.output) +#> The document contains just the text "Dummy PDF file." 
+``` + +_(This example is complete, it can be run "as is")_ + +Some models (e.g. Gemini) natively support semi-structured return values, while some expect text (OpenAI) but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON. + +### Advanced Tool Returns + +For scenarios where you need more control over both the tool's return value and the content sent to the model, you can use [`ToolReturn`][pydantic_ai.messages.ToolReturn]. This is particularly useful when you want to: + +- Provide rich multi-modal content (images, documents, etc.) to the model as context +- Separate the programmatic return value from the model's context +- Include additional metadata that shouldn't be sent to the LLM + +Here's an example of a computer automation tool that captures screenshots and provides visual feedback: + +```python {title="advanced_tool_return.py" test="skip" lint="skip"} +import time +from pydantic_ai import Agent +from pydantic_ai.messages import ToolReturn, BinaryContent + +agent = Agent('openai:gpt-4o') + +@agent.tool_plain +def click_and_capture(x: int, y: int) -> ToolReturn: + """Click at coordinates and show before/after screenshots.""" + # Take screenshot before action + before_screenshot = capture_screen() + + # Perform click operation + perform_click(x, y) + time.sleep(0.5) # Wait for UI to update + + # Take screenshot after action + after_screenshot = capture_screen() + + return ToolReturn( + return_value=f"Successfully clicked at ({x}, {y})", + content=[ + f"Clicked at coordinates ({x}, {y}). Here's the comparison:", + "Before:", + BinaryContent(data=before_screenshot, media_type="image/png"), + "After:", + BinaryContent(data=after_screenshot, media_type="image/png"), + "Please analyze the changes and suggest next steps." + ], + metadata={ + "coordinates": {"x": x, "y": y}, + "action_type": "click_and_capture", + "timestamp": time.time() + } + ) + +# The model receives the rich visual content for analysis +# while your application can access the structured return_value and metadata +result = agent.run_sync("Click on the submit button and tell me what happened") +print(result.output) +# The model can analyze the screenshots and provide detailed feedback +``` + +- **`return_value`**: The actual return value used in the tool response. This is what gets serialized and sent back to the model as the tool's result. +- **`content`**: A sequence of content (text, images, documents, etc.) that provides additional context to the model. This appears as a separate user message. +- **`metadata`**: Optional metadata that your application can access but is not sent to the LLM. Useful for logging, debugging, or additional processing. Some other AI frameworks call this feature "artifacts". + +This separation allows you to provide rich context to the model while maintaining clean, structured return values for your application logic. + +## Custom Tool Schema + +If you have a function that lacks appropriate documentation (i.e. poorly named, no type information, poor docstring, use of \*args or \*\*kwargs and suchlike) then you can still turn it into a tool that can be effectively used by the agent with the [`Tool.from_schema`][pydantic_ai.Tool.from_schema] function. 
With this you provide the name, description, JSON schema, and whether the function takes a `RunContext` for the function directly: + +```python +from pydantic_ai import Agent, Tool +from pydantic_ai.models.test import TestModel + + +def foobar(**kwargs) -> str: + return kwargs['a'] + kwargs['b'] + +tool = Tool.from_schema( + function=foobar, + name='sum', + description='Sum two numbers.', + json_schema={ + 'additionalProperties': False, + 'properties': { + 'a': {'description': 'the first number', 'type': 'integer'}, + 'b': {'description': 'the second number', 'type': 'integer'}, + }, + 'required': ['a', 'b'], + 'type': 'object', + }, + takes_ctx=False, +) + +test_model = TestModel() +agent = Agent(test_model, tools=[tool]) + +result = agent.run_sync('testing...') +print(result.output) +#> {"sum":0} +``` + +Please note that validation of the tool arguments will not be performed, and this will pass all arguments as keyword arguments. + +## Dynamic Tools {#tool-prepare} + +Tools can optionally be defined with another function: `prepare`, which is called at each step of a run to +customize the definition of the tool passed to the model, or omit the tool completely from that step. + +A `prepare` method can be registered via the `prepare` kwarg to any of the tool registration mechanisms: + +- [`@agent.tool`][pydantic_ai.Agent.tool] decorator +- [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator +- [`Tool`][pydantic_ai.tools.Tool] dataclass + +The `prepare` method, should be of type [`ToolPrepareFunc`][pydantic_ai.tools.ToolPrepareFunc], a function which takes [`RunContext`][pydantic_ai.tools.RunContext] and a pre-built [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and should either return that `ToolDefinition` with or without modifying it, return a new `ToolDefinition`, or return `None` to indicate this tools should not be registered for that step. + +Here's a simple `prepare` method that only includes the tool if the value of the dependency is `42`. + +As with the previous example, we use [`TestModel`][pydantic_ai.models.test.TestModel] to demonstrate the behavior without calling a real model. + +```python {title="tool_only_if_42.py"} + +from pydantic_ai import Agent, RunContext, ToolDefinition + +agent = Agent('test') + + +async def only_if_42( + ctx: RunContext[int], tool_def: ToolDefinition +) -> ToolDefinition | None: + if ctx.deps == 42: + return tool_def + + +@agent.tool(prepare=only_if_42) +def hitchhiker(ctx: RunContext[int], answer: str) -> str: + return f'{ctx.deps} {answer}' + + +result = agent.run_sync('testing...', deps=41) +print(result.output) +#> success (no tool calls) +result = agent.run_sync('testing...', deps=42) +print(result.output) +#> {"hitchhiker":"42 a"} +``` + +_(This example is complete, it can be run "as is")_ + +Here's a more complex example where we change the description of the `name` parameter to based on the value of `deps` + +For the sake of variation, we create this tool using the [`Tool`][pydantic_ai.tools.Tool] dataclass. + +```python {title="customize_name.py"} +from __future__ import annotations + +from typing import Literal + +from pydantic_ai import Agent, RunContext, Tool, ToolDefinition +from pydantic_ai.models.test import TestModel + + +def greet(name: str) -> str: + return f'hello {name}' + + +async def prepare_greet( + ctx: RunContext[Literal['human', 'machine']], tool_def: ToolDefinition +) -> ToolDefinition | None: + d = f'Name of the {ctx.deps} to greet.' 
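+    # Overwrite the description of the `name` parameter in the tool's JSON schema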
+ tool_def.parameters_json_schema['properties']['name']['description'] = d + return tool_def + + +greet_tool = Tool(greet, prepare=prepare_greet) +test_model = TestModel() +agent = Agent(test_model, tools=[greet_tool], deps_type=Literal['human', 'machine']) + +result = agent.run_sync('testing...', deps='human') +print(result.output) +#> {"greet":"hello a"} +print(test_model.last_model_request_parameters.function_tools) +""" +[ + ToolDefinition( + name='greet', + parameters_json_schema={ + 'additionalProperties': False, + 'properties': { + 'name': {'type': 'string', 'description': 'Name of the human to greet.'} + }, + 'required': ['name'], + 'type': 'object', + }, + ) +] +""" +``` + +_(This example is complete, it can be run "as is")_ + +### Agent-wide Dynamic Tools {#prepare-tools} + +In addition to per-tool `prepare` methods, you can also define an agent-wide `prepare_tools` function. This function is called at each step of a run and allows you to filter or modify the list of all tool definitions available to the agent for that step. This is especially useful if you want to enable or disable multiple tools at once, or apply global logic based on the current context. + +The `prepare_tools` function should be of type [`ToolsPrepareFunc`][pydantic_ai.tools.ToolsPrepareFunc], which takes the [`RunContext`][pydantic_ai.tools.RunContext] and a list of [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and returns a new list of tool definitions (or `None` to disable all tools for that step). + +!!! note + The list of tool definitions passed to `prepare_tools` includes both regular function tools and tools from any [toolsets](toolsets.md) registered on the agent, but not [output tools](output.md#tool-output). +To modify output tools, you can set a `prepare_output_tools` function instead. + +Here's an example that makes all tools strict if the model is an OpenAI model: + +```python {title="agent_prepare_tools_customize.py" noqa="I001"} +from dataclasses import replace + +from pydantic_ai import Agent, RunContext, ToolDefinition +from pydantic_ai.models.test import TestModel + + +async def turn_on_strict_if_openai( + ctx: RunContext[None], tool_defs: list[ToolDefinition] +) -> list[ToolDefinition] | None: + if ctx.model.system == 'openai': + return [replace(tool_def, strict=True) for tool_def in tool_defs] + return tool_defs + + +test_model = TestModel() +agent = Agent(test_model, prepare_tools=turn_on_strict_if_openai) + + +@agent.tool_plain +def echo(message: str) -> str: + return message + + +agent.run_sync('testing...') +assert test_model.last_model_request_parameters.function_tools[0].strict is None + +# Set the system attribute of the test_model to 'openai' +test_model._system = 'openai' + +agent.run_sync('testing with openai...') +assert test_model.last_model_request_parameters.function_tools[0].strict +``` + +_(This example is complete, it can be run "as is")_ + +Here's another example that conditionally filters out the tools by name if the dependency (`ctx.deps`) is `True`: + +```python {title="agent_prepare_tools_filter_out.py" noqa="I001"} + +from pydantic_ai import Agent, RunContext, Tool, ToolDefinition + + +def launch_potato(target: str) -> str: + return f'Potato launched at {target}!' 
+ + +async def filter_out_tools_by_name( + ctx: RunContext[bool], tool_defs: list[ToolDefinition] +) -> list[ToolDefinition] | None: + if ctx.deps: + return [tool_def for tool_def in tool_defs if tool_def.name != 'launch_potato'] + return tool_defs + + +agent = Agent( + 'test', + tools=[Tool(launch_potato)], + prepare_tools=filter_out_tools_by_name, + deps_type=bool, +) + +result = agent.run_sync('testing...', deps=False) +print(result.output) +#> {"launch_potato":"Potato launched at a!"} +result = agent.run_sync('testing...', deps=True) +print(result.output) +#> success (no tool calls) +``` + +_(This example is complete, it can be run "as is")_ + +You can use `prepare_tools` to: + +- Dynamically enable or disable tools based on the current model, dependencies, or other context +- Modify tool definitions globally (e.g., set all tools to strict mode, change descriptions, etc.) + +If both per-tool `prepare` and agent-wide `prepare_tools` are used, the per-tool `prepare` is applied first to each tool, and then `prepare_tools` is called with the resulting list of tool definitions. + +## Tool Execution and Retries {#tool-retries} + +When a tool is executed, its arguments (provided by the LLM) are first validated against the function's signature using Pydantic. If validation fails (e.g., due to incorrect types or missing required arguments), a `ValidationError` is raised, and the framework automatically generates a [`RetryPromptPart`][pydantic_ai.messages.RetryPromptPart] containing the validation details. This prompt is sent back to the LLM, informing it of the error and allowing it to correct the parameters and retry the tool call. + +Beyond automatic validation errors, the tool's own internal logic can also explicitly request a retry by raising the [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] exception. This is useful for situations where the parameters were technically valid, but an issue occurred during execution (like a transient network error, or the tool determining the initial attempt needs modification). + +```python +from pydantic_ai import ModelRetry + + +def my_flaky_tool(query: str) -> str: + if query == 'bad': + # Tell the LLM the query was bad and it should try again + raise ModelRetry("The query 'bad' is not allowed. Please provide a different query.") + # ... process query ... + return 'Success!' +``` + +Raising `ModelRetry` also generates a `RetryPromptPart` containing the exception message, which is sent back to the LLM to guide its next attempt. Both `ValidationError` and `ModelRetry` respect the `retries` setting configured on the `Tool` or `Agent`. + +### Parallel tool calls & concurrency + +When a model returns multiple tool calls in one response, Pydantic AI schedules them concurrently using `asyncio.create_task`. + +Async functions are run on the event loop, while sync functions are offloaded to threads. To get the best performance, _always_ use an async function _unless_ you're doing blocking I/O (and there's no way to use a non-blocking library instead) or CPU-bound work (like `numpy` or `scikit-learn` operations), so that simple functions are not offloaded to threads unnecessarily. 
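+
+As a rough sketch of that guideline (the tool bodies and the `httpx` call are placeholders), the async tool below runs directly on the event loop, while the sync one wraps a blocking call and is therefore offloaded to a thread:
+
+```python {test="skip"}
+import time
+
+import httpx
+
+from pydantic_ai import Agent
+
+agent = Agent('openai:gpt-4o')
+
+
+@agent.tool_plain
+async def fetch_page(url: str) -> str:
+    """Non-blocking I/O: keep the function async so it runs on the event loop."""
+    async with httpx.AsyncClient() as client:
+        response = await client.get(url)
+        return response.text
+
+
+@agent.tool_plain
+def legacy_lookup(key: str) -> str:
+    """Blocking call with no async alternative: left sync so it is run in a thread."""
+    time.sleep(1)  # stands in for a blocking client
+    return f'value for {key!r}'
+```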
+ +## See Also + +- [Function Tools](tools.md) - Basic tool concepts and registration +- [Toolsets](toolsets.md) - Managing collections of tools +- [Deferred Tools](deferred-tools.md) - Tools requiring approval or external execution +- [Third-Party Tools](third-party-tools.md) - Integrations with external tool libraries diff --git a/docs/tools.md b/docs/tools.md index 74b97d3506..9a027b1e7f 100644 --- a/docs/tools.md +++ b/docs/tools.md @@ -12,7 +12,7 @@ There are a number of ways to register tools with an agent: - via the [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator — for tools that do not need access to the agent [context][pydantic_ai.tools.RunContext] - via the [`tools`][pydantic_ai.Agent.__init__] keyword argument to `Agent` which can take either plain functions, or instances of [`Tool`][pydantic_ai.tools.Tool] -For more advanced use cases, the [toolsets](toolsets.md) feature lets you manage collections of tools (built by you or provided by an [MCP server](mcp/client.md) or other [third party](#third-party-tools)) and register them with an agent in one go via the [`toolsets`][pydantic_ai.Agent.__init__] keyword argument to `Agent`. Internally, all `tools` and `toolsets` are gathered into a single [combined toolset](toolsets.md#combining-toolsets) that's made available to the model. +For more advanced use cases, the [toolsets](toolsets.md) feature lets you manage collections of tools (built by you or provided by an [MCP server](mcp/client.md) or other [third party](third-party-tools.md#third-party-tools)) and register them with an agent in one go via the [`toolsets`][pydantic_ai.Agent.__init__] keyword argument to `Agent`. Internally, all `tools` and `toolsets` are gathered into a single [combined toolset](toolsets.md#combining-toolsets) that's made available to the model. !!! info "Function tools vs. RAG" Function tools are basically the "R" of RAG (Retrieval-Augmented Generation) — they augment what the model can do by letting it request extra information. @@ -231,131 +231,13 @@ print(dice_result['b'].output) ``` 1. The simplest way to register tools via the `Agent` constructor is to pass a list of functions, the function signature is inspected to determine if the tool takes [`RunContext`][pydantic_ai.tools.RunContext]. -2. `agent_a` and `agent_b` are identical — but we can use [`Tool`][pydantic_ai.tools.Tool] to reuse tool definitions and give more fine-grained control over how tools are defined, e.g. setting their name or description, or using a custom [`prepare`](#tool-prepare) method. +2. `agent_a` and `agent_b` are identical — but we can use [`Tool`][pydantic_ai.tools.Tool] to reuse tool definitions and give more fine-grained control over how tools are defined, e.g. setting their name or description, or using a custom [`prepare`](tools-advanced.md#tool-prepare) method. 
_(This example is complete, it can be run "as is")_ ## Tool Output {#function-tool-output} -Tools can return anything that Pydantic can serialize to JSON, as well as audio, video, image or document content depending on the types of [multi-modal input](input.md) the model supports: - -```python {title="function_tool_output.py"} -from datetime import datetime - -from pydantic import BaseModel - -from pydantic_ai import Agent, DocumentUrl, ImageUrl -from pydantic_ai.models.openai import OpenAIResponsesModel - - -class User(BaseModel): - name: str - age: int - - -agent = Agent(model=OpenAIResponsesModel('gpt-4o')) - - -@agent.tool_plain -def get_current_time() -> datetime: - return datetime.now() - - -@agent.tool_plain -def get_user() -> User: - return User(name='John', age=30) - - -@agent.tool_plain -def get_company_logo() -> ImageUrl: - return ImageUrl(url='https://iili.io/3Hs4FMg.png') - - -@agent.tool_plain -def get_document() -> DocumentUrl: - return DocumentUrl(url='https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf') - - -result = agent.run_sync('What time is it?') -print(result.output) -#> The current time is 10:45 PM on April 17, 2025. - -result = agent.run_sync('What is the user name?') -print(result.output) -#> The user's name is John. - -result = agent.run_sync('What is the company name in the logo?') -print(result.output) -#> The company name in the logo is "Pydantic." - -result = agent.run_sync('What is the main content of the document?') -print(result.output) -#> The document contains just the text "Dummy PDF file." -``` - -_(This example is complete, it can be run "as is")_ - -Some models (e.g. Gemini) natively support semi-structured return values, while some expect text (OpenAI) but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON. - -### Advanced Tool Returns - -For scenarios where you need more control over both the tool's return value and the content sent to the model, you can use [`ToolReturn`][pydantic_ai.messages.ToolReturn]. This is particularly useful when you want to: - -- Provide rich multi-modal content (images, documents, etc.) to the model as context -- Separate the programmatic return value from the model's context -- Include additional metadata that shouldn't be sent to the LLM - -Here's an example of a computer automation tool that captures screenshots and provides visual feedback: - -```python {title="advanced_tool_return.py" test="skip" lint="skip"} -import time -from pydantic_ai import Agent -from pydantic_ai.messages import ToolReturn, BinaryContent - -agent = Agent('openai:gpt-4o') - -@agent.tool_plain -def click_and_capture(x: int, y: int) -> ToolReturn: - """Click at coordinates and show before/after screenshots.""" - # Take screenshot before action - before_screenshot = capture_screen() - - # Perform click operation - perform_click(x, y) - time.sleep(0.5) # Wait for UI to update - - # Take screenshot after action - after_screenshot = capture_screen() - - return ToolReturn( - return_value=f"Successfully clicked at ({x}, {y})", - content=[ - f"Clicked at coordinates ({x}, {y}). Here's the comparison:", - "Before:", - BinaryContent(data=before_screenshot, media_type="image/png"), - "After:", - BinaryContent(data=after_screenshot, media_type="image/png"), - "Please analyze the changes and suggest next steps." 
- ], - metadata={ - "coordinates": {"x": x, "y": y}, - "action_type": "click_and_capture", - "timestamp": time.time() - } - ) - -# The model receives the rich visual content for analysis -# while your application can access the structured return_value and metadata -result = agent.run_sync("Click on the submit button and tell me what happened") -print(result.output) -# The model can analyze the screenshots and provide detailed feedback -``` - -- **`return_value`**: The actual return value used in the tool response. This is what gets serialized and sent back to the model as the tool's result. -- **`content`**: A sequence of content (text, images, documents, etc.) that provides additional context to the model. This appears as a separate user message. -- **`metadata`**: Optional metadata that your application can access but is not sent to the LLM. Useful for logging, debugging, or additional processing. Some other AI frameworks call this feature "artifacts". - -This separation allows you to provide rich context to the model while maintaining clean, structured return values for your application logic. +Tools can return anything that Pydantic can serialize to JSON. For advanced output options including multi-modal content and metadata, see [Advanced Tool Features](tools-advanced.md#function-tool-output). ## Tool Schema {#function-tools-and-schema} @@ -469,673 +351,14 @@ print(test_model.last_model_request_parameters.function_tools) _(This example is complete, it can be run "as is")_ -### Custom Tool Schema - -If you have a function that lacks appropriate documentation (i.e. poorly named, no type information, poor docstring, use of \*args or \*\*kwargs and suchlike) then you can still turn it into a tool that can be effectively used by the agent with the [`Tool.from_schema`][pydantic_ai.Tool.from_schema] function. With this you provide the name, description, JSON schema, and whether the function takes a `RunContext` for the function directly: - -```python -from pydantic_ai import Agent, Tool -from pydantic_ai.models.test import TestModel - - -def foobar(**kwargs) -> str: - return kwargs['a'] + kwargs['b'] - -tool = Tool.from_schema( - function=foobar, - name='sum', - description='Sum two numbers.', - json_schema={ - 'additionalProperties': False, - 'properties': { - 'a': {'description': 'the first number', 'type': 'integer'}, - 'b': {'description': 'the second number', 'type': 'integer'}, - }, - 'required': ['a', 'b'], - 'type': 'object', - }, - takes_ctx=False, -) - -test_model = TestModel() -agent = Agent(test_model, tools=[tool]) - -result = agent.run_sync('testing...') -print(result.output) -#> {"sum":0} -``` - -Please note that validation of the tool arguments will not be performed, and this will pass all arguments as keyword arguments. - -## Dynamic Tools {#tool-prepare} - -Tools can optionally be defined with another function: `prepare`, which is called at each step of a run to -customize the definition of the tool passed to the model, or omit the tool completely from that step. 
- -A `prepare` method can be registered via the `prepare` kwarg to any of the tool registration mechanisms: - -- [`@agent.tool`][pydantic_ai.Agent.tool] decorator -- [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator -- [`Tool`][pydantic_ai.tools.Tool] dataclass - -The `prepare` method, should be of type [`ToolPrepareFunc`][pydantic_ai.tools.ToolPrepareFunc], a function which takes [`RunContext`][pydantic_ai.tools.RunContext] and a pre-built [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and should either return that `ToolDefinition` with or without modifying it, return a new `ToolDefinition`, or return `None` to indicate this tools should not be registered for that step. - -Here's a simple `prepare` method that only includes the tool if the value of the dependency is `42`. - -As with the previous example, we use [`TestModel`][pydantic_ai.models.test.TestModel] to demonstrate the behavior without calling a real model. - -```python {title="tool_only_if_42.py"} - -from pydantic_ai import Agent, RunContext, ToolDefinition - -agent = Agent('test') - - -async def only_if_42( - ctx: RunContext[int], tool_def: ToolDefinition -) -> ToolDefinition | None: - if ctx.deps == 42: - return tool_def - - -@agent.tool(prepare=only_if_42) -def hitchhiker(ctx: RunContext[int], answer: str) -> str: - return f'{ctx.deps} {answer}' - - -result = agent.run_sync('testing...', deps=41) -print(result.output) -#> success (no tool calls) -result = agent.run_sync('testing...', deps=42) -print(result.output) -#> {"hitchhiker":"42 a"} -``` - -_(This example is complete, it can be run "as is")_ - -Here's a more complex example where we change the description of the `name` parameter to based on the value of `deps` - -For the sake of variation, we create this tool using the [`Tool`][pydantic_ai.tools.Tool] dataclass. - -```python {title="customize_name.py"} -from __future__ import annotations - -from typing import Literal - -from pydantic_ai import Agent, RunContext, Tool, ToolDefinition -from pydantic_ai.models.test import TestModel - - -def greet(name: str) -> str: - return f'hello {name}' - - -async def prepare_greet( - ctx: RunContext[Literal['human', 'machine']], tool_def: ToolDefinition -) -> ToolDefinition | None: - d = f'Name of the {ctx.deps} to greet.' - tool_def.parameters_json_schema['properties']['name']['description'] = d - return tool_def - - -greet_tool = Tool(greet, prepare=prepare_greet) -test_model = TestModel() -agent = Agent(test_model, tools=[greet_tool], deps_type=Literal['human', 'machine']) - -result = agent.run_sync('testing...', deps='human') -print(result.output) -#> {"greet":"hello a"} -print(test_model.last_model_request_parameters.function_tools) -""" -[ - ToolDefinition( - name='greet', - parameters_json_schema={ - 'additionalProperties': False, - 'properties': { - 'name': {'type': 'string', 'description': 'Name of the human to greet.'} - }, - 'required': ['name'], - 'type': 'object', - }, - ) -] -""" -``` - -_(This example is complete, it can be run "as is")_ - -### Agent-wide Dynamic Tools {#prepare-tools} - -In addition to per-tool `prepare` methods, you can also define an agent-wide `prepare_tools` function. This function is called at each step of a run and allows you to filter or modify the list of all tool definitions available to the agent for that step. This is especially useful if you want to enable or disable multiple tools at once, or apply global logic based on the current context. 
- -The `prepare_tools` function should be of type [`ToolsPrepareFunc`][pydantic_ai.tools.ToolsPrepareFunc], which takes the [`RunContext`][pydantic_ai.tools.RunContext] and a list of [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and returns a new list of tool definitions (or `None` to disable all tools for that step). - -!!! note - The list of tool definitions passed to `prepare_tools` includes both regular function tools and tools from any [toolsets](toolsets.md) registered on the agent, but not [output tools](output.md#tool-output). -To modify output tools, you can set a `prepare_output_tools` function instead. - -Here's an example that makes all tools strict if the model is an OpenAI model: - -```python {title="agent_prepare_tools_customize.py" noqa="I001"} -from dataclasses import replace - -from pydantic_ai import Agent, RunContext, ToolDefinition -from pydantic_ai.models.test import TestModel - - -async def turn_on_strict_if_openai( - ctx: RunContext[None], tool_defs: list[ToolDefinition] -) -> list[ToolDefinition] | None: - if ctx.model.system == 'openai': - return [replace(tool_def, strict=True) for tool_def in tool_defs] - return tool_defs - - -test_model = TestModel() -agent = Agent(test_model, prepare_tools=turn_on_strict_if_openai) - - -@agent.tool_plain -def echo(message: str) -> str: - return message - - -agent.run_sync('testing...') -assert test_model.last_model_request_parameters.function_tools[0].strict is None - -# Set the system attribute of the test_model to 'openai' -test_model._system = 'openai' - -agent.run_sync('testing with openai...') -assert test_model.last_model_request_parameters.function_tools[0].strict -``` - -_(This example is complete, it can be run "as is")_ - -Here's another example that conditionally filters out the tools by name if the dependency (`ctx.deps`) is `True`: - -```python {title="agent_prepare_tools_filter_out.py" noqa="I001"} - -from pydantic_ai import Agent, RunContext, Tool, ToolDefinition - - -def launch_potato(target: str) -> str: - return f'Potato launched at {target}!' - - -async def filter_out_tools_by_name( - ctx: RunContext[bool], tool_defs: list[ToolDefinition] -) -> list[ToolDefinition] | None: - if ctx.deps: - return [tool_def for tool_def in tool_defs if tool_def.name != 'launch_potato'] - return tool_defs - - -agent = Agent( - 'test', - tools=[Tool(launch_potato)], - prepare_tools=filter_out_tools_by_name, - deps_type=bool, -) - -result = agent.run_sync('testing...', deps=False) -print(result.output) -#> {"launch_potato":"Potato launched at a!"} -result = agent.run_sync('testing...', deps=True) -print(result.output) -#> success (no tool calls) -``` - -_(This example is complete, it can be run "as is")_ - -You can use `prepare_tools` to: - -- Dynamically enable or disable tools based on the current model, dependencies, or other context -- Modify tool definitions globally (e.g., set all tools to strict mode, change descriptions, etc.) - -If both per-tool `prepare` and agent-wide `prepare_tools` are used, the per-tool `prepare` is applied first to each tool, and then `prepare_tools` is called with the resulting list of tool definitions. 
- -## Deferred Tools - -There are a few scenarios where the model should be able to call a tool that should not or cannot be executed during the same agent run inside the same Python process: - -- it may need to be approved by the user first -- it may depend on an upstream service, frontend, or user to provide the result -- the result could take longer to generate than it's reasonable to keep the agent process running - -To support these use cases, Pydantic AI provides the concept of deferred tools, which come in two flavors documented below: - -- tools that [require approval](#human-in-the-loop-tool-approval) -- tools that are [executed externally](#external-tool-execution) - -When the model calls a deferred tool, the agent run will end with a [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object containing information about the deferred tool calls. Once the approvals and/or results are ready, a new agent run can then be started with the original run's [message history](message-history.md) plus a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object holding results for each tool call in `DeferredToolRequests`, which will continue the original run where it left off. - -Note that handling deferred tool calls requires `DeferredToolRequests` to be in the `Agent`'s [`output_type`](output.md#structured-output) so that the possible types of the agent run output are correctly inferred. If your agent can also be used in a context where no deferred tools are available and you don't want to deal with that type everywhere you use the agent, you can instead pass the `output_type` argument when you run the agent using [`agent.run()`][pydantic_ai.agent.AbstractAgent.run], [`agent.run_sync()`][pydantic_ai.agent.AbstractAgent.run_sync], [`agent.run_stream()`][pydantic_ai.agent.AbstractAgent.run_stream], or [`agent.iter()`][pydantic_ai.Agent.iter]. Note that the run-time `output_type` overrides the one specified at construction time (for type inference reasons), so you'll need to include the original output type explicitly. - -### Human-in-the-Loop Tool Approval - -If a tool function always requires approval, you can pass the `requires_approval=True` argument to the [`@agent.tool`][pydantic_ai.Agent.tool] decorator, [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] decorator, [`Tool`][pydantic_ai.tools.Tool] class, [`FunctionToolset.tool`][pydantic_ai.toolsets.FunctionToolset.tool] decorator, or [`FunctionToolset.add_function()`][pydantic_ai.toolsets.FunctionToolset.add_function] method. Inside the function, you can then assume that the tool call has been approved. - -If whether a tool function requires approval depends on the tool call arguments or the agent [run context][pydantic_ai.tools.RunContext] (e.g. [dependencies](dependencies.md) or message history), you can raise the [`ApprovalRequired`][pydantic_ai.exceptions.ApprovalRequired] exception from the tool function. The [`RunContext.tool_call_approved`][pydantic_ai.tools.RunContext.tool_call_approved] property will be `True` if the tool call has already been approved. - -To require approval for calls to tools provided by a [toolset](toolsets.md) (like an [MCP server](mcp/client.md)), see the [`ApprovalRequiredToolset` documentation](toolsets.md#requiring-tool-approval). 
- -When the model calls a tool that requires approval, the agent run will end with a [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object with an `approvals` list holding [`ToolCallPart`s][pydantic_ai.messages.ToolCallPart] containing the tool name, validated arguments, and a unique tool call ID. - -Once you've gathered the user's approvals or denials, you can build a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object with an `approvals` dictionary that maps each tool call ID to a boolean, a [`ToolApproved`][pydantic_ai.tools.ToolApproved] object (with optional `override_args`), or a [`ToolDenied`][pydantic_ai.tools.ToolDenied] object (with an optional custom `message` to provide to the model). This `DeferredToolResults` object can then be provided to one of the agent run methods as `deferred_tool_results`, alongside the original run's [message history](message-history.md). - -Here's an example that shows how to require approval for all file deletions, and for updates of specific protected files: - -```python {title="tool_requires_approval.py"} -from pydantic_ai import ( - Agent, - ApprovalRequired, - DeferredToolRequests, - DeferredToolResults, - RunContext, - ToolDenied, -) - -agent = Agent('openai:gpt-5', output_type=[str, DeferredToolRequests]) - -PROTECTED_FILES = {'.env'} - - -@agent.tool -def update_file(ctx: RunContext, path: str, content: str) -> str: - if path in PROTECTED_FILES and not ctx.tool_call_approved: - raise ApprovalRequired - return f'File {path!r} updated: {content!r}' - - -@agent.tool_plain(requires_approval=True) -def delete_file(path: str) -> str: - return f'File {path!r} deleted' - - -result = agent.run_sync('Delete `__init__.py`, write `Hello, world!` to `README.md`, and clear `.env`') -messages = result.all_messages() - -assert isinstance(result.output, DeferredToolRequests) -requests = result.output -print(requests) -""" -DeferredToolRequests( - calls=[], - approvals=[ - ToolCallPart( - tool_name='update_file', - args={'path': '.env', 'content': ''}, - tool_call_id='update_file_dotenv', - ), - ToolCallPart( - tool_name='delete_file', - args={'path': '__init__.py'}, - tool_call_id='delete_file', - ), - ], -) -""" - -results = DeferredToolResults() -for call in requests.approvals: - result = False - if call.tool_name == 'update_file': - # Approve all updates - result = True - elif call.tool_name == 'delete_file': - # deny all deletes - result = ToolDenied('Deleting files is not allowed') - - results.approvals[call.tool_call_id] = result - -result = agent.run_sync(message_history=messages, deferred_tool_results=results) -print(result.output) -""" -I successfully updated `README.md` and cleared `.env`, but was not able to delete `__init__.py`. 
-""" -print(result.all_messages()) -""" -[ - ModelRequest( - parts=[ - UserPromptPart( - content='Delete `__init__.py`, write `Hello, world!` to `README.md`, and clear `.env`', - timestamp=datetime.datetime(...), - ) - ] - ), - ModelResponse( - parts=[ - ToolCallPart( - tool_name='delete_file', - args={'path': '__init__.py'}, - tool_call_id='delete_file', - ), - ToolCallPart( - tool_name='update_file', - args={'path': 'README.md', 'content': 'Hello, world!'}, - tool_call_id='update_file_readme', - ), - ToolCallPart( - tool_name='update_file', - args={'path': '.env', 'content': ''}, - tool_call_id='update_file_dotenv', - ), - ], - usage=RequestUsage(input_tokens=63, output_tokens=21), - model_name='gpt-5', - timestamp=datetime.datetime(...), - ), - ModelRequest( - parts=[ - ToolReturnPart( - tool_name='delete_file', - content='Deleting files is not allowed', - tool_call_id='delete_file', - timestamp=datetime.datetime(...), - ), - ToolReturnPart( - tool_name='update_file', - content="File 'README.md' updated: 'Hello, world!'", - tool_call_id='update_file_readme', - timestamp=datetime.datetime(...), - ), - ToolReturnPart( - tool_name='update_file', - content="File '.env' updated: ''", - tool_call_id='update_file_dotenv', - timestamp=datetime.datetime(...), - ), - ] - ), - ModelResponse( - parts=[ - TextPart( - content='I successfully updated `README.md` and cleared `.env`, but was not able to delete `__init__.py`.' - ) - ], - usage=RequestUsage(input_tokens=79, output_tokens=39), - model_name='gpt-5', - timestamp=datetime.datetime(...), - ), -] -""" -``` - -_(This example is complete, it can be run "as is")_ - -### External Tool Execution - -When the result of a tool call cannot be generated inside the same agent run in which it was called, the tool is considered to be external. -Examples of external tools are client-side tools implemented by a web or app frontend, and slow tasks that are passed off to a background worker or external service instead of keeping the agent process running. - -If whether a tool call should be executed externally depends on the tool call arguments, the agent [run context][pydantic_ai.tools.RunContext] (e.g. [dependencies](dependencies.md) or message history), or how long the task is expected to take, you can define a tool function and conditionally raise the [`CallDeferred`][pydantic_ai.exceptions.CallDeferred] exception. Before raising the exception, the tool function would typically schedule some background task and pass along the [`RunContext.tool_call_id`][pydantic_ai.tools.RunContext.tool_call_id] so that the result can be matched to the deferred tool call later. - -If a tool is always executed externally and its definition is provided to your code along with a JSON schema for its arguments, you can use an [`ExternalToolset`](toolsets.md#external-toolset). If the external tools are known up front and you don't have the arguments JSON schema handy, you can also define a tool function with the appropriate signature that does nothing but raise the [`CallDeferred`][pydantic_ai.exceptions.CallDeferred] exception. - -When the model calls an external tool, the agent run will end with a [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object with a `calls` list holding [`ToolCallPart`s][pydantic_ai.messages.ToolCallPart] containing the tool name, validated arguments, and a unique tool call ID. 
- -Once the tool call results are ready, you can build a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object with a `calls` dictionary that maps each tool call ID to an arbitrary value to be returned to the model, a [`ToolReturn`](#advanced-tool-returns) object, or a [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] exception in case the tool call failed and the model should [try again](#tool-retries). This `DeferredToolResults` object can then be provided to one of the agent run methods as `deferred_tool_results`, alongside the original run's [message history](message-history.md). - -Here's an example that shows how to move a task that takes a while to complete to the background and return the result to the model once the task is complete: - -```python {title="external_tool.py"} -import asyncio -from dataclasses import dataclass -from typing import Any - -from pydantic_ai import ( - Agent, - CallDeferred, - DeferredToolRequests, - DeferredToolResults, - ModelRetry, - RunContext, -) - - -@dataclass -class TaskResult: - tool_call_id: str - result: Any - - -async def calculate_answer_task(tool_call_id: str, question: str) -> TaskResult: - await asyncio.sleep(1) - return TaskResult(tool_call_id=tool_call_id, result=42) +## See Also -agent = Agent('openai:gpt-5', output_type=[str, DeferredToolRequests]) +For more tool features and integrations, see: -tasks: list[asyncio.Task[TaskResult]] = [] - - -@agent.tool -async def calculate_answer(ctx: RunContext, question: str) -> str: - assert ctx.tool_call_id is not None - - task = asyncio.create_task(calculate_answer_task(ctx.tool_call_id, question)) # (1)! - tasks.append(task) - - raise CallDeferred - - -async def main(): - result = await agent.run('Calculate the answer to the ultimate question of life, the universe, and everything') - messages = result.all_messages() - - assert isinstance(result.output, DeferredToolRequests) - requests = result.output - print(requests) - """ - DeferredToolRequests( - calls=[ - ToolCallPart( - tool_name='calculate_answer', - args={ - 'question': 'the ultimate question of life, the universe, and everything' - }, - tool_call_id='pyd_ai_tool_call_id', - ) - ], - approvals=[], - ) - """ - - done, _ = await asyncio.wait(tasks) # (2)! - task_results = [task.result() for task in done] - task_results_by_tool_call_id = {result.tool_call_id: result.result for result in task_results} - - results = DeferredToolResults() - for call in requests.calls: - try: - result = task_results_by_tool_call_id[call.tool_call_id] - except KeyError: - result = ModelRetry('No result for this tool call was found.') - - results.calls[call.tool_call_id] = result - - result = await agent.run(message_history=messages, deferred_tool_results=results) - print(result.output) - #> The answer to the ultimate question of life, the universe, and everything is 42. 
- print(result.all_messages()) - """ - [ - ModelRequest( - parts=[ - UserPromptPart( - content='Calculate the answer to the ultimate question of life, the universe, and everything', - timestamp=datetime.datetime(...), - ) - ] - ), - ModelResponse( - parts=[ - ToolCallPart( - tool_name='calculate_answer', - args={ - 'question': 'the ultimate question of life, the universe, and everything' - }, - tool_call_id='pyd_ai_tool_call_id', - ) - ], - usage=RequestUsage(input_tokens=63, output_tokens=13), - model_name='gpt-5', - timestamp=datetime.datetime(...), - ), - ModelRequest( - parts=[ - ToolReturnPart( - tool_name='calculate_answer', - content=42, - tool_call_id='pyd_ai_tool_call_id', - timestamp=datetime.datetime(...), - ) - ] - ), - ModelResponse( - parts=[ - TextPart( - content='The answer to the ultimate question of life, the universe, and everything is 42.' - ) - ], - usage=RequestUsage(input_tokens=64, output_tokens=28), - model_name='gpt-5', - timestamp=datetime.datetime(...), - ), - ] - """ -``` - -1. In reality, you'd likely use Celery or a similar task queue to run the task in the background. -2. In reality, this would typically happen in a separate process that polls for the task status or is notified when all pending tasks are complete. - -_(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_ - -## Tool Execution and Retries {#tool-retries} - -When a tool is executed, its arguments (provided by the LLM) are first validated against the function's signature using Pydantic. If validation fails (e.g., due to incorrect types or missing required arguments), a `ValidationError` is raised, and the framework automatically generates a [`RetryPromptPart`][pydantic_ai.messages.RetryPromptPart] containing the validation details. This prompt is sent back to the LLM, informing it of the error and allowing it to correct the parameters and retry the tool call. - -Beyond automatic validation errors, the tool's own internal logic can also explicitly request a retry by raising the [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] exception. This is useful for situations where the parameters were technically valid, but an issue occurred during execution (like a transient network error, or the tool determining the initial attempt needs modification). - -```python -from pydantic_ai import ModelRetry - - -def my_flaky_tool(query: str) -> str: - if query == 'bad': - # Tell the LLM the query was bad and it should try again - raise ModelRetry("The query 'bad' is not allowed. Please provide a different query.") - # ... process query ... - return 'Success!' -``` - -Raising `ModelRetry` also generates a `RetryPromptPart` containing the exception message, which is sent back to the LLM to guide its next attempt. Both `ValidationError` and `ModelRetry` respect the `retries` setting configured on the `Tool` or `Agent`. - -### Parallel tool calls & concurrency - -When a model returns multiple tool calls in one response, Pydantic AI schedules them concurrently using `asyncio.create_task`. - -Async functions are run on the event loop, while sync functions are offloaded to threads. To get the best performance, _always_ use an async function _unless_ you're doing blocking I/O (and there's no way to use a non-blocking library instead) or CPU-bound work (like `numpy` or `scikit-learn` operations), so that simple functions are not offloaded to threads unnecessarily. 
- -## Third-Party Tools - -### MCP Tools {#mcp-tools} - -See the [MCP Client](./mcp/client.md) documentation for how to use MCP servers with Pydantic AI as [toolsets](toolsets.md). - -### LangChain Tools {#langchain-tools} - -If you'd like to use a tool from LangChain's [community tool library](https://python.langchain.com/docs/integrations/tools/) with Pydantic AI, you can use the [`tool_from_langchain`][pydantic_ai.ext.langchain.tool_from_langchain] convenience method. Note that Pydantic AI will not validate the arguments in this case -- it's up to the model to provide arguments matching the schema specified by the LangChain tool, and up to the LangChain tool to raise an error if the arguments are invalid. - -You will need to install the `langchain-community` package and any others required by the tool in question. - -Here is how you can use the LangChain `DuckDuckGoSearchRun` tool, which requires the `ddgs` package: - -```python {test="skip"} -from langchain_community.tools import DuckDuckGoSearchRun - -from pydantic_ai import Agent -from pydantic_ai.ext.langchain import tool_from_langchain - -search = DuckDuckGoSearchRun() -search_tool = tool_from_langchain(search) - -agent = Agent( - 'google-gla:gemini-2.0-flash', - tools=[search_tool], -) - -result = agent.run_sync('What is the release date of Elden Ring Nightreign?') # (1)! -print(result.output) -#> Elden Ring Nightreign is planned to be released on May 30, 2025. -``` - -1. The release date of this game is the 30th of May 2025, which is after the knowledge cutoff for Gemini 2.0 (August 2024). - -If you'd like to use multiple LangChain tools or a LangChain [toolkit](https://python.langchain.com/docs/concepts/tools/#toolkits), you can use the [`LangChainToolset`][pydantic_ai.ext.langchain.LangChainToolset] [toolset](toolsets.md) which takes a list of LangChain tools: - -```python {test="skip"} -from langchain_community.agent_toolkits import SlackToolkit - -from pydantic_ai import Agent -from pydantic_ai.ext.langchain import LangChainToolset - -toolkit = SlackToolkit() -toolset = LangChainToolset(toolkit.get_tools()) - -agent = Agent('openai:gpt-4o', toolsets=[toolset]) -# ... -``` - -### ACI.dev Tools {#aci-tools} - -If you'd like to use a tool from the [ACI.dev tool library](https://www.aci.dev/tools) with Pydantic AI, you can use the [`tool_from_aci`][pydantic_ai.ext.aci.tool_from_aci] convenience method. Note that Pydantic AI will not validate the arguments in this case -- it's up to the model to provide arguments matching the schema specified by the ACI tool, and up to the ACI tool to raise an error if the arguments are invalid. - -You will need to install the `aci-sdk` package, set your ACI API key in the `ACI_API_KEY` environment variable, and pass your ACI "linked account owner ID" to the function. - -Here is how you can use the ACI.dev `TAVILY__SEARCH` tool: - -```python {test="skip"} -import os - -from pydantic_ai import Agent -from pydantic_ai.ext.aci import tool_from_aci - -tavily_search = tool_from_aci( - 'TAVILY__SEARCH', - linked_account_owner_id=os.getenv('LINKED_ACCOUNT_OWNER_ID'), -) - -agent = Agent( - 'google-gla:gemini-2.0-flash', - tools=[tavily_search], -) - -result = agent.run_sync('What is the release date of Elden Ring Nightreign?') # (1)! -print(result.output) -#> Elden Ring Nightreign is planned to be released on May 30, 2025. -``` - -1. The release date of this game is the 30th of May 2025, which is after the knowledge cutoff for Gemini 2.0 (August 2024). 
- -If you'd like to use multiple ACI.dev tools, you can use the [`ACIToolset`][pydantic_ai.ext.aci.ACIToolset] [toolset](toolsets.md) which takes a list of ACI tool names as well as the `linked_account_owner_id`: - -```python {test="skip"} -import os - -from pydantic_ai import Agent -from pydantic_ai.ext.aci import ACIToolset - -toolset = ACIToolset( - [ - 'OPEN_WEATHER_MAP__CURRENT_WEATHER', - 'OPEN_WEATHER_MAP__FORECAST', - ], - linked_account_owner_id=os.getenv('LINKED_ACCOUNT_OWNER_ID'), -) - -agent = Agent('openai:gpt-4o', toolsets=[toolset]) -``` +- [Advanced Tool Features](tools-advanced.md) - Custom schemas, dynamic tools, tool execution and retries +- [Toolsets](toolsets.md) - Managing collections of tools +- [Builtin Tools](builtin-tools.md) - Native tools provided by LLM providers +- [Common Tools](common-tools.md) - Ready-to-use tool implementations +- [Third-Party Tools](third-party-tools.md) - Integrations with MCP, LangChain, ACI.dev and other tool libraries +- [Deferred Tools](deferred-tools.md) - Tools requiring approval or external execution diff --git a/docs/toolsets.md b/docs/toolsets.md index f68d8cd4f8..7c1bce84d6 100644 --- a/docs/toolsets.md +++ b/docs/toolsets.md @@ -243,7 +243,7 @@ _(This example is complete, it can be run "as is")_ [`PreparedToolset`][pydantic_ai.toolsets.PreparedToolset] lets you modify the entire list of available tools ahead of each step of the agent run using a user-defined function that takes the agent [run context][pydantic_ai.tools.RunContext] and a list of [`ToolDefinition`s][pydantic_ai.tools.ToolDefinition] and returns a list of modified `ToolDefinition`s. -This is the toolset-specific equivalent of the [`prepare_tools`](tools.md#prepare-tools) argument to `Agent` that prepares all tool definitions registered on an agent across toolsets. +This is the toolset-specific equivalent of the [`prepare_tools`](tools-advanced.md#prepare-tools) argument to `Agent` that prepares all tool definitions registered on an agent across toolsets. Note that it is not possible to add or rename tools using `PreparedToolset`. Instead, you can use [`FunctionToolset.add_function()`](#function-toolset) or [`RenamedToolset`](#renaming-tools). @@ -328,11 +328,11 @@ print(test_model.last_model_request_parameters.function_tools) ### Requiring Tool Approval -[`ApprovalRequiredToolset`][pydantic_ai.toolsets.ApprovalRequiredToolset] wraps a toolset and lets you dynamically [require approval](tools.md#human-in-the-loop-tool-approval) for a given tool call based on a user-defined function that is passed the agent [run context][pydantic_ai.tools.RunContext], the tool's [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and the validated tool call arguments. If no function is provided, all tool calls will require approval. +[`ApprovalRequiredToolset`][pydantic_ai.toolsets.ApprovalRequiredToolset] wraps a toolset and lets you dynamically [require approval](deferred-tools.md#human-in-the-loop-tool-approval) for a given tool call based on a user-defined function that is passed the agent [run context][pydantic_ai.tools.RunContext], the tool's [`ToolDefinition`][pydantic_ai.tools.ToolDefinition], and the validated tool call arguments. If no function is provided, all tool calls will require approval. To easily chain different modifications, you can also call [`approval_required()`][pydantic_ai.toolsets.AbstractToolset.approval_required] on any toolset instead of directly constructing a `ApprovalRequiredToolset`. 
-See the [Human-in-the-Loop Tool Approval](tools.md#human-in-the-loop-tool-approval) documentation for more information on how to handle agent runs that call tools that require approval and how to pass in the results. +See the [Human-in-the-Loop Tool Approval](deferred-tools.md#human-in-the-loop-tool-approval) documentation for more information on how to handle agent runs that call tools that require approval and how to pass in the results. ```python {title="approval_required_toolset.py" requires="function_toolset.py,combined_toolset.py,renamed_toolset.py,prepared_toolset.py"} from pydantic_ai import Agent, DeferredToolRequests, DeferredToolResults @@ -446,13 +446,13 @@ _(This example is complete, it can be run "as is")_ ## External Toolset -If your agent needs to be able to call [external tools](tools.md#external-tool-execution) that are provided and executed by an upstream service or frontend, you can build an [`ExternalToolset`][pydantic_ai.toolsets.ExternalToolset] from a list of [`ToolDefinition`s][pydantic_ai.tools.ToolDefinition] containing the tool names, arguments JSON schemas, and descriptions. +If your agent needs to be able to call [external tools](deferred-tools.md#external-tool-execution) that are provided and executed by an upstream service or frontend, you can build an [`ExternalToolset`][pydantic_ai.toolsets.ExternalToolset] from a list of [`ToolDefinition`s][pydantic_ai.tools.ToolDefinition] containing the tool names, arguments JSON schemas, and descriptions. -When the model calls an external tool, the call is considered to be ["deferred"](tools.md#deferred-tools), and the agent run will end with a [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object with a `calls` list holding [`ToolCallPart`s][pydantic_ai.messages.ToolCallPart] containing the tool name, validated arguments, and a unique tool call ID, which are expected to be passed to the upstream service or frontend that will produce the results. +When the model calls an external tool, the call is considered to be ["deferred"](deferred-tools.md#deferred-tools), and the agent run will end with a [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object with a `calls` list holding [`ToolCallPart`s][pydantic_ai.messages.ToolCallPart] containing the tool name, validated arguments, and a unique tool call ID, which are expected to be passed to the upstream service or frontend that will produce the results. -When the tool call results are received from the upstream service or frontend, you can build a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object with a `calls` dictionary that maps each tool call ID to an arbitrary value to be returned to the model, a [`ToolReturn`](tools.md#advanced-tool-returns) object, or a [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] exception in case the tool call failed and the model should [try again](tools.md#tool-retries). This `DeferredToolResults` object can then be provided to one of the agent run methods as `deferred_tool_results`, alongside the original run's [message history](message-history.md). 
+When the tool call results are received from the upstream service or frontend, you can build a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object with a `calls` dictionary that maps each tool call ID to an arbitrary value to be returned to the model, a [`ToolReturn`](tools-advanced.md#advanced-tool-returns) object, or a [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] exception in case the tool call failed and the model should [try again](tools-advanced.md#tool-retries). This `DeferredToolResults` object can then be provided to one of the agent run methods as `deferred_tool_results`, alongside the original run's [message history](message-history.md). -Note that you need to add `DeferredToolRequests` to the `Agent`'s or `agent.run()`'s [`output_type`](output.md#structured-output) so that the possible types of the agent run output are correctly inferred. For more information, see the [Deferred Tools](tools.md#deferred-tools) documentation. +Note that you need to add `DeferredToolRequests` to the `Agent`'s or `agent.run()`'s [`output_type`](output.md#structured-output) so that the possible types of the agent run output are correctly inferred. For more information, see the [Deferred Tools](deferred-tools.md#deferred-tools) documentation. To demonstrate, let us first define a simple agent _without_ deferred tools: @@ -512,8 +512,8 @@ def run_agent( return result.output, result.new_messages() ``` -1. As mentioned in the [Deferred Tools](tools.md#deferred-tools) documentation, these `toolsets` are additional to those provided to the `Agent` constructor -2. As mentioned in the [Deferred Tools](tools.md#deferred-tools) documentation, this `output_type` overrides the one provided to the `Agent` constructor, so we have to make sure to not lose it +1. As mentioned in the [Deferred Tools](deferred-tools.md#deferred-tools) documentation, these `toolsets` are additional to those provided to the `Agent` constructor +2. As mentioned in the [Deferred Tools](deferred-tools.md#deferred-tools) documentation, this `output_type` overrides the one provided to the `Agent` constructor, so we have to make sure to not lose it 3. We don't include an `user_prompt` keyword argument as we expect the frontend to provide it via `messages` Now, imagine that the code below is implemented on the frontend, and `run_agent` stands in for an API call to the backend that runs the agent. This is where we actually execute the deferred tool calls and start a new run with the new result included: diff --git a/mkdocs.yml b/mkdocs.yml index 13ddbcb424..22bb799edb 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -35,9 +35,13 @@ nav: - models/mistral.md - models/huggingface.md - Tools & Toolsets: + - tools.md + - tools-advanced.md - toolsets.md + - deferred-tools.md - builtin-tools.md - common-tools.md + - third-party-tools.md - Advanced Features: - input.md - thinking.md diff --git a/pydantic_ai_slim/pydantic_ai/agent/__init__.py b/pydantic_ai_slim/pydantic_ai/agent/__init__.py index 273c1ef944..c6ab745ad8 100644 --- a/pydantic_ai_slim/pydantic_ai/agent/__init__.py +++ b/pydantic_ai_slim/pydantic_ai/agent/__init__.py @@ -1066,7 +1066,7 @@ async def spam(ctx: RunContext[str], y: float) -> float: strict: Whether to enforce JSON schema compliance (only affects OpenAI). See [`ToolDefinition`][pydantic_ai.tools.ToolDefinition] for more info. requires_approval: Whether this tool requires human-in-the-loop approval. Defaults to False. - See the [tools documentation](../tools.md#human-in-the-loop-tool-approval) for more info. 
+ See the [tools documentation](../deferred-tools.md#human-in-the-loop-tool-approval) for more info. """ def tool_decorator( @@ -1165,7 +1165,7 @@ async def spam(ctx: RunContext[str]) -> float: strict: Whether to enforce JSON schema compliance (only affects OpenAI). See [`ToolDefinition`][pydantic_ai.tools.ToolDefinition] for more info. requires_approval: Whether this tool requires human-in-the-loop approval. Defaults to False. - See the [tools documentation](../tools.md#human-in-the-loop-tool-approval) for more info. + See the [tools documentation](../deferred-tools.md#human-in-the-loop-tool-approval) for more info. """ def tool_decorator(func_: ToolFuncPlain[ToolParams]) -> ToolFuncPlain[ToolParams]: diff --git a/pydantic_ai_slim/pydantic_ai/exceptions.py b/pydantic_ai_slim/pydantic_ai/exceptions.py index 4e200f280f..58a7686e06 100644 --- a/pydantic_ai_slim/pydantic_ai/exceptions.py +++ b/pydantic_ai_slim/pydantic_ai/exceptions.py @@ -65,7 +65,7 @@ def __get_pydantic_core_schema__(cls, _: Any, __: Any) -> core_schema.CoreSchema class CallDeferred(Exception): """Exception to raise when a tool call should be deferred. - See [tools docs](../tools.md#deferred-tools) for more information. + See [tools docs](../deferred-tools.md#deferred-tools) for more information. """ pass @@ -74,7 +74,7 @@ class CallDeferred(Exception): class ApprovalRequired(Exception): """Exception to raise when a tool call requires human-in-the-loop approval. - See [tools docs](../tools.md#human-in-the-loop-tool-approval) for more information. + See [tools docs](../deferred-tools.md#human-in-the-loop-tool-approval) for more information. """ pass diff --git a/pydantic_ai_slim/pydantic_ai/tools.py b/pydantic_ai_slim/pydantic_ai/tools.py index 401e758a48..0e11f8055d 100644 --- a/pydantic_ai_slim/pydantic_ai/tools.py +++ b/pydantic_ai_slim/pydantic_ai/tools.py @@ -70,7 +70,7 @@ ToolPrepareFunc: TypeAlias = Callable[[RunContext[AgentDepsT], 'ToolDefinition'], Awaitable['ToolDefinition | None']] """Definition of a function that can prepare a tool definition at call time. -See [tool docs](../tools.md#tool-prepare) for more information. +See [tool docs](../tools-advanced.md#tool-prepare) for more information. Example — here `only_if_42` is valid as a `ToolPrepareFunc`: @@ -140,7 +140,7 @@ class DeferredToolRequests: Results can be passed to the next agent run using a [`DeferredToolResults`][pydantic_ai.tools.DeferredToolResults] object with the same tool call IDs. - See [deferred tools docs](../tools.md#deferred-tools) for more information. + See [deferred tools docs](../deferred-tools.md#deferred-tools) for more information. """ calls: list[ToolCallPart] = field(default_factory=list) @@ -204,7 +204,7 @@ class DeferredToolResults: The tool call IDs need to match those from the [`DeferredToolRequests`][pydantic_ai.output.DeferredToolRequests] output object from the previous run. - See [deferred tools docs](../tools.md#deferred-tools) for more information. + See [deferred tools docs](../deferred-tools.md#deferred-tools) for more information. """ calls: dict[str, DeferredToolCallResult | Any] = field(default_factory=dict) @@ -328,7 +328,7 @@ async def prep_my_tool( strict: Whether to enforce JSON schema compliance (only affects OpenAI). See [`ToolDefinition`][pydantic_ai.tools.ToolDefinition] for more info. requires_approval: Whether this tool requires human-in-the-loop approval. Defaults to False. - See the [tools documentation](../tools.md#human-in-the-loop-tool-approval) for more info. 
+ See the [tools documentation](../deferred-tools.md#human-in-the-loop-tool-approval) for more info. function_schema: The function schema to use for the tool. If not provided, it will be generated. """ self.function = function @@ -472,16 +472,16 @@ class ToolDefinition: - `'function'`: a tool that will be executed by Pydantic AI during an agent run and has its result returned to the model - `'output'`: a tool that passes through an output value that ends the run - `'external'`: a tool whose result will be produced outside of the Pydantic AI agent run in which it was called, because it depends on an upstream service (or user) or could take longer to generate than it's reasonable to keep the agent process running. - See the [tools documentation](../tools.md#deferred-tools) for more info. + See the [tools documentation](../deferred-tools.md#deferred-tools) for more info. - `'unapproved'`: a tool that requires human-in-the-loop approval. - See the [tools documentation](../tools.md#human-in-the-loop-tool-approval) for more info. + See the [tools documentation](../deferred-tools.md#human-in-the-loop-tool-approval) for more info. """ @property def defer(self) -> bool: """Whether calls to this tool will be deferred. - See the [tools documentation](../tools.md#deferred-tools) for more info. + See the [tools documentation](../deferred-tools.md#deferred-tools) for more info. """ return self.kind in ('external', 'unapproved') diff --git a/pydantic_ai_slim/pydantic_ai/toolsets/function.py b/pydantic_ai_slim/pydantic_ai/toolsets/function.py index ff015f42da..6a467808d6 100644 --- a/pydantic_ai_slim/pydantic_ai/toolsets/function.py +++ b/pydantic_ai_slim/pydantic_ai/toolsets/function.py @@ -162,7 +162,7 @@ async def spam(ctx: RunContext[str], y: float) -> float: strict: Whether to enforce JSON schema compliance (only affects OpenAI). See [`ToolDefinition`][pydantic_ai.tools.ToolDefinition] for more info. requires_approval: Whether this tool requires human-in-the-loop approval. Defaults to False. - See the [tools documentation](../tools.md#human-in-the-loop-tool-approval) for more info. + See the [tools documentation](../deferred-tools.md#human-in-the-loop-tool-approval) for more info. """ def tool_decorator( @@ -223,7 +223,7 @@ def add_function( strict: Whether to enforce JSON schema compliance (only affects OpenAI). See [`ToolDefinition`][pydantic_ai.tools.ToolDefinition] for more info. requires_approval: Whether this tool requires human-in-the-loop approval. Defaults to False. - See the [tools documentation](../tools.md#human-in-the-loop-tool-approval) for more info. + See the [tools documentation](../deferred-tools.md#human-in-the-loop-tool-approval) for more info. """ if docstring_format is None: docstring_format = self.docstring_format