docs/builtin-tools.md (84 additions & 0 deletions)

@@ -9,6 +9,7 @@ Pydantic AI supports the following builtin tools:

- **[`WebSearchTool`][pydantic_ai.builtin_tools.WebSearchTool]**: Allows agents to search the web
- **[`CodeExecutionTool`][pydantic_ai.builtin_tools.CodeExecutionTool]**: Enables agents to execute code in a secure environment
- **[`UrlContextTool`][pydantic_ai.builtin_tools.UrlContextTool]**: Enables agents to pull URL contents into their context
+- **[`MemoryTool`][pydantic_ai.builtin_tools.MemoryTool]**: Enables agents to use memory

These tools are passed to the agent via the `builtin_tools` parameter and are executed by the model provider's infrastructure.
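For illustration, a minimal sketch of that parameter in use (the model string is a placeholder, not something taken from the changed docs; any provider that supports the given tool works):

```py
from pydantic_ai import Agent
from pydantic_ai.builtin_tools import WebSearchTool

# The builtin tool is executed on the model provider's infrastructure, not locally.
agent = Agent('anthropic:claude-sonnet-4-5', builtin_tools=[WebSearchTool()])

result = agent.run_sync('What are the latest Python releases?')
print(result.output)
```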
@@ -160,6 +161,89 @@ result = agent.run_sync('What is this? https://ai.pydantic.dev')
# > A Python agent framework for building Generative AI applications.

## Memory Tool

The [`MemoryTool`][pydantic_ai.builtin_tools.MemoryTool] enables your agent to use memory.

### Provider Support

| Provider | Supported | Notes |
|----------|-----------|-------|
| Anthropic | ✅ | Requires a tool named `memory` to be defined that implements [specific sub-commands](https://docs.claude.com/en/docs/agents-and-tools/tool-use/memory-tool#tool-commands). You can use a subclass of [`anthropic.lib.tools.BetaAbstractMemoryTool`](https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/lib/tools/_beta_builtin_memory_tool.py) as documented below. |
| Google | ❌ | |
| OpenAI | ❌ | |
| Groq | ❌ | |
| Bedrock | ❌ | |
| Mistral | ❌ | |
| Cohere | ❌ | |
| HuggingFace | ❌ | |

### Usage

The Anthropic SDK provides an abstract [`BetaAbstractMemoryTool`](https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/lib/tools/_beta_builtin_memory_tool.py) class that you can subclass to create your own memory storage solution (e.g., database, cloud storage, encrypted files, etc.). Their [`LocalFilesystemMemoryTool`](https://github.com/anthropics/anthropic-sdk-python/blob/main/examples/memory/basic.py) example can serve as a starting point.

The following example uses a subclass that hard-codes a specific memory. The bits specific to Pydantic AI are the `MemoryTool` built-in tool and the `memory` tool definition that forwards commands to the `call` method of the `BetaAbstractMemoryTool` subclass.

```py title="anthropic_memory.py"
from typing import Any

from anthropic.lib.tools import BetaAbstractMemoryTool
# ...
```
docs/cli.md (2 additions & 2 deletions)

@@ -114,8 +114,8 @@ _(You'll need to add `asyncio.run(main())` to run `main`)_
Both `Agent.to_cli()` and `Agent.to_cli_sync()` support a `message_history` parameter, allowing you to continue an existing conversation or provide conversation context:
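The updated example itself isn't visible in this excerpt; as a minimal sketch of the parameter (the model name and prompt are placeholders), a prior programmatic exchange can be handed to the CLI like this:

```py
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


async def main():
    # Run one turn programmatically, then continue the same conversation in the CLI
    # by passing the accumulated messages as context.
    result = await agent.run('What is the capital of France?')
    await agent.to_cli(message_history=result.new_messages())
```

_(As above, you'll need to add `asyncio.run(main())` to run `main`)_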
docs/durable_execution/dbos.md (2 additions & 2 deletions)

@@ -123,7 +123,7 @@ Other than that, any agent and toolset will just work!

### Agent Run Context and Dependencies

-DBOS checkpoints workflow inputs/outputs and step outputs into a database using `jsonpickle`. This means you need to make sure the [dependencies](../dependencies.md) object provided to [`DBOSAgent.run()`][pydantic_ai.durable_exec.dbos.DBOSAgent.run] or [`DBOSAgent.run_sync()`][pydantic_ai.durable_exec.dbos.DBOSAgent.run_sync], as well as tool outputs, can be serialized using jsonpickle. You may also want to keep the inputs and outputs small (under ~2 MB). PostgreSQL and SQLite support up to 1 GB per field, but large objects may impact performance.
+DBOS checkpoints workflow inputs/outputs and step outputs into a database using [`pickle`](https://docs.python.org/3/library/pickle.html). This means you need to make sure the [dependencies](../dependencies.md) object provided to [`DBOSAgent.run()`][pydantic_ai.durable_exec.dbos.DBOSAgent.run] or [`DBOSAgent.run_sync()`][pydantic_ai.durable_exec.dbos.DBOSAgent.run_sync], as well as tool outputs, can be serialized using pickle. You may also want to keep the inputs and outputs small (under ~2 MB). PostgreSQL and SQLite support up to 1 GB per field, but large objects may impact performance.

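As a quick illustration of that constraint (the dependency class below is invented for the example), it's worth checking that your deps object survives a pickle round-trip before wiring it into the durable agent:

```py
import pickle
from dataclasses import dataclass


@dataclass
class AppDeps:
    """Hypothetical dependencies passed to DBOSAgent.run() / DBOSAgent.run_sync()."""
    api_url: str
    max_retries: int = 3


deps = AppDeps(api_url='https://example.com/api')

# If this round-trip fails (e.g. the deps hold an open connection, a lambda, or a client
# object), DBOS won't be able to checkpoint the workflow inputs either.
assert pickle.loads(pickle.dumps(deps)) == deps
```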
### Streaming
@@ -153,6 +153,6 @@ You can customize DBOS's retry policy using [step configuration](#step-configuration)

## Observability with Logfire

-DBOS automatically generates OpenTelemetry spans for each workflow and step execution, and Pydantic AI emits spans for each agent run, model request, and tool invocation. You can send these spans to [Pydantic Logfire](../logfire.md) to get a full, end-to-end view of what's happening in your application.
+DBOS can be configured to generate OpenTelemetry spans for each workflow and step execution, and Pydantic AI emits spans for each agent run, model request, and tool invocation. You can send these spans to [Pydantic Logfire](../logfire.md) to get a full, end-to-end view of what's happening in your application.

For more information about DBOS logging and tracing, see the [DBOS docs](https://docs.dbos.dev/python/tutorials/logging-and-tracing).
-Pydantic AI comes with two ways to connect to MCP servers:
+Pydantic AI comes with three ways to connect to MCP servers:

- [`MCPServerStreamableHTTP`][pydantic_ai.mcp.MCPServerStreamableHTTP] which connects to an MCP server using the [Streamable HTTP](https://modelcontextprotocol.io/introduction#streamable-http) transport
- [`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] which connects to an MCP server using the [HTTP SSE](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#http-with-sse) transport
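As a rough sketch of the Streamable HTTP option (the server URL, model name, `toolsets` parameter, and `async with agent:` pattern are assumptions based on recent Pydantic AI releases rather than anything shown in this diff):

```py
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStreamableHTTP

# Assumes an MCP server exposing an `add` tool is already running locally.
server = MCPServerStreamableHTTP('http://localhost:8000/mcp')
agent = Agent('openai:gpt-4o', toolsets=[server])


async def main():
    async with agent:  # keeps the MCP connection open for the duration of the block
        result = await agent.run('What is 7 plus 5?')
    print(result.output)
```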
@@ -72,14 +72,14 @@ _(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_

**What's happening here?**

-- The model is receiving the prompt "how many days between 2000-01-01 and 2025-03-18?"
-- The model decides "Oh, I've got this `run_python_code` tool, that will be a good way to answer this question", and writes some python code to calculate the answer.
+- The model receives the prompt "What is 7 plus 5?"
+- The model decides "Oh, I've got this `add` tool, that will be a good way to answer this question"
- The model returns a tool call
-- Pydantic AI sends the tool call to the MCP server using the SSE transport
-- The model is called again with the return value of running the code
+- Pydantic AI sends the tool call to the MCP server using the Streamable HTTP transport
+- The model is called again with the return value of running the `add` tool (12)
- The model returns the final answer

-You can visualise this clearly, and even see the code that's run, by adding three lines of code to instrument the example with [logfire](https://logfire.pydantic.dev/docs):
+You can visualise this clearly, and even see the tool call, by adding three lines of code to instrument the example with [logfire](https://logfire.pydantic.dev/docs):
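The three lines themselves aren't shown in this excerpt; they are presumably along these lines (`logfire.instrument_pydantic_ai()` and `logfire.instrument_mcp()` are assumptions based on the Logfire SDK's integrations):

```py
import logfire

logfire.configure()
logfire.instrument_pydantic_ai()  # spans for agent runs, model requests, and tool calls
logfire.instrument_mcp()  # spans for MCP client traffic, including the tool call itself
```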
### SSE Client
[`MCPServerSSE`][pydantic_ai.mcp.MCPServerSSE] connects over HTTP using the [HTTP + Server Sent Events transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#http-with-sse) to a server.
@@ -216,10 +212,10 @@ async def main():

_(This example is complete, it can be run "as is" — you'll need to add `asyncio.run(main())` to run `main`)_

-## Tool call customisation
+## Tool call customization

The MCP servers provide the ability to set a `process_tool_call` which allows
-the customisation of tool call requests and their responses.
+the customization of tool call requests and their responses.

A common use case for this is to inject metadata into the requests which the server can then use.
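As a rough sketch of what such a hook can look like (the callback signature, the `CallToolFunc` and `ToolResult` names, and the `metadata` keyword below are assumptions to verify against the MCP client docs, not confirmed API):

```py
from typing import Any

from pydantic_ai import Agent, RunContext
from pydantic_ai.mcp import CallToolFunc, MCPServerStreamableHTTP, ToolResult


async def process_tool_call(
    ctx: RunContext[None],
    call_tool: CallToolFunc,
    name: str,
    tool_args: dict[str, Any],
) -> ToolResult:
    # Forward the call to the MCP server, attaching extra metadata to the request.
    return await call_tool(name, tool_args, metadata={'user_id': 'abc123'})


server = MCPServerStreamableHTTP('http://localhost:8000/mcp', process_tool_call=process_tool_call)
agent = Agent('openai:gpt-4o', toolsets=[server])
```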