Commit 1f0e225: Rename retreiver to tool (#96)
1 parent c9c65b3

37 files changed: +315 -320 lines

docs/agents.md

Lines changed: 44 additions & 42 deletions
@@ -8,10 +8,10 @@ but multiple agents can also interact to embody more complex workflows.
 The [`Agent`][pydantic_ai.Agent] class has full API documentation, but conceptually you can think of an agent as a container for:
 
 * A [system prompt](#system-prompts) — a set of instructions for the LLM written by the developer
-* One or more [retrievers](#retrievers) — functions that the LLM may call to get information while generating a response
+* One or more [retrieval tools](#tools) — functions that the LLM may call to get information while generating a response
 * An optional structured [result type](results.md) — the structured datatype the LLM must return at the end of a run
-* A [dependency](dependencies.md) type constraint — system prompt functions, retrievers and result validators may all use dependencies when they're run
-* Agents may optionally also have a default [model](api/models/base.md) associated with them; the model to use can also be specified when running the agent
+* A [dependency](dependencies.md) type constraint — system prompt functions, tools and result validators may all use dependencies when they're run
+* Agents may optionally also have a default [LLM model](api/models/base.md) associated with them; the model to use can also be specified when running the agent
 
 In typing terms, agents are generic in their dependency and result types, e.g., an agent which required dependencies of type `#!python Foobar` and returned results of type `#!python list[str]` would have type `#!python Agent[Foobar, list[str]]`.
 
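The generics described in that hunk can be sketched with a stdlib-only toy class; `ToyAgent` and `Foobar` here are hypothetical stand-ins, not the real `pydantic_ai.Agent`:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

DepsT = TypeVar('DepsT')      # dependency type
ResultT = TypeVar('ResultT')  # result type


@dataclass
class ToyAgent(Generic[DepsT, ResultT]):
    """Hypothetical stand-in for pydantic_ai.Agent, to illustrate the two type parameters."""

    deps: DepsT
    result: ResultT


@dataclass
class Foobar:
    value: int


# This agent has type ToyAgent[Foobar, list[str]]: a type checker would flag
# passing anything other than a Foobar as the dependency.
agent: ToyAgent[Foobar, list[str]] = ToyAgent(deps=Foobar(1), result=['a', 'b'])
print(type(agent.deps).__name__)
#> Foobar
```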
@@ -31,7 +31,7 @@ roulette_agent = Agent( # (1)!
 )
 
 
-@roulette_agent.retriever
+@roulette_agent.tool
 async def roulette_wheel(ctx: CallContext[int], square: int) -> str: # (2)!
     """check if the square is a winner"""
     return 'winner' if square == ctx.deps else 'loser'
@@ -49,7 +49,7 @@ print(result.data)
 ```
 
 1. Create an agent, which expects an integer dependency and returns a boolean result. This agent will have type `#!python Agent[int, bool]`.
-2. Define a retriever that checks if the square is a winner. Here [`CallContext`][pydantic_ai.dependencies.CallContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error.
+2. Define a tool that checks if the square is a winner. Here [`CallContext`][pydantic_ai.dependencies.CallContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error.
 3. In reality, you might want to use a random number here e.g. `random.randint(0, 36)`.
 4. `result.data` will be a boolean indicating if the square is a winner. Pydantic performs the result validation, it'll be typed as a `bool` since its type is derived from the `result_type` generic parameter of the agent.
 
@@ -166,23 +166,23 @@ print(result.data)
 
 _(This example is complete, it can be run "as is")_
 
-## Retrievers
+## Function Tools
 
-Retrievers provide a mechanism for models to request extra information to help them generate a response.
+Function tools provide a mechanism for models to retrieve extra information to help them generate a response.
 
 They're useful when it is impractical or impossible to put all the context an agent might need into the system prompt, or when you want to make agents' behavior more deterministic or reliable by deferring some of the logic required to generate a response to another (not necessarily AI-powered) tool.
 
-!!! info "Retrievers vs. RAG"
-    Retrievers are basically the "R" of RAG (Retrieval-Augmented Generation) — they augment what the model can do by letting it request extra information.
+!!! info "Function tools vs. RAG"
+    Function tools are basically the "R" of RAG (Retrieval-Augmented Generation) — they augment what the model can do by letting it request extra information.
 
-    The main semantic difference between PydanticAI Retrievers and RAG is RAG is synonymous with vector search, while PydanticAI retrievers are more general-purpose. (Note: we may add support for vector search functionality in the future, particularly an API for generating embeddings. See [#58](https://github.com/pydantic/pydantic-ai/issues/58))
+    The main semantic difference between PydanticAI Tools and RAG is RAG is synonymous with vector search, while PydanticAI tools are more general-purpose. (Note: we may add support for vector search functionality in the future, particularly an API for generating embeddings. See [#58](https://github.com/pydantic/pydantic-ai/issues/58))
 
-There are two different decorator functions to register retrievers:
+There are two different decorator functions to register tools:
 
-1. [`@agent.retriever_plain`][pydantic_ai.Agent.retriever_plain] — for retrievers that don't need access to the agent [context][pydantic_ai.dependencies.CallContext]
-2. [`@agent.retriever`][pydantic_ai.Agent.retriever] — for retrievers that do need access to the agent [context][pydantic_ai.dependencies.CallContext]
+1. [`@agent.tool`][pydantic_ai.Agent.tool] — for tools that need access to the agent [context][pydantic_ai.dependencies.CallContext]
+2. [`@agent.tool_plain`][pydantic_ai.Agent.tool_plain] — for tools that do not need access to the agent [context][pydantic_ai.dependencies.CallContext]
 
-`@agent.retriever` is the default since in the majority of cases retrievers will need access to the agent context.
+`@agent.tool` is the default since in the majority of cases tools will need access to the agent context.
 
 Here's an example using both:
 
@@ -202,13 +202,13 @@ agent = Agent(
 )
 
 
-@agent.retriever_plain # (3)!
+@agent.tool_plain # (3)!
 def roll_die() -> str:
     """Roll a six-sided die and return the result."""
     return str(random.randint(1, 6))
 
 
-@agent.retriever # (4)!
+@agent.tool # (4)!
 def get_player_name(ctx: CallContext[str]) -> str:
     """Get the player's name."""
     return ctx.deps
@@ -221,8 +221,8 @@ print(dice_result.data)
 
 1. This is a pretty simple task, so we can use the fast and cheap Gemini flash model.
 2. We pass the user's name as the dependency, to keep things simple we use just the name as a string as the dependency.
-3. This retriever doesn't need any context, it just returns a random number. You could probably use a dynamic system prompt in this case.
-4. This retriever needs the player's name, so it uses `CallContext` to access dependencies which are just the player's name in this case.
+3. This tool doesn't need any context, it just returns a random number. You could probably use a dynamic system prompt in this case.
+4. This tool needs the player's name, so it uses `CallContext` to access dependencies which are just the player's name in this case.
 5. Run the agent, passing the player's name as the dependency.
 
 _(This example is complete, it can be run "as is")_
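The two registration decorators renamed above can be sketched with a stdlib-only toy registry. This is a hypothetical illustration of the semantics, not the actual pydantic-ai internals: `tool` registers a function that receives the call context as its first argument, `tool_plain` one that does not.

```python
import random
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class Context:
    """Hypothetical stand-in for pydantic_ai's CallContext."""

    deps: Any


@dataclass
class ToyAgent:
    """Toy tool registry sketching @agent.tool vs. @agent.tool_plain."""

    tools: dict[str, Callable] = field(default_factory=dict)
    plain: set[str] = field(default_factory=set)

    def tool(self, func: Callable) -> Callable:
        # Context-aware tools receive the call context as their first argument.
        self.tools[func.__name__] = func
        return func

    def tool_plain(self, func: Callable) -> Callable:
        # Plain tools are invoked without the context.
        self.tools[func.__name__] = func
        self.plain.add(func.__name__)
        return func

    def call(self, name: str, ctx: Context, *args: Any) -> Any:
        func = self.tools[name]
        return func(*args) if name in self.plain else func(ctx, *args)


agent = ToyAgent()


@agent.tool_plain
def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


@agent.tool
def get_player_name(ctx: Context) -> str:
    """Get the player's name."""
    return ctx.deps


print(agent.call('get_player_name', Context(deps='Anne')))
#> Anne
```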
@@ -297,19 +297,19 @@ sequenceDiagram
     Note over Agent: Send prompts
     Agent ->> LLM: System: "You're a dice game..."<br>User: "My guess is 4"
     activate LLM
-    Note over LLM: LLM decides to use<br>a retriever
+    Note over LLM: LLM decides to use<br>a tool
 
-    LLM ->> Agent: Call retriever<br>roll_die()
+    LLM ->> Agent: Call tool<br>roll_die()
     deactivate LLM
     activate Agent
     Note over Agent: Rolls a six-sided die
 
     Agent -->> LLM: ToolReturn<br>"4"
     deactivate Agent
     activate LLM
-    Note over LLM: LLM decides to use<br>another retriever
+    Note over LLM: LLM decides to use<br>another tool
 
-    LLM ->> Agent: Call retriever<br>get_player_name()
+    LLM ->> Agent: Call tool<br>get_player_name()
     deactivate LLM
     activate Agent
     Note over Agent: Retrieves player name
@@ -323,27 +323,29 @@ sequenceDiagram
     Note over Agent: Game session complete
 ```
 
-### Retrievers, tools, and schema
+### Function Tools vs. Structured Results
 
-Under the hood, retrievers use the model's "tools" or "functions" API to let the model know what retrievers are available to call. Tools or functions are also used to define the schema(s) for structured responses, thus a model might have access to many tools, some of which call retrievers while others end the run and return a result.
+As the name suggests, function tools use the model's "tools" or "functions" API to let the model know what is available to call. Tools or functions are also used to define the schema(s) for structured responses, thus a model might have access to many tools, some of which call function tools while others end the run and return a result.
+
+### Function tools and schema
 
 Function parameters are extracted from the function signature, and all parameters except `CallContext` are used to build the schema for that tool call.
 
-Even better, PydanticAI extracts the docstring from retriever functions and (thanks to [griffe](https://mkdocstrings.github.io/griffe/)) extracts parameter descriptions from the docstring and adds them to the schema.
+Even better, PydanticAI extracts the docstring from functions and (thanks to [griffe](https://mkdocstrings.github.io/griffe/)) extracts parameter descriptions from the docstring and adds them to the schema.
 
 [Griffe supports](https://mkdocstrings.github.io/griffe/reference/docstrings/#docstrings) extracting parameter descriptions from `google`, `numpy` and `sphinx` style docstrings, and PydanticAI will infer the format to use based on the docstring. We plan to add support in the future to explicitly set the style to use, and warn/error if not all parameters are documented; see [#59](https://github.com/pydantic/pydantic-ai/issues/59).
 
-To demonstrate a retriever's schema, here we use [`FunctionModel`][pydantic_ai.models.function.FunctionModel] to print the schema a model would receive:
+To demonstrate a tool's schema, here we use [`FunctionModel`][pydantic_ai.models.function.FunctionModel] to print the schema a model would receive:
 
-```py title="retriever_schema.py"
+```py title="tool_schema.py"
 from pydantic_ai import Agent
 from pydantic_ai.messages import Message, ModelAnyResponse, ModelTextResponse
 from pydantic_ai.models.function import AgentInfo, FunctionModel
 
 agent = Agent()
 
 
-@agent.retriever_plain
+@agent.tool_plain
 def foobar(a: int, b: str, c: dict[str, list[float]]) -> str:
     """Get me foobar.
@@ -356,10 +358,10 @@ def foobar(a: int, b: str, c: dict[str, list[float]]) -> str:
 
 
 def print_schema(messages: list[Message], info: AgentInfo) -> ModelAnyResponse:
-    retriever = info.retrievers['foobar']
-    print(retriever.description)
+    tool = info.function_tools['foobar']
+    print(tool.description)
     #> Get me foobar.
-    print(retriever.json_schema)
+    print(tool.json_schema)
     """
     {
         'description': 'Get me foobar.',
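The signature-to-schema step shown in the `tool_schema.py` hunks can be approximated with the stdlib alone. This is a rough, hypothetical sketch: the real implementation builds full JSON schemas with Pydantic and parses parameter descriptions with griffe, neither of which is attempted here.

```python
import inspect

# Rough annotation-to-JSON-schema type mapping; anything unrecognized
# falls back to 'object'.
TYPE_MAP = {int: 'integer', str: 'string', float: 'number', bool: 'boolean'}


def tool_schema(func) -> dict:
    """Build a minimal JSON schema from a function's signature and docstring."""
    properties = {}
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {'type': TYPE_MAP.get(param.annotation, 'object')}
    return {
        'description': inspect.getdoc(func).split('\n')[0],
        'type': 'object',
        'properties': properties,
        'required': list(properties),
    }


def foobar(a: int, b: str, c: dict) -> str:
    """Get me foobar."""
    return f'{a} {b} {c}'


schema = tool_schema(foobar)
print(schema['description'])
#> Get me foobar.
print(schema['properties'])
#> {'a': {'type': 'integer'}, 'b': {'type': 'string'}, 'c': {'type': 'object'}}
```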
@@ -386,22 +388,22 @@ agent.run_sync('hello', model=FunctionModel(print_schema))
 
 _(This example is complete, it can be run "as is")_
 
-The return type of retriever can be any valid JSON object ([`JsonData`][pydantic_ai.dependencies.JsonData]) as some models (e.g. Gemini) support semi-structured return values, some expect text (OpenAI) but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
+The return type of a tool can be any valid JSON object ([`JsonData`][pydantic_ai.dependencies.JsonData]) as some models (e.g. Gemini) support semi-structured return values, some expect text (OpenAI) but seem to be just as good at extracting meaning from the data. If a Python object is returned and the model expects a string, the value will be serialized to JSON.
 
-If a retriever has a single parameter that can be represented as an object in JSON schema (e.g. dataclass, TypedDict, pydantic model), the schema for the retriever is simplified to be just that object. (TODO example)
+If a tool has a single parameter that can be represented as an object in JSON schema (e.g. dataclass, TypedDict, pydantic model), the schema for the tool is simplified to be just that object. (TODO example)
 
 ## Reflection and self-correction
 
-Validation errors from both retriever parameter validation and [structured result validation](results.md#structured-result-validation) can be passed back to the model with a request to retry.
+Validation errors from both tool parameter validation and [structured result validation](results.md#structured-result-validation) can be passed back to the model with a request to retry.
 
-You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within a [retriever](#retrievers) or [result validator function](results.md#result-validators-functions) to tell the model it should retry generating a response.
+You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within a [tool](#tools) or [result validator function](results.md#result-validators-functions) to tell the model it should retry generating a response.
 
-- The default retry count is **1** but can be altered for the [entire agent][pydantic_ai.Agent.__init__], a [specific retriever][pydantic_ai.Agent.retriever], or a [result validator][pydantic_ai.Agent.__init__].
-- You can access the current retry count from within a retriever or result validator via [`ctx.retry`][pydantic_ai.dependencies.CallContext].
+- The default retry count is **1** but can be altered for the [entire agent][pydantic_ai.Agent.__init__], a [specific tool][pydantic_ai.Agent.tool], or a [result validator][pydantic_ai.Agent.__init__].
+- You can access the current retry count from within a tool or result validator via [`ctx.retry`][pydantic_ai.dependencies.CallContext].
 
 Here's an example:
 
-```py title="retriever_retry.py"
+```py title="tool_retry.py"
 from fake_database import DatabaseConn
 from pydantic import BaseModel
 
@@ -420,7 +422,7 @@ agent = Agent(
 )
 
 
-@agent.retriever(retries=2)
+@agent.tool(retries=2)
 def get_user_by_name(ctx: CallContext[DatabaseConn], name: str) -> int:
     """Get a user's ID from their full name."""
     print(name)
@@ -455,7 +457,7 @@ from pydantic_ai import Agent, ModelRetry, UnexpectedModelBehavior
 agent = Agent('openai:gpt-4o')
 
 
-@agent.retriever_plain
+@agent.tool_plain
 def calc_volume(size: int) -> int: # (1)!
     if size == 42:
         return size**3
@@ -467,7 +469,7 @@ try:
     result = agent.run_sync('Please get me the volume of a box with size 6.')
 except UnexpectedModelBehavior as e:
     print('An error occurred:', e)
-    #> An error occurred: Retriever exceeded max retries count of 1
+    #> An error occurred: Tool exceeded max retries count of 1
     print('cause:', repr(e.__cause__))
     #> cause: ModelRetry('Please try again.')
     print('messages:', agent.last_run_messages)
@@ -513,6 +515,6 @@ except UnexpectedModelBehavior as e:
 else:
     print(result.data)
 ```
-1. Define a retriever that will raise `ModelRetry` repeatedly in this case.
+1. Define a tool that will raise `ModelRetry` repeatedly in this case.
 
 _(This example is complete, it can be run "as is")_
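The retry loop behind the "Tool exceeded max retries count of 1" error above can be sketched in plain Python. All names here are hypothetical stand-ins; in the real agent the corrected arguments for each retry come back from the model, not from a precomputed list.

```python
class ModelRetry(Exception):
    """Stand-in for pydantic_ai.exceptions.ModelRetry."""


class UnexpectedModelBehavior(Exception):
    """Stand-in raised when the retry budget is exhausted."""


def run_with_retries(tool, args_per_attempt, retries: int = 1):
    # One initial call plus up to `retries` retried calls.
    for args in args_per_attempt[: retries + 1]:
        try:
            return tool(*args)
        except ModelRetry:
            continue
    raise UnexpectedModelBehavior(f'Tool exceeded max retries count of {retries}')


def calc_volume(size: int) -> int:
    if size == 42:
        return size**3
    raise ModelRetry('Please try again.')


# First attempt (size 6) raises ModelRetry; the retry (size 42) succeeds.
print(run_with_retries(calc_volume, [(6,), (42,)], retries=1))
#> 74088
```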

docs/api/agent.md

Lines changed: 2 additions & 2 deletions
@@ -12,6 +12,6 @@
 - override_model
 - last_run_messages
 - system_prompt
-- retriever
-- retriever_plain
+- tool
+- tool_plain
 - result_validator

docs/dependencies.md

Lines changed: 7 additions & 8 deletions
@@ -1,13 +1,12 @@
 # Dependencies
 
-PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [retrievers](agents.md#retrievers) and [result validators](results.md#result-validators-functions).
+PydanticAI uses a dependency injection system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](agents.md#tools) and [result validators](results.md#result-validators-functions).
 
 Matching PydanticAI's design philosophy, our dependency system tries to use existing best practice in Python development rather than inventing esoteric "magic", this should make dependencies type-safe, understandable easier to test and ultimately easier to deploy in production.
 
 ## Defining Dependencies
 
-Dependencies can be any python type. While in simple cases you might be able to pass a single object
-as a dependency (e.g. an HTTP connection), [dataclasses][] are generally a convenient container when your dependencies included multiple objects.
+Dependencies can be any python type. While in simple cases you might be able to pass a single object as a dependency (e.g. an HTTP connection), [dataclasses][] are generally a convenient container when your dependencies included multiple objects.
 
 Here's an example of defining an agent that requires dependencies.
 
@@ -102,7 +101,7 @@ _(This example is complete, it can be run "as is")_
 
 ### Asynchronous vs. Synchronous dependencies
 
-System prompt functions, retriever functions and result validator are all run in the async context of an agent run.
+[System prompt functions](agents.md#system-prompts), [function tools](agents.md#function-tools) and [result validators](results.md#result-validators-functions) are all run in the async context of an agent run.
 
 If these functions are not coroutines (e.g. `async def`) they are called with
 [`run_in_executor`][asyncio.loop.run_in_executor] in a thread pool, it's therefore marginally preferable
@@ -159,7 +158,7 @@ _(This example is complete, it can be run "as is")_
 
 ## Full Example
 
-As well as system prompts, dependencies can be used in [retrievers](agents.md#retrievers) and [result validators](results.md#result-validators-functions).
+As well as system prompts, dependencies can be used in [tools](agents.md#tools) and [result validators](results.md#result-validators-functions).
 
 ```py title="full_example.py" hl_lines="27-35 38-48"
 from dataclasses import dataclass
@@ -188,7 +187,7 @@ async def get_system_prompt(ctx: CallContext[MyDeps]) -> str:
     return f'Prompt: {response.text}'
 
 
-@agent.retriever # (1)!
+@agent.tool # (1)!
 async def get_joke_material(ctx: CallContext[MyDeps], subject: str) -> str:
     response = await ctx.deps.http_client.get(
         'https://example.com#jokes',
@@ -220,7 +219,7 @@ async def main():
     #> Did you hear about the toothpaste scandal? They called it Colgate.
 ```
 
-1. To pass `CallContext` and to a retriever, us the [`retriever`][pydantic_ai.Agent.retriever] decorator.
+1. To pass `CallContext` to a tool, use the [`tool`][pydantic_ai.Agent.tool] decorator.
 2. `CallContext` may optionally be passed to a [`result_validator`][pydantic_ai.Agent.result_validator] function as the first argument.
 
 _(This example is complete, it can be run "as is")_
@@ -324,7 +323,7 @@ joke_agent = Agent(
 factory_agent = Agent('gemini-1.5-pro', result_type=list[str])
 
 
-@joke_agent.retriever
+@joke_agent.tool
 async def joke_factory(ctx: CallContext[MyDeps], count: int) -> str:
     r = await ctx.deps.factory_agent.run(f'Please generate {count} jokes.')
     return '\n'.join(r.data)
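The dependency-injection pattern this file's diff touches — a deps object carried into prompt functions and tools via the call context — can be sketched with the stdlib. `CallContext` and `MyDeps` here are hypothetical stand-ins for the pydantic-ai types, only to show the shape of the pattern:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

DepsT = TypeVar('DepsT')


@dataclass
class CallContext(Generic[DepsT]):
    """Hypothetical stand-in for pydantic_ai.dependencies.CallContext."""

    deps: DepsT
    retry: int = 0


@dataclass
class MyDeps:
    api_key: str


def system_prompt(ctx: CallContext[MyDeps]) -> str:
    # Prompt functions, tools and result validators all read shared
    # services/data off ctx.deps.
    return f'Prompt built with key {ctx.deps.api_key!r}'


ctx = CallContext(deps=MyDeps(api_key='secret'))
print(system_prompt(ctx))
#> Prompt built with key 'secret'
```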

docs/examples/bank-support.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ Demonstrates:
 
 * [dynamic system prompt](../agents.md#system-prompts)
 * [structured `result_type`](../results.md#structured-result-validation)
-* [retrievers](../agents.md#retrievers)
+* [tools](../agents.md#tools)
 
 ## Running the Example
 
docs/examples/rag.md

Lines changed: 2 additions & 2 deletions
@@ -4,12 +4,12 @@ RAG search example. This demo allows you to ask question of the [logfire](https:
 
 Demonstrates:
 
-* [retrievers](../agents.md#retrievers)
+* [tools](../agents.md#tools)
 * [agent dependencies](../dependencies.md)
 * RAG search
 
 This is done by creating a database containing each section of the markdown documentation, then registering
-the search tool as a retriever with the PydanticAI agent.
+the search tool with the PydanticAI agent.
 
 Logic for extracting sections from markdown files and a JSON file with that data is available in
 [this gist](https://gist.github.com/samuelcolvin/4b5bb9bb163b1122ff17e29e48c10992).

docs/examples/weather-agent.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@ Example of PydanticAI with multiple tools which the LLM needs to call in turn to
 
 Demonstrates:
 
-* [retrievers](../agents.md#retrievers)
+* [tools](../agents.md#tools)
 * [agent dependencies](../dependencies.md)
 
 In this case the idea is a "weather" agent — the user can ask for the weather in multiple locations,

0 commit comments