Commit a00560c: Agent docs (#54)
1 parent 8895d79

27 files changed: +326 −106 lines

docs/agents.md (203 additions, 0 deletions)
## Introduction

Agents are PydanticAI's primary interface for interacting with LLMs.

In some use cases a single agent will control an entire application or component, but multiple agents can also interact to embody more complex workflows.

The [`Agent`][pydantic_ai.Agent] class is well documented, but in essence you can think of an agent as a container for:

* A [system prompt](#system-prompts) — a set of instructions for the LLM written by the developer
* One or more [retrievers](#retrievers) — functions that the LLM may call to get information while generating a response
* An optional structured [result type](results.md) — the structured datatype the LLM must return at the end of a run
* A [dependency](dependencies.md) type constraint — system prompt functions, retrievers and result validators may all use dependencies when they're run
* An optional default [model](models/index.md) — the model to use can also be defined when running the agent

In typing terms, agents are generic in their dependency and result types, e.g. an agent that requires `#!python Foobar` dependencies and returns `#!python list[str]` results would have type `#!python Agent[Foobar, list[str]]`.
Here's a toy example of an agent that simulates a roulette wheel:

```py title="roulette_wheel.py"
from pydantic_ai import Agent, CallContext

roulette_agent = Agent(  # (1)!
    'openai:gpt-4o',
    deps_type=int,
    result_type=bool,
    system_prompt=(
        'Use the `roulette_wheel` to see if the '
        'customer has won based on the number they provide.'
    ),
)


@roulette_agent.retriever_context
async def roulette_wheel(ctx: CallContext[int], square: int) -> str:  # (2)!
    """Check if the square is a winner."""
    return 'winner' if square == ctx.deps else 'loser'


# Run the agent
success_number = 18  # (3)!
result = roulette_agent.run_sync('Put my money on square eighteen', deps=success_number)
print(result.data)  # (4)!
#> True

result = roulette_agent.run_sync('I bet five is the winner', deps=success_number)
print(result.data)
#> False
```

1. Create an agent, which expects an integer dependency and returns a boolean result. This agent will have type `#!python Agent[int, bool]`.
2. Define a retriever that checks if the square is a winner. Here [`CallContext`][pydantic_ai.dependencies.CallContext] is parameterized with the dependency type `int`; if you got the dependency type wrong you'd get a typing error.
3. In reality, you might want to use a random number here, e.g. `random.randint(0, 36)`.
4. `result.data` will be a boolean indicating if the square is a winner. Pydantic performs the result validation, and it'll be typed as a `bool` since its type is derived from the `result_type` generic parameter of the agent.

!!! tip "Agents are Singletons, like FastAPI"
    Agents are singleton instances; you can think of them as similar to a small [`FastAPI`][fastapi.FastAPI] app or an [`APIRouter`][fastapi.APIRouter].

## Running Agents

There are three ways to run an agent:

1. [`#!python agent.run()`][pydantic_ai.Agent.run] — a coroutine which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response
2. [`#!python agent.run_sync()`][pydantic_ai.Agent.run_sync] — a plain function which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response (internally, this just calls `#!python asyncio.run(self.run())`)
3. [`#!python agent.run_stream()`][pydantic_ai.Agent.run_stream] — a coroutine which returns a [`StreamedRunResult`][pydantic_ai.result.StreamedRunResult] containing methods to stream the response as an async iterable

Here's a simple example demonstrating all three:

```python title="run_agent.py"
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

result_sync = agent.run_sync('What is the capital of Italy?')
print(result_sync.data)
#> Rome


async def main():
    result = await agent.run('What is the capital of France?')
    print(result.data)
    #> Paris

    async with agent.run_stream('What is the capital of the UK?') as response:
        print(await response.get_data())
        #> London
```
_(This example is complete, it can be run "as is")_

You can also pass messages from previous runs to continue a conversation or provide context, as described in [Messages and Chat History](message-history.md).

## Runs vs. Conversations

An agent **run** might represent an entire conversation — there's no limit to how many messages can be exchanged in a single run. However, a **conversation** might also be composed of multiple runs, especially if you need to maintain state between separate interactions or API calls.

Here's an example of a conversation comprised of multiple runs:

```python title="conversation_example.py"
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')

# First run
result1 = agent.run_sync('Who was Albert Einstein?')
print(result1.data)
#> Albert Einstein was a German-born theoretical physicist.

# Second run, passing previous messages
result2 = agent.run_sync(
    'What was his most famous equation?',
    message_history=result1.new_messages(),  # (1)!
)
print(result2.data)
#> Albert Einstein's most famous equation is (E = mc^2).
```

1. Continue the conversation; without `message_history` the model would not know who "he" was referring to.

## System Prompts

System prompts might seem simple at first glance since they're just strings (or sequences of strings that are concatenated), but crafting the right system prompt is key to getting the model to behave as you want.

Generally, system prompts fall into two categories:

1. **Static system prompts**: These are known when writing the code and can be defined via the `system_prompt` parameter of the `Agent` constructor.
2. **Dynamic system prompts**: These aren't known until runtime and should be defined via functions decorated with `@agent.system_prompt`.

You can add both to a single agent; they're concatenated in the order they're defined at runtime.

Here's an example using both types of system prompts:

```python title="system_prompts.py"
from datetime import date

from pydantic_ai import Agent, CallContext

agent = Agent(
    'openai:gpt-4o',
    deps_type=str,  # (1)!
    system_prompt="Use the customer's name while replying to them.",  # (2)!
)


@agent.system_prompt  # (3)!
def add_the_users_name(ctx: CallContext[str]) -> str:
    return f"The user's name is {ctx.deps}."


@agent.system_prompt
def add_the_date() -> str:  # (4)!
    return f'The date is {date.today()}.'


result = agent.run_sync('What is the date?', deps='Frank')
print(result.data)
#> Hello Frank, the date today is 2032-01-02.
```

1. The agent expects a string dependency.
2. Static system prompt defined at agent creation time.
3. Dynamic system prompt defined via a decorator.
4. Another dynamic system prompt; system prompts don't have to have the `CallContext` parameter.

## Retrievers

* Two different retriever decorators (`retriever_plain` and `retriever_context`) are available, depending on whether you want to use the call context or not; show an example using both.
* Retriever parameters are extracted and used to build the schema for the tool, then validated with Pydantic.
* If a retriever has a single "model like" parameter (e.g. a pydantic model, dataclass, or typed dict), the schema for the tool will be just that type.
* Docstrings are parsed to get the tool description; thanks to griffe, docs for each parameter are extracted using Google, numpy or sphinx docstring styles.
* You can raise `ModelRetry` from within a retriever to suggest to the model that it should retry.
* The return type of a retriever can be either `str` or a JSON object typed as `dict[str, Any]`; some models (e.g. Gemini) support structured return values, while others (e.g. OpenAI) expect text but seem to be just as good at extracting meaning from the data.

## Reflection and self-correction

* Validation errors from both retriever parameter validation and structured result validation can be passed back to the model with a request to retry.
* As described above, you can also raise `ModelRetry` from within a retriever or result validator to tell the model it should retry.
* The default retry count is 1, but it can be altered on the whole agent, or on a per-retriever and per-result-validator basis.
* You can access the current retry count from within a retriever or result validator via `ctx.retry`.

## Model errors

* If models behave unexpectedly, e.g. the retry limit is exceeded, agent runs will raise `UnexpectedModelBehaviour` exceptions.
* If you use PydanticAI incorrectly, we try to raise a `UserError` with a helpful message.
* Show an example of an `UnexpectedModelBehaviour` being raised.
* If an `UnexpectedModelBehaviour` is raised, you may want to access the [`.last_run_messages`][pydantic_ai.Agent.last_run_messages] attribute of an agent to see the messages exchanged that led to the error; show an example of accessing `.last_run_messages` in an `except` block to get more details.

## API Reference
186+
187+
::: pydantic_ai.Agent
188+
options:
189+
members:
190+
- __init__
191+
- run
192+
- run_sync
193+
- run_stream
194+
- model
195+
- override_deps
196+
- override_model
197+
- last_run_messages
198+
- system_prompt
199+
- retriever_plain
200+
- retriever_context
201+
- result_validator
202+
203+
::: pydantic_ai.exceptions

docs/api/agent.md (0 additions, 17 deletions): this file was deleted.

docs/api/dependencies.md (0 additions, 3 deletions): this file was deleted.

docs/api/exceptions.md (0 additions, 3 deletions): this file was deleted.

docs/api/messages.md (0 additions, 17 deletions): this file was deleted.

docs/concepts/agents.md: whitespace-only changes.

docs/concepts/results.md: whitespace-only changes.

docs/concepts/retrievers.md: whitespace-only changes.

docs/concepts/streaming.md: whitespace-only changes.

docs/concepts/system-prompt.md: whitespace-only changes.