
Commit 8785aa3

improvements to docs index (#102)
1 parent 0592972 commit 8785aa3

File tree

10 files changed: +257 −209 lines changed

docs/.hooks/main.py

Lines changed: 3 additions & 2 deletions

```diff
@@ -57,18 +57,19 @@ def sub_example(m: re.Match[str]) -> str:
 
 
 def render_video(markdown: str) -> str:
-    return re.sub(r'\{\{ *video\((["\'])(.+?)\1(?:, (\d+))?\) *\}\}', sub_cf_video, markdown)
+    return re.sub(r'\{\{ *video\((["\'])(.+?)\1(?:, (\d+))?(?:, (\d+))?\) *\}\}', sub_cf_video, markdown)
 
 
 def sub_cf_video(m: re.Match[str]) -> str:
     video_id = m.group(2)
     time = m.group(3)
     time = f'{time}s' if time else ''
+    padding_top = m.group(4) or '67'
 
     domain = 'https://customer-nmegqx24430okhaq.cloudflarestream.com'
     poster = f'{domain}/{video_id}/thumbnails/thumbnail.jpg?time={time}&height=600'
    return f"""
-<div style="position: relative; padding-top: 67%;">
+<div style="position: relative; padding-top: {padding_top}%;">
   <iframe
     src="{domain}/{video_id}/iframe?poster={urllib.parse.quote_plus(poster)}"
     loading="lazy"
```
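
With this change, the `{{ video(...) }}` docs macro accepts an optional second number that sets the `padding-top` (i.e. the aspect ratio) of the video embed, defaulting to 67%. Here is a minimal sketch of how the updated regex captures the arguments — the sample macro call is the one added to `docs/index.md` below:

```python
import re

# updated pattern from docs/.hooks/main.py: group 2 is the video id,
# group 3 the optional thumbnail time, group 4 the optional padding-top
pattern = r'\{\{ *video\((["\'])(.+?)\1(?:, (\d+))?(?:, (\d+))?\) *\}\}'

m = re.search(pattern, "{{ video('9078b98c4f75d01f912a0368bbbdb97a', 25, 55) }}")
assert m is not None
print(m.group(2))  # 9078b98c4f75d01f912a0368bbbdb97a
print(m.group(3))  # 25 — used for the poster thumbnail time
print(m.group(4))  # 55 — used as padding-top; falls back to '67' when omitted
```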

docs/agents.md

Lines changed: 61 additions & 61 deletions

````diff
@@ -119,6 +119,67 @@ print(result2.data)
 
 _(This example is complete, it can be run "as is")_
 
+## Type safe by design {#static-type-checking}
+
+PydanticAI is designed to work well with static type checkers, like mypy and pyright.
+
+!!! tip "Typing is (somewhat) optional"
+    PydanticAI is designed to make type checking as useful as possible for you if you choose to use it, but you don't have to use types everywhere all the time.
+
+    That said, because PydanticAI uses Pydantic, and Pydantic uses type hints as the definition for schema and validation, some types (specifically type hints on parameters to tools, and the `result_type` arguments to [`Agent`][pydantic_ai.Agent]) are used at runtime.
+
+    We (the library developers) have messed up if type hints are confusing you more than they're helping you; if you find this is the case, please create an [issue](https://github.com/pydantic/pydantic-ai/issues) explaining what's annoying you!
+
+In particular, agents are generic in both the type of their dependencies and the type of results they return, so you can use the type hints to ensure you're using the right types.
+
+Consider the following script with type mistakes:
+
+```py title="type_mistakes.py" hl_lines="18 28"
+from dataclasses import dataclass
+
+from pydantic_ai import Agent, RunContext
+
+
+@dataclass
+class User:
+    name: str
+
+
+agent = Agent(
+    'test',
+    deps_type=User,  # (1)!
+    result_type=bool,
+)
+
+
+@agent.system_prompt
+def add_user_name(ctx: RunContext[str]) -> str:  # (2)!
+    return f"The user's name is {ctx.deps}."
+
+
+def foobar(x: bytes) -> None:
+    pass
+
+
+result = agent.run_sync('Does their name start with "A"?', deps=User('Adam'))
+foobar(result.data)  # (3)!
+```
+
+1. The agent is defined as expecting an instance of `User` as `deps`.
+2. But here `add_user_name` is defined as taking a `str` as the dependency, not a `User`.
+3. Since the agent is defined as returning a `bool`, this will raise a type error since `foobar` expects `bytes`.
+
+Running `mypy` on this will give the following output:
+
+```bash
+➤ uv run mypy type_mistakes.py
+type_mistakes.py:18: error: Argument 1 to "system_prompt" of "Agent" has incompatible type "Callable[[RunContext[str]], str]"; expected "Callable[[RunContext[User]], str]"  [arg-type]
+type_mistakes.py:28: error: Argument 1 to "foobar" has incompatible type "bool"; expected "bytes"  [arg-type]
+Found 2 errors in 1 file (checked 1 source file)
+```
+
+Running `pyright` would identify the same issues.
+
 ## System Prompts
 
 System prompts might seem simple at first glance since they're just strings (or sequences of strings that are concatenated), but crafting the right system prompt is key to getting the model to behave as you want.
@@ -514,64 +575,3 @@ else:
 1. Define a tool that will raise `ModelRetry` repeatedly in this case.
 
 _(This example is complete, it can be run "as is")_
-
-## Static Type Checking
-
-PydanticAI is designed to work well with static type checkers, like mypy and pyright.
-
-!!! tip "mypy vs. pyright"
-    [mypy](https://github.com/python/mypy) and [pyright](https://github.com/microsoft/pyright) are both static type checkers for Python.
-
-    Mypy was the first and is still generally considered the default, in part because it was developed partly by Guido van Rossum, the creator of Python.
-
-    Pyright is generally faster and more sophisticated. It is developed by Eric Traut for use in VSCode; since that's its primary use case, its terminal output is more verbose and harder to read than that of mypy.
-
-In particular, agents are generic in both the type of their dependencies and the type of results they return, so you can use the type hints to ensure you're using the right types.
-
-Consider the following script with type mistakes:
-
-```py title="type_mistakes.py" hl_lines="18 28"
-from dataclasses import dataclass
-
-from pydantic_ai import Agent, RunContext
-
-
-@dataclass
-class User:
-    name: str
-
-
-agent = Agent(
-    'test',
-    deps_type=User,  # (1)!
-    result_type=bool,
-)
-
-
-@agent.system_prompt
-def add_user_name(ctx: RunContext[str]) -> str:  # (2)!
-    return f"The user's name is {ctx.deps}."
-
-
-def foobar(x: bytes) -> None:
-    pass
-
-
-result = agent.run_sync('Does their name start with "A"?', deps=User('Adam'))
-foobar(result.data)  # (3)!
-```
-
-1. The agent is defined as expecting an instance of `User` as `deps`.
-2. But here `add_user_name` is defined as taking a `str` as the dependency, not a `User`.
-3. Since the agent is defined as returning a `bool`, this will raise a type error since `foobar` expects `bytes`.
-
-Running `mypy` on this will give the following output:
-
-```bash
-➤ uv run mypy type_mistakes.py
-type_mistakes.py:18: error: Argument 1 to "system_prompt" of "Agent" has incompatible type "Callable[[RunContext[str]], str]"; expected "Callable[[RunContext[User]], str]"  [arg-type]
-type_mistakes.py:28: error: Argument 1 to "foobar" has incompatible type "bool"; expected "bytes"  [arg-type]
-Found 2 errors in 1 file (checked 1 source file)
-```
-
-Running `pyright` would identify the same issues.
````

docs/index.md

Lines changed: 31 additions & 50 deletions

````diff
@@ -10,10 +10,10 @@ PydanticAI is a Python Agent Framework designed to make it less painful to build
 
 ## Why use PydanticAI
 
-* Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, Instructor and many more)
-* Model-agnostic — currently both OpenAI, Gemini, and Groq are supported, Anthropic [is coming soon](https://github.com/pydantic/pydantic-ai/issues/63). And there is a simple interface to implement support for other models.
-* Type-safe
-* Control flow and composing agents is done with vanilla python, allowing you to make use of the same Python development best practices you'd use in any other (non-AI) project
+* Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more)
+* Model-agnostic — currently OpenAI, Gemini, and Groq are supported, and Anthropic [is coming soon](https://github.com/pydantic/pydantic-ai/issues/63). There is also a simple interface to implement support for other models.
+* [Type-safe](agents.md#static-type-checking)
+* Control flow and agent composition is done with vanilla Python, allowing you to make use of the same Python development best practices you'd use in any other (non-AI) project
 * [Structured response](results.md#structured-result-validation) validation with Pydantic
 * [Streamed responses](results.md#streamed-results), including validation of streamed _structured_ responses with Pydantic
 * Novel, type-safe [dependency injection system](dependencies.md), useful for testing and eval-driven iterative development
@@ -124,66 +124,47 @@ async def main():
 
 1. This [agent](agents.md) will act as first-tier support in a bank. Agents are generic in the type of dependencies they accept and the type of result they return. In this case, the support agent has type `#!python Agent[SupportDependencies, SupportResult]`.
 2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md); you can also set the model when running the agent.
-3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](agents.md#function-tools) functions. PydanticAI's system of dependency injection provides a type-safe way to customise the behavior of your agents, and can be especially useful when running unit tests and evals.
+3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [tool](agents.md#function-tools) functions. PydanticAI's system of dependency injection provides a [type-safe](agents.md#static-type-checking) way to customise the behavior of your agents, and can be especially useful when running [unit tests](testing-evals.md) and evals.
 4. Static [system prompts](agents.md#system-prompts) can be registered with the [`system_prompt` keyword argument][pydantic_ai.Agent.__init__] to the agent.
 5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`RunContext`][pydantic_ai.dependencies.RunContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
-6. [Tools](agents.md#function-tools) let you register "tools" which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.dependencies.RunContext], and any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
+6. [`tool`](agents.md#function-tools) lets you register functions which the LLM may call while responding to a user. Again, dependencies are carried via [`RunContext`][pydantic_ai.dependencies.RunContext], and any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
 7. The docstring of a tool is also passed to the LLM as the description of the tool. Parameter descriptions are [extracted](agents.md#function-tools-and-schema) from the docstring and added to the tool schema sent to the LLM.
 8. [Run the agent](agents.md#running-agents) asynchronously, conducting a conversation with the LLM until a final response is reached. Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result.
 9. The response from the agent will be guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) will mean the agent is prompted to try again.
 10. The result will be validated with Pydantic to guarantee it is a `SupportResult`; since the agent is generic, it'll also be typed as a `SupportResult` to aid with static type checking.
-11. In a real use case, you'd add many more tools and a longer system prompt to the agent to extend the context it's equipped with and support it can provide.
+11. In a real use case, you'd add more tools and a longer system prompt to the agent to extend the context it's equipped with and the support it can provide.
 12. This is a simple sketch of a database connection, used to keep the example short and readable. In reality, you'd be connecting to an external database (e.g. PostgreSQL) to get information about customers.
-13. This [Pydantic](https://docs.pydantic.dev) model is used to constrain the structured data returned by the agent. From this simple definition, Pydantic builds the JSON Schema that tells the LLM how to return the data, and performs validation to guarantee the data is correct at the end of the conversation.
-
-To help make things more clear, here is a diagram of what is happening in the `#!python await support_agent.run('What is my balance?', deps=deps)` call within `main`:
-```mermaid
-sequenceDiagram
-    participant DatabaseConn
-    participant Agent
-    participant LLM
-
-    Note over Agent: Dynamic system prompt<br>add_customer_name()
-    Agent ->> DatabaseConn: Retrieve customer name
-    activate DatabaseConn
-    DatabaseConn -->> Agent: "John"
-    deactivate DatabaseConn
-
-    Note over Agent: User query
-
-    Agent ->> LLM: Request<br>System: "You are a support agent..."<br>System: "The customer's name is John"<br>User: "What is my balance?"
-    activate LLM
-    Note over LLM: LLM decides to use a tool
-    LLM ->> Agent: Call tool<br>customer_balance()
-    deactivate LLM
-    activate Agent
-    Note over Agent: Retrieve account balance
-
-    Agent ->> DatabaseConn: Retrieve balance<br>Include pending
-    activate DatabaseConn
-    DatabaseConn -->> Agent: "$123.45"
-    deactivate DatabaseConn
-
-    Agent -->> LLM: ToolReturn<br>"$123.45"
-    deactivate Agent
-    activate LLM
-    Note over LLM: LLM processes response
-
-    LLM ->> Agent: StructuredResponse<br>SupportResult
-    deactivate LLM
-    activate Agent
-    Note over Agent: Support session complete
-    deactivate Agent
-```
-
+13. This [Pydantic](https://docs.pydantic.dev) model is used to constrain the structured data returned by the agent. From this simple definition, Pydantic builds the JSON Schema that tells the LLM how to return the data, and performs validation to guarantee the data is correct at the end of the run.
 
 !!! tip "Complete `bank_support.py` example"
     The code included here is incomplete for the sake of brevity (the definition of `DatabaseConn` is missing); you can find the complete `bank_support.py` example [here](examples/bank-support.md).
 
+## Instrumentation with Pydantic Logfire
+
+To understand the flow of the above runs, we can watch the agent in action using Pydantic Logfire.
+
+To do this, we need to set up logfire and add the following to our code:
+
+```py title="bank_support_with_logfire.py"
+import logfire
+
+logfire.configure()  # (1)!
+logfire.instrument_asyncpg()  # (2)!
+```
+
+1. Configure logfire; this will fail if no project is set up.
+2. In our demo, `DatabaseConn` uses [`asyncpg`](https://magicstack.github.io/asyncpg/current/) to connect to a PostgreSQL database, so `logfire.instrument_asyncpg()` is used to log the database queries.
+
+That's enough to get the following view of your agent in action:
+
+{{ video('9078b98c4f75d01f912a0368bbbdb97a', 25, 55) }}
+
+See [Monitoring and Performance](logfire.md) to learn more.
+
 ## Next Steps
 
 To try PydanticAI yourself, follow the instructions [in the examples](examples/index.md).
 
-Read the [conceptual documentation](agents.md) to learn more about building applications with PydanticAI.
+Read the [docs](agents.md) to learn more about building applications with PydanticAI.
 
 Read the [API Reference](api/agent.md) to understand PydanticAI's interface.
````

docs/testing-evals.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@
 With PydanticAI and LLM integrations in general, there are two distinct kinds of test:
 
 1. **Unit tests** — tests of your application code, and whether it's behaving correctly
-2. **"Evals"** — tests of the LLM, and how good or bad its responses are
+2. **Evals** — tests of the LLM, and how good or bad its responses are
 
 For the most part, these two kinds of tests have pretty separate goals and considerations.
```

pydantic_ai_slim/pydantic_ai/agent.py

Lines changed: 17 additions & 17 deletions

```diff
@@ -75,8 +75,8 @@ class Agent(Generic[AgentDeps, ResultData]):
     def __init__(
         self,
         model: models.Model | models.KnownModelName | None = None,
-        result_type: type[ResultData] = str,
         *,
+        result_type: type[ResultData] = str,
         system_prompt: str | Sequence[str] = (),
         deps_type: type[AgentDeps] = NoneType,
         retries: int = 1,
@@ -150,21 +150,21 @@ async def run(
 
         deps = self._get_deps(deps)
 
-        new_message_index, messages = await self._prepare_messages(deps, user_prompt, message_history)
-        self.last_run_messages = messages
-
-        for tool in self._function_tools.values():
-            tool.reset()
-
-        cost = result.Cost()
-
         with _logfire.span(
             'agent run {prompt=}',
             prompt=user_prompt,
             agent=self,
             custom_model=custom_model,
             model_name=model_used.name(),
         ) as run_span:
+            new_message_index, messages = await self._prepare_messages(deps, user_prompt, message_history)
+            self.last_run_messages = messages
+
+            for tool in self._function_tools.values():
+                tool.reset()
+
+            cost = result.Cost()
+
             run_step = 0
             while True:
                 run_step += 1
@@ -243,21 +243,21 @@ async def run_stream(
 
         deps = self._get_deps(deps)
 
-        new_message_index, messages = await self._prepare_messages(deps, user_prompt, message_history)
-        self.last_run_messages = messages
-
-        for tool in self._function_tools.values():
-            tool.reset()
-
-        cost = result.Cost()
-
         with _logfire.span(
             'agent run stream {prompt=}',
             prompt=user_prompt,
             agent=self,
             custom_model=custom_model,
             model_name=model_used.name(),
         ) as run_span:
+            new_message_index, messages = await self._prepare_messages(deps, user_prompt, message_history)
+            self.last_run_messages = messages
+
+            for tool in self._function_tools.values():
+                tool.reset()
+
+            cost = result.Cost()
+
             run_step = 0
             while True:
                 run_step += 1
```
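
Two things change in `agent.py`: the per-run setup (message preparation, tool resets, cost tracking) now happens inside the Logfire span, so it is captured in the trace, and `result_type` moves behind the `*`, becoming keyword-only. A minimal sketch of what the signature change means for callers — the model name here is just illustrative:

```python
from pydantic_ai import Agent

# unchanged: passing result_type by keyword still works
agent = Agent('openai:gpt-4o', result_type=bool)

# but passing it positionally would now raise a TypeError,
# since everything after `model` is keyword-only:
# agent = Agent('openai:gpt-4o', bool)
```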

tests/example_modules/README.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -0,0 +1,3 @@
+# docs examples imports
+
+This directory is added to `sys.path` in `tests/test_examples.py::test_docs_examples` to augment some of the examples.
```
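
For context, a minimal sketch of the mechanism this README describes — the actual code in `tests/test_examples.py` may differ:

```python
import sys
from pathlib import Path

# make the stub modules in tests/example_modules importable
# while the docs examples are executed as tests
sys.path.append(str(Path(__file__).parent / 'example_modules'))
```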
