
Commit be83898

New intro (#64)
1 parent cd47b0f commit be83898

13 files changed: +303 −268 lines changed

docs/examples/bank-support.md

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+Small but complete example of using PydanticAI to build a support agent for a bank.
+
+Demonstrates:
+
+* [dynamic system prompt](../agents.md#system-prompts)
+* [structured `result_type`](../results.md#structured-result-validation)
+* [retrievers](../agents.md#retrievers)
+
+## Running the Example
+
+With [dependencies installed and environment variables set](./index.md#usage), run:
+
+```bash
+python/uv-run -m pydantic_ai_examples.bank_support
+```
+
+(or `PYDANTIC_AI_MODEL=gemini-1.5-flash ...`)
+
+## Example Code
+
+```py title="bank_support.py"
+#! pydantic_ai_examples/bank_support.py
+```

docs/examples/chat-app.md

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@ Simple chat app example build with FastAPI.
 Demonstrates:
 
 * [reusing chat history](../message-history.md)
-* serializing messages
-* streaming responses
+* [serializing messages](../message-history.md#accessing-messages-from-results)
+* [streaming responses](../results.md#streamed-results)
 
 This demonstrates storing chat history between requests and using it to give the model context for new responses.
 

docs/examples/index.md

Lines changed: 11 additions & 1 deletion
@@ -61,7 +61,17 @@ For examples, to run the very simple [`pydantic_model`](./pydantic-model.md) exa
 python/uv-run -m pydantic_ai_examples.pydantic_model
 ```
 
-But you'll probably want to edit examples in addition to just running them. You can copy the examples to a new directory with:
+If you like one-liners and you're using uv, you can run a pydantic-ai example with zero setup:
+
+```bash
+OPENAI_API_KEY='your-api-key' \
+uv run --with 'pydantic-ai[examples]' \
+  -m pydantic_ai_examples.pydantic_model
+```
+
+---
+
+You'll probably want to edit examples in addition to just running them. You can copy the examples to a new directory with:
 
 ```bash
 python/uv-run -m pydantic_ai_examples --copy-to examples/

docs/examples/pydantic-model.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@ Simple example of using Pydantic AI to construct a Pydantic model from a text in
 
 Demonstrates:
 
-* custom `result_type`
+* [structured `result_type`](../results.md#structured-result-validation)
 
 ## Running the Example

docs/examples/rag.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ RAG search example. This demo allows you to ask question of the [logfire](https:
 
 Demonstrates:
 
-* retrievers
+* [retrievers](../agents.md#retrievers)
 * [agent dependencies](../dependencies.md)
 * RAG search

docs/examples/sql-gen.md

Lines changed: 3 additions & 3 deletions
@@ -4,9 +4,9 @@ Example demonstrating how to use Pydantic AI to generate SQL queries based on us
 
 Demonstrates:
 
-* custom `result_type`
-* dynamic system prompt
-* result validation
+* [dynamic system prompt](../agents.md#system-prompts)
+* [structured `result_type`](../results.md#structured-result-validation)
+* [result validation](../results.md#result-validators-functions)
 * [agent dependencies](../dependencies.md)
 
 ## Running the Example

docs/examples/weather-agent.md

Lines changed: 1 addition & 2 deletions
@@ -2,8 +2,7 @@ Example of Pydantic AI with multiple tools which the LLM needs to call in turn t
 
 Demonstrates:
 
-* retrievers
-* multiple retrievers
+* [retrievers](../agents.md#retrievers)
 * [agent dependencies](../dependencies.md)
 
 In this case the idea is a "weather" agent — the user can ask for the weather in multiple locations,

docs/index.md

Lines changed: 111 additions & 54 deletions
@@ -1,78 +1,135 @@
+# Introduction {.hide}
+
 --8<-- "docs/.partials/index-header.html"
 
-# PydanticAI {.hide}
+When I first found FastAPI, I got it immediately: I was excited to find something so genuinely innovative and yet ergonomic, built on Pydantic.
+
+Virtually every Agent Framework and LLM library in Python uses Pydantic, but when we came to use Gen AI in [Pydantic Logfire](https://pydantic.dev/logfire), I couldn't find anything that gave me the same feeling.
+
+PydanticAI is a Python Agent Framework designed to make it less painful to build production-grade applications with Generative AI.
 
-You can think of PydanticAI as an Agent Framework or a shim to use Pydantic with LLMs — they're the same thing.
+## Why use PydanticAI
 
-PydanticAI tries to make working with LLMs feel similar to building a web application.
+* Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, Langchain, LlamaIndex, AutoGPT, Transformers, Instructor and many more)
+* Multi-model — OpenAI and Gemini are currently supported, Anthropic is [coming soon](https://github.com/pydantic/pydantic-ai/issues/63), and there's a simple interface to implement support for other models or adapt existing ones
+* Type-safe
+* Built on tried and tested best practices in Python
+* Structured response validation with Pydantic
+* Streamed responses, including validation of streamed structured responses with Pydantic
+* Novel, type-safe dependency injection system
+* Logfire integration
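The structured-validation bullet is the heart of the library: LLM output is validated against a Pydantic model, and validation errors can be fed back to the model so it can retry. Here is a standalone sketch at the plain-Pydantic level (an editor's illustration, not part of the commit; the `SupportResult` model mirrors the bank-support example in these docs, and the hand-written dicts stand in for LLM output):

```python
from pydantic import BaseModel, Field, ValidationError


class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description='Whether to block their card')
    risk: int = Field(description='Risk level of query', ge=0, le=10)


# A well-formed response validates cleanly...
ok = SupportResult.model_validate(
    {'support_advice': 'Your balance is $123.45.', 'block_card': False, 'risk': 1}
)
print(ok.risk)  # 1

# ...while an out-of-range value is rejected; in PydanticAI the resulting
# error is what gets passed back to the model so it can retry.
try:
    SupportResult.model_validate(
        {'support_advice': 'x', 'block_card': True, 'risk': 42}
    )
except ValidationError as exc:
    print(exc.errors()[0]['type'])  # less_than_equal
```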
 
 !!! example "In Beta"
     PydanticAI is in early beta, the API is subject to change and there's a lot more to do.
     [Feedback](https://github.com/pydantic/pydantic-ai/issues) is very welcome!
 
+## Example — Hello World
+
+Here's a very minimal example of PydanticAI.
+
+```py title="hello_world.py"
+from pydantic_ai import Agent
+
+agent = Agent('gemini-1.5-flash', system_prompt='Be concise, reply with one sentence.')
+
+result = agent.run_sync('Where does "hello world" come from?')
+print(result.data)
+"""
+The first known use of "hello, world" was in a 1974 textbook about the C programming language.
+"""
+```
+_(This example is complete, it can be run "as is")_
+
+Not very interesting yet, but we can easily add retrievers, dynamic system prompts and structured responses to build more powerful agents.
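As these docs note elsewhere, agents are async internally, and `run_sync` is a helper that uses `asyncio.run` to call `run()`. A rough sketch of that wrapper pattern, using a toy `TinyAgent` (hypothetical, not PydanticAI's actual implementation):

```python
import asyncio


class TinyAgent:
    """A toy stand-in for an agent whose core is async."""

    async def run(self, prompt: str) -> str:
        # A real agent would await model calls here.
        await asyncio.sleep(0)
        return f'response to {prompt!r}'

    def run_sync(self, prompt: str) -> str:
        # The synchronous API is just asyncio.run around the async one.
        return asyncio.run(self.run(prompt))


agent = TinyAgent()
print(agent.run_sync('hello'))  # response to 'hello'
```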
+
 ## Example — Retrievers and Dependency Injection
 
-Partial example of using retrievers to help an LLM respond to a user's query about the weather:
+Small but complete example of using PydanticAI to build a support agent for a bank.
+
+```py title="bank_support.py"
+from dataclasses import dataclass
 
-```py title="weather_agent.py"
-import httpx
+from pydantic import BaseModel, Field
 
 from pydantic_ai import Agent, CallContext
 
-weather_agent = Agent(  # (1)!
+from bank_database import DatabaseConn
+
+
+@dataclass
+class SupportDependencies:  # (3)!
+    customer_id: int
+    db: DatabaseConn
+
+
+class SupportResult(BaseModel):
+    support_advice: str = Field(description='Advice returned to the customer')
+    block_card: bool = Field(description='Whether to block their card')
+    risk: int = Field(description='Risk level of query', ge=0, le=10)
+
+
+support_agent = Agent(  # (1)!
     'openai:gpt-4o',  # (2)!
-    deps_type=httpx.AsyncClient,  # (3)!
-    system_prompt='Be concise, reply with one sentence.',  # (4)!
+    deps_type=SupportDependencies,
+    result_type=SupportResult,  # (9)!
+    system_prompt=(  # (4)!
+        'You are a support agent in our bank, give the '
+        'customer support and judge the risk level of their query. '
+        "Reply using the customer's name."
+    ),
 )
 
 
-@weather_agent.retriever_context  # (5)!
-async def get_location(
-    ctx: CallContext[httpx.AsyncClient],
-    location_description: str,
-) -> dict[str, float]:
-    """Get the latitude and longitude of a location by its description."""  # (6)!
-    response = await ctx.deps.get('https://api.geolocation...')
-    ...
-
-
-@weather_agent.retriever_context  # (7)!
-async def get_weather(
-    ctx: CallContext[httpx.AsyncClient],
-    lat: float,
-    lng: float,
-) -> dict[str, str]:
-    """Get the weather at a location by its latitude and longitude."""
-    response = await ctx.deps.get('https://api.weather...')
-    ...
-
-
-async def main():
-    async with httpx.AsyncClient() as client:
-        result = await weather_agent.run(  # (8)!
-            'What is the weather like in West London and in Wiltshire?',
-            deps=client,
-        )
-        print(result.data)  # (9)!
-        #> The weather in West London is raining, while in Wiltshire it is sunny.
-
-        messages = result.all_messages()  # (10)!
+@support_agent.system_prompt  # (5)!
+async def add_customer_name(ctx: CallContext[SupportDependencies]) -> str:
+    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
+    return f"The customer's name is {customer_name!r}"
+
+
+@support_agent.retriever_context  # (6)!
+async def customer_balance(
+    ctx: CallContext[SupportDependencies], include_pending: bool
+) -> str:
+    """Returns the customer's current account balance."""  # (7)!
+    balance = await ctx.deps.db.customer_balance(
+        id=ctx.deps.customer_id,
+        include_pending=include_pending,
+    )
+    return f'${balance:.2f}'
+
+
+...  # (11)!
+
+
+deps = SupportDependencies(customer_id=123, db=DatabaseConn())
+result = support_agent.run_sync('What is my balance?', deps=deps)  # (8)!
+print(result.data)  # (10)!
+"""
+support_advice='Hello John, your current account balance, including pending transactions, is $123.45.' block_card=False risk=1
+"""
+
+result = support_agent.run_sync('I just lost my card!', deps=deps)
+print(result.data)
+"""
+support_advice="I'm sorry to hear that, John. We are temporarily blocking your card to prevent unauthorized transactions." block_card=True risk=8
+"""
 ```
 
-1. An agent that can tell users about the weather in a particular location. Agents combine a system prompt, a response type (here `str`) and "retrievers" (aka tools).
-2. Here we configure the agent to use OpenAI's GPT-4o model, you can also customise the model when running the agent.
-3. We specify the type dependencies for the agent, in this case an HTTP client, which retrievers will use to make requests to external services. PydanticAI's system of dependency injection provides a powerful, type safe way to customise the behaviour of your agents, including for unit tests and evals.
-4. Static system prompts can be registered as key word arguments to the agent, dynamic system prompts can be registered with the `@agent.system_prompot` decorator and benefit from dependency injection.
-5. Retrievers let you register "tools" which the LLM may call while to respond to a user. You inject dependencies into the retriever with `CallContext`, any other arguments become the tool schema passed to the LLM, Pydantic is used to validate these arguments, errors are passed back to the LLM so it can retry.
-6. This docstring is also passed to the LLM as a description of the tool.
-7. Multiple retrievers can be registered with the same agent, the LLM can choose which (if any) retrievers to call in order to respond to a user.
-8. Run the agent asynchronously, conducting a conversation with the LLM until a final response is reached. You can also run agents synchronously with `run_sync`. Internally agents are all async, so `run_sync` is a helper using `asyncio.run` to call `run()`.
-9. The response from the LLM, in this case a `str`, Agents are generic in both the type of `deps` and `result_type`, so calls are typed end-to-end.
-10. [`result.all_messages()`](message-history.md) includes details of messages exchanged, this is useful both to understand the conversation that took place and useful if you want to continue the conversation later — messages can be passed back to later `run/run_sync` calls.
+1. An [agent](agents.md) that acts as first-tier support in a bank. Agents are generic in the type of dependencies they take and the type of result they return; in this case `SupportDependencies` and `SupportResult`.
+2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md); you can also customise the model when running the agent.
+3. The `SupportDependencies` dataclass is used to pass data and connections into the model that will be needed when running [system prompts](agents.md#system-prompts) and [retrievers](agents.md#retrievers). PydanticAI's system of dependency injection provides a powerful, type-safe way to customise the behaviour of your agents, including for unit tests and evals.
+4. Static [system prompts](agents.md#system-prompts) can be registered as keyword arguments to the agent.
+5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the `@agent.system_prompt` decorator and benefit from dependency injection.
+6. [Retrievers](agents.md#retrievers) let you register "tools" which the LLM may call while responding to a user. You inject dependencies into the retriever with [`CallContext`][pydantic_ai.dependencies.CallContext]; any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
+7. The docstring is also passed to the LLM as a description of the tool.
+8. [Run the agent](agents.md#running-agents) synchronously, conducting a conversation with the LLM until a final response is reached.
+9. The response from the agent is guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) means the agent is prompted to try again.
+10. The result is validated with Pydantic to guarantee it is a `SupportResult`; since the agent is generic, it'll also be typed as a `SupportResult` to aid with static type checking.
+11. In a real use case, you'd add many more retrievers to the agent to extend the context it's equipped with and the support it can provide.
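Note 3 mentions that dependency injection helps with unit tests and evals: because the dependencies are a plain dataclass, a test can pass a fake in place of the real `DatabaseConn`. A hypothetical sketch (the `FakeDatabaseConn` class and its canned values are illustrative, not part of PydanticAI):

```python
import asyncio
from dataclasses import dataclass


class FakeDatabaseConn:
    """Test double exposing the methods the agent's retrievers call."""

    async def customer_name(self, *, id: int) -> str:
        return 'John'

    async def customer_balance(self, *, id: int, include_pending: bool) -> float:
        return 123.45


@dataclass
class SupportDependencies:
    customer_id: int
    db: FakeDatabaseConn  # in real code this would be typed as DatabaseConn


async def main() -> None:
    deps = SupportDependencies(customer_id=123, db=FakeDatabaseConn())
    # A retriever like customer_balance would receive these deps via CallContext;
    # here we just exercise the fake directly.
    balance = await deps.db.customer_balance(id=deps.customer_id, include_pending=True)
    print(f'${balance:.2f}')  # $123.45


asyncio.run(main())
```

Swapping the fake in this way needs no monkeypatching, which is the point of making the dependency type explicit on the agent.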
 
-!!! tip "Complete `weather_agent.py` example"
-    This example is incomplete for the sake of brevity; you can find a complete `weather_agent.py` example [here](examples/weather-agent.md).
+!!! tip "Complete `bank_support.py` example"
+    This example is incomplete for the sake of brevity (the definition of `DatabaseConn` is missing); you can find a complete `bank_support.py` example [here](examples/bank-support.md).
 
-## Example — Result Validation
+## Next Steps
 
-TODO
+To try PydanticAI yourself, follow the instructions [in examples](examples/index.md).

docs/install.md

Lines changed: 3 additions & 1 deletion
@@ -36,4 +36,6 @@ To use Logfire with PydanticAI, install PydanticAI with the `logfire` optional g
 
 From there, follow the [Logfire documentation](https://logfire.pydantic.dev/docs/) to configure Logfire.
 
-TODO screenshot of Logfire with PydanticAI in action.
+## Next Steps
+
+To run PydanticAI, follow the instructions [in examples](examples/index.md).

mkdocs.yml

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ nav:
 - examples/index.md
 - examples/pydantic-model.md
 - examples/weather-agent.md
+- examples/bank-support.md
 - examples/sql-gen.md
 - examples/rag.md
 - examples/stream-markdown.md
