docs/index.md (68 additions, 28 deletions)
--8<-- "docs/.partials/index-header.html"
When I first found FastAPI, I got it immediately. I was excited to find something so innovative and ergonomic built on Pydantic.
Virtually every Agent Framework and LLM library in Python uses Pydantic, but when we began to use LLMs in [Pydantic Logfire](https://pydantic.dev/logfire), I couldn't find anything that gave me the same feeling.
PydanticAI is a Python Agent Framework designed to make it less painful to build production grade applications with Generative AI.
## Why use PydanticAI
* Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, Instructor and many more)
* Model-agnostic — currently both OpenAI and Gemini are supported, and Anthropic [is coming soon](https://github.com/pydantic/pydantic-ai/issues/63). There is a simple interface to implement and customize support for other models.
* Type-safe
* Control flow and agent composition are done with vanilla Python, allowing you to make use of the same Python development best practices you'd use in any other (non-AI) project
* [Structured response](results.md#structured-result-validation) validation with Pydantic
* [Streamed responses](results.md#streamed-results), including validation of streamed _structured_ responses with Pydantic
* Novel, type-safe [dependency injection system](dependencies.md), useful for testing and eval-driven iterative development
* [Logfire integration](logfire.md) for debugging and monitoring the performance and general behavior of your LLM-powered application
!!! example "In Beta"
    PydanticAI is in early beta; the API is still subject to change and there's a lot more to do.
    [Feedback](https://github.com/pydantic/pydantic-ai/issues) is very welcome!
## Hello World Example
Here's a minimal example of PydanticAI:
```py title="hello_world.py"
from pydantic_ai import Agent

agent = Agent(  # (1)!
    'gemini-1.5-flash',
    system_prompt='Be concise, reply with one sentence.',  # (2)!
)

result = agent.run_sync('Where does "hello world" come from?')  # (3)!
print(result.data)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
```
1. Define a very simple agent — here we configure the agent to use [Gemini 1.5's Flash](api/models/gemini.md) model, but you can also set the model when running the agent (see the sketch below the example).
2. Register a static [system prompt](agents.md#system-prompts) using a keyword argument to the agent. For more complex dynamically-generated system prompts, see the example below.
3. [Run the agent](agents.md#running-agents) synchronously, conducting a conversation with the LLM. Here the exchange should be very short: PydanticAI will send the system prompt and the user query to the LLM, and the model will return a text response.
_(This example is complete, it can be run "as is")_
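As note 1 above mentions, the model doesn't have to be fixed when the agent is constructed; it can also be chosen when the agent is run. Here is a minimal sketch of that, assuming an `Agent` can be created without a model and that `run_sync` accepts a `model` argument, as note 1 suggests (the `'openai:gpt-4o'` name is just an illustrative alternative):

```py
from pydantic_ai import Agent

# Same agent as above, but without binding it to a model up front.
agent = Agent(system_prompt='Be concise, reply with one sentence.')

# Choose the model per run instead; the `model` keyword here is an assumption
# based on note 1 above, and 'openai:gpt-4o' is just an illustrative model name.
result = agent.run_sync('Where does "hello world" come from?', model='openai:gpt-4o')
print(result.data)
```

See [running agents](agents.md#running-agents) for the exact signature.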
Not very interesting yet, but we can easily add "retrievers", dynamic system prompts, and structured responses to build more powerful agents.
## Retrievers & Dependency Injection Example
Here is a concise example using PydanticAI to build a support agent for a bank:
```py title="bank_support.py"
from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, CallContext

from bank_database import DatabaseConn


@dataclass
class SupportDependencies:  # (3)!
    ...  # (fields elided here; see the complete example linked below)


class SupportResult(BaseModel):  # (13)!
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)


# (the agent definition, system prompt and retriever functions are elided here)


async def main():
    ...  # (elided; see the complete example linked below)
```
1. This [agent](agents.md) will act as first-tier support in a bank. Agents are generic in the type of dependencies they accept and the type of result they return. In this case, the support agent has type `#!python Agent[SupportDependencies, SupportResult]`.
2. Here we configure the agent to use [OpenAI's GPT-4o model](api/models/openai.md), but you can also set the model when running the agent.
3. The `SupportDependencies` dataclass is used to pass data, connections, and logic into the model that will be needed when running [system prompt](agents.md#system-prompts) and [retriever](agents.md#retrievers) functions. PydanticAI's system of dependency injection provides a type-safe way to customise the behaviour of your agents, and can be especially useful when running unit tests and evals (a sketch of this appears at the end of this example).
4. Static [system prompts](agents.md#system-prompts) can be registered with the [`system_prompt` keyword argument][pydantic_ai.Agent.__init__] to the agent.
5. Dynamic [system prompts](agents.md#system-prompts) can be registered with the [`@agent.system_prompt`][pydantic_ai.Agent.system_prompt] decorator, and can make use of dependency injection. Dependencies are carried via the [`CallContext`][pydantic_ai.dependencies.CallContext] argument, which is parameterized with the `deps_type` from above. If the type annotation here is wrong, static type checkers will catch it.
6. [Retrievers](agents.md#retrievers) let you register "tools" which the LLM may call while responding to a user. Again, dependencies are carried via [`CallContext`][pydantic_ai.dependencies.CallContext], and any other arguments become the tool schema passed to the LLM. Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
7. The docstring of a retriever is also passed to the LLM as a description of the tool. Parameter descriptions are [extracted](agents.md#retrievers-tools-and-schema) from the docstring and added to the tool schema sent to the LLM.
8. [Run the agent](agents.md#running-agents) asynchronously, conducting a conversation with the LLM until a final response is reached. Even in this fairly simple case, the agent will exchange multiple messages with the LLM as retrievers are called to retrieve a result.
9. The response from the agent will be guaranteed to be a `SupportResult`; if validation fails, [reflection](agents.md#reflection-and-self-correction) means the agent is prompted to try again.
10. The result will be validated with Pydantic to guarantee it is a `SupportResult`; since the agent is generic, it'll also be typed as a `SupportResult` to aid with static type checking.
11. In a real use case, you'd add many more retrievers and a longer system prompt to the agent to extend the context it's equipped with and the support it can provide.
12. This is a simple sketch of a database connection, used to keep the example short and readable. In reality, you'd be connecting to an external database (e.g. PostgreSQL) to get information about customers.
13. This [Pydantic](https://docs.pydantic.dev) model is used to constrain the structured data returned by the agent. From this simple definition, Pydantic builds the JSON Schema that tells the LLM how to return the data, and performs validation to guarantee the data is correct at the end of the conversation (see the sketch just below).
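To make note 13 concrete, here is a small self-contained sketch (it needs only Pydantic, not PydanticAI) that prints the JSON schema Pydantic derives from `SupportResult`. This is roughly the shape of the information the LLM is given about the required result; exactly how PydanticAI packages it for a given model is an internal detail.

```py
import json

from pydantic import BaseModel, Field


class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)


# The schema includes the field descriptions and the 0-10 bounds on `risk`;
# validating the LLM's output against this model is what guarantees the final
# data is correct at the end of the conversation.
print(json.dumps(SupportResult.model_json_schema(), indent=2))
```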
To help make things more clear, here is a diagram of what is happening in the `#!python await support_agent.run('What is my balance?', deps=deps)` call within `main`:
```mermaid
sequenceDiagram
    participant DatabaseConn
    participant Agent
    participant LLM

    Note over Agent: Dynamic system prompt<br>add_customer_name()
    Agent ->> DatabaseConn: Retrieve customer name
    activate DatabaseConn
    DatabaseConn -->> Agent: "John"
    deactivate DatabaseConn

    Note over Agent: User query

    Agent ->> LLM: Request<br>System: "You are a support agent..."<br>System: "The customer's name is John"<br>User: "What is my balance?"
    %% ... (the remaining messages, in which retrievers are called and the final structured response is produced, are not shown here)
```
The code included here is incomplete for the sake of brevity (the definition of `DatabaseConn` is missing); you can find the complete `bank_support.py` example [here](examples/bank-support.md).
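As note 3 mentions, the dependency injection system is particularly handy for tests and evals: because dependencies are plain Python objects, you can pass a stub database connection instead of a real one. The sketch below is illustrative only: `support_agent` is the agent used above, but the `SupportDependencies` field names (`customer_id`, `db`) and the stub's method names are assumptions, since those parts of the example are elided here.

```py
import asyncio

from bank_support import SupportDependencies, support_agent


class StubDatabaseConn:
    """In-memory stand-in for the real database connection (hypothetical interface)."""

    async def customer_name(self, *, id: int) -> str:
        return 'John'

    async def customer_balance(self, *, id: int, include_pending: bool) -> float:
        return 123.45


async def main():
    # Field names on SupportDependencies are assumed; adjust to the real definition.
    deps = SupportDependencies(customer_id=123, db=StubDatabaseConn())
    result = await support_agent.run('What is my balance?', deps=deps)
    print(result.data)


if __name__ == '__main__':
    asyncio.run(main())
```

Note that this still sends real requests to the LLM; replacing the model itself with a test double is a separate concern not shown here.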
## Next Steps
To try PydanticAI yourself, follow the instructions [in the examples](examples/index.md).
Read the [conceptual documentation](agents.md) to learn more about building applications with PydanticAI.
Read the [API Reference](api/agent.md) to understand PydanticAI's interface.