### <em>Pydantic AI is a Python agent framework designed to help you quickly, confidently, and painlessly build production grade applications and workflows with Generative AI.</em>
FastAPI revolutionized web development by offering an innovative and ergonomic design, built on the foundation of [Pydantic Validation](https://docs.pydantic.dev) and modern Python features like type hints.
Yet despite virtually every Python agent framework and LLM library using Pydantic Validation, when we began to use LLMs in [Pydantic Logfire](https://pydantic.dev/logfire), we couldn't find anything that gave us the same feeling.
We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI app and agent development.
## Why use Pydantic AI

1. **Built by the Pydantic Team**:
[Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

2. **Model-agnostic**:
Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).

3. **Seamless Observability**:
Tightly [integrates](https://ai.pydantic.dev/logfire) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](https://ai.pydantic.dev/logfire#alternative-observability-backends).

4. **Fully Type-safe**:
Designed to give your IDE or AI coding agent as much context as possible for auto-completion and [type checking](https://ai.pydantic.dev/agents#static-type-checking), moving entire classes of errors from runtime to write-time for a bit of that Rust "if it compiles, it works" feel.

5. **Powerful Evals**:
Enables you to systematically test and [evaluate](https://ai.pydantic.dev/evals) the performance and accuracy of the agentic systems you build, and monitor the performance over time in Pydantic Logfire.

6. **MCP, A2A, and AG-UI**:
Integrates the [Model Context Protocol](https://ai.pydantic.dev/mcp/client), [Agent2Agent](https://ai.pydantic.dev/a2a), and [AG-UI](https://ai.pydantic.dev/ag-ui) standards to give your agent access to external tools and data, let it interoperate with other agents, and build interactive applications with streaming event-based communication.

7. **Human-in-the-Loop Tool Approval**:
Easily lets you flag that certain tool calls [require approval](https://ai.pydantic.dev/deferred-tools#human-in-the-loop-tool-approval) before they can proceed, possibly depending on tool call arguments, conversation history, or user preferences.

8. **Durable Execution**:
Enables you to build [durable agents](https://ai.pydantic.dev/temporal) that can preserve their progress across transient API failures and application errors or restarts, and handle long-running, asynchronous, and human-in-the-loop workflows with production-grade reliability.

9. **Streamed Outputs**:
Provides the ability to [stream](https://ai.pydantic.dev/output#streamed-results) structured output continuously, with immediate validation, ensuring real time access to generated data.

10. **Graph Support**:
Provides a powerful way to define [graphs](https://ai.pydantic.dev/graph) using type hints, for use in complex applications where standard control flow can degrade to spaghetti code.
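The "immediate validation" behind structured and streamed outputs is plain [Pydantic Validation](https://docs.pydantic.dev). As a minimal sketch of the underlying mechanism (the `CityInfo` schema here is illustrative, not part of Pydantic AI):

```python
from pydantic import BaseModel, Field, ValidationError


# Illustrative output schema; Pydantic AI validates model responses against
# schemas like this before they ever reach your code.
class CityInfo(BaseModel):
    city: str
    population: int = Field(gt=0)


# A well-formed model response parses cleanly (note the str -> int coercion):
ok = CityInfo.model_validate({'city': 'London', 'population': '8866000'})
assert ok.population == 8866000

# A malformed response raises, which Pydantic AI turns into a retry prompt for the model:
try:
    CityInfo.model_validate({'city': 'London', 'population': -1})
except ValidationError as exc:
    assert exc.error_count() == 1
```

When an agent's `output_type` validation fails like this, the error details are sent back to the model so it can correct itself.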
Realistically though, no list is going to be as convincing as [giving it a try](#next-steps) and seeing how it makes you feel!
## Hello World Example

```python
from pydantic_ai import Agent

# Define a very simple agent, including the model to use; you can also set the model when running the agent.
agent = Agent(
    'anthropic:claude-sonnet-4-0',
    # Register static instructions using a keyword argument to the agent.
    # For more complex dynamically-generated instructions, see the example below.
    instructions='Be concise, reply with one sentence.',
)

# Run the agent synchronously, conducting a conversation with the LLM.
result = agent.run_sync('Where does "hello world" come from?')
print(result.output)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
```
_(This example is complete, it can be run "as is", assuming you've [installed the `pydantic_ai` package](https://ai.pydantic.dev/install))_
The exchange will be very short: Pydantic AI will send the instructions and the user prompt to the LLM, and the model will return a text response.
Not very interesting yet, but we can easily add [tools](https://ai.pydantic.dev/tools), [dynamic instructions](https://ai.pydantic.dev/agents#instructions), and [structured outputs](https://ai.pydantic.dev/output) to build more powerful agents.
## Tools & Dependency Injection Example

```python
from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

from bank_database import DatabaseConn


# SupportDependencies is used to pass data, connections, and logic into the model that will be needed when running
# instructions and tool functions. Dependency injection provides a type-safe way to customise the behavior of your agents.
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


# This Pydantic model defines the structure of the output returned by the agent.
class SupportOutput(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    # ... (further fields elided in this excerpt)


# Agents are generic in the type of dependencies they accept and the type of output they return.
# In this case, the support agent has type `Agent[SupportDependencies, SupportOutput]`.
support_agent = Agent(
    'openai:gpt-5',
    deps_type=SupportDependencies,
    # The response from the agent will be guaranteed to be a SupportOutput;
    # if validation fails the agent is prompted to try again.
    output_type=SupportOutput,
    instructions=(
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query.'
    ),
)


# Dynamic instructions can make use of dependency injection.
# Dependencies are carried via the `RunContext` argument, which is parameterized with the `deps_type` from above.
# If the type annotation here is wrong, static type checkers will catch it.
# ... (instructions function elided in this excerpt)


# The `tool` decorator lets you register functions which the LLM may call while responding to a user.
# Again, dependencies are carried via `RunContext`; any other arguments become the tool schema passed to the LLM.
# Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
@support_agent.tool
async def support_tool(ctx: RunContext[SupportDependencies]) -> str:
    # (hypothetical signature; the real tool function is elided from this excerpt)
    ...


async def main():
    # (body elided in this excerpt)
    ...
```
## Next Steps
To try Pydantic AI for yourself, [install it](https://ai.pydantic.dev/install) and follow the instructions [in the examples](https://ai.pydantic.dev/examples/setup).
Read the [docs](https://ai.pydantic.dev/agents/) to learn more about building applications with Pydantic AI.
Read the [API Reference](https://ai.pydantic.dev/api/agent/) to understand Pydantic AI's interface.
Join [Slack](https://logfire.pydantic.dev/docs/join-slack/) or file an issue on [GitHub](https://github.com/pydantic/pydantic-ai/issues) if you have any questions.
| Component | Description |
|---|---|
| [Instructions](#instructions) | A set of instructions for the LLM written by the developer. |
| [Function tool(s)](tools.md) and [toolsets](toolsets.md) | Functions that the LLM may call to get information while generating a response. |
| [Structured output type](output.md) | The structured datatype the LLM must return at the end of a run, if specified. |
| [Dependency type constraint](dependencies.md) | Dynamic instructions functions, tools, and output functions may all use dependencies when they're run. |
| [LLM model](api/models/base.md) | Optional default LLM model associated with the agent. Can also be specified when running the agent. |
| [Model Settings](#additional-configuration) | Optional default model settings to help fine tune requests. Can also be specified when running the agent. |

Note that returning an empty string will result in no instruction message added.
Validation errors from both function tool parameter validation and [structured output validation](output.md#structured-output) can be passed back to the model with a request to retry.
You can also raise [`ModelRetry`][pydantic_ai.exceptions.ModelRetry] from within a [tool](tools.md) or [output function](output.md#output-functions) to tell the model it should retry generating a response.
- The default retry count is **1** but can be altered for the [entire agent][pydantic_ai.Agent.__init__], a [specific tool][pydantic_ai.Agent.tool], or [outputs][pydantic_ai.Agent.__init__].
- You can access the current retry count from within a tool or output function via [`ctx.retry`][pydantic_ai.tools.RunContext].