### README.md (24 additions, 8 deletions)
@@ -33,14 +33,30 @@ We built PydanticAI with one simple aim: to bring that FastAPI feeling to GenAI

  ## Why use PydanticAI

- * Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more)
- * [Model-agnostic](https://ai.pydantic.dev/models/) — currently OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral are supported, and there is a simple interface to implement support for other models.
- * Control flow and agent composition is done with vanilla Python, allowing you to make use of the same Python development best practices you'd use in any other (non-AI) project
- * [Structured response](https://ai.pydantic.dev/results/#structured-result-validation) validation with Pydantic
- * [Streamed responses](https://ai.pydantic.dev/results/#streamed-results), including validation of streamed _structured_ responses with Pydantic
- * Novel, type-safe [dependency injection system](https://ai.pydantic.dev/dependencies/), useful for testing and eval-driven iterative development
- * [Logfire integration](https://ai.pydantic.dev/logfire/) for debugging and monitoring the performance and general behavior of your LLM-powered application
+ * __Built by the Pydantic Team__
+
+   Built by the team behind [Pydantic](https://docs.pydantic.dev/latest/) (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more).
+
+ * __Model-agnostic__
+
+   Supports OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral, and there is a simple interface to implement support for [other models](models.md).
+
+ * __Pydantic Logfire Integration__
+
+   Seamlessly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire) for real-time debugging, performance monitoring, and behavior tracking of your LLM-powered applications.
+
+ * __Type-safe__
+
+   Designed to make type checking as useful as possible for you, so it [integrates](agents.md#static-type-checking) well with static type checkers, like [`mypy`](https://github.com/python/mypy) and [`pyright`](https://github.com/microsoft/pyright).
+
+ * __Python-centric Design__
+
+   Leverages Python's familiar control flow and agent composition to build your AI-driven projects, making it easy to apply standard Python best practices you'd use in any other (non-AI) project.
+
+ * __Structured Responses__
+
+   Harnesses the power of [Pydantic](https://docs.pydantic.dev/latest/) to [validate and structure](results.md#structured-result-validation) model outputs, ensuring responses are consistent across runs.
+
+ * __Dependency Injection System__
+
+   Offers an optional [dependency injection](dependencies.md) system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](tools.md) and [result validators](results.md#result-validators-functions).
+
+   This is useful for testing and eval-driven iterative development.
+
+ * __Streamed Responses__
+
+   Provides the ability to [stream](results.md#streamed-results) LLM outputs continuously, with immediate validation, ensuring rapid and accurate results.
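The structured-response feature described above rests on ordinary Pydantic validation. A rough sketch of that idea (plain Pydantic v2, not PydanticAI's own API; `CityInfo` and the raw JSON string are invented for illustration):

```python
from pydantic import BaseModel, ValidationError

class CityInfo(BaseModel):
    """Hypothetical structured result type an agent could be asked to return."""
    city: str
    country: str
    population: int

# Stand-in for raw LLM output; in a real run the model produces this text.
raw_output = '{"city": "London", "country": "GB", "population": 8866000}'

try:
    info = CityInfo.model_validate_json(raw_output)
except ValidationError as exc:
    # A framework could feed these errors back to the model and retry.
    print(exc.errors())
else:
    print(info.city, info.population)
```

Because the result is a validated `CityInfo` instance rather than a loose dict, downstream code sees the same shape on every run.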
### docs/agents.md (8 additions, 5 deletions)
@@ -7,11 +7,14 @@ but multiple agents can also interact to embody more complex workflows.

  The [`Agent`][pydantic_ai.Agent] class has full API documentation, but conceptually you can think of an agent as a container for:

- * A [system prompt](#system-prompts) — a set of instructions for the LLM written by the developer
- * One or more [function tools](tools.md) — functions that the LLM may call to get information while generating a response
- * An optional structured [result type](results.md) — the structured datatype the LLM must return at the end of a run
- * A [dependency](dependencies.md) type constraint — system prompt functions, tools and result validators may all use dependencies when they're run
- * Agents may optionally also have a default [LLM model](api/models/base.md) associated with them; the model to use can also be specified when running the agent
+ | Component | Description |
+ | --- | --- |
+ | [System prompt(s)](#system-prompts) | A set of instructions for the LLM written by the developer. |
+ | [Function tool(s)](tools.md) | Functions that the LLM may call to get information while generating a response. |
+ | [Structured result type](results.md) | The structured datatype the LLM must return at the end of a run, if specified. |
+ | [Dependency type constraint](dependencies.md) | System prompt functions, tools, and result validators may all use dependencies when they're run. |
+ | [LLM model](api/models/base.md) | Optional default LLM model associated with the agent. Can also be specified when running the agent. |
+ | [Model Settings](#additional-configuration) | Optional default model settings to help fine tune requests. Can also be specified when running the agent. |
  In typing terms, agents are generic in their dependency and result types, e.g., an agent that requires dependencies of type `#!python Foobar` and returns results of type `#!python list[str]` would have type `#!python Agent[Foobar, list[str]]`. In practice you shouldn't need to care about this; it just means your IDE can tell you when you have the right types, and if you choose to use [static type checking](#static-type-checking) it should work well with PydanticAI.
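The generic typing described in this paragraph can be illustrated with the stdlib `typing` module alone. This is a toy stand-in for the real `Agent` class, meant only to show how the two type parameters flow through:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

AgentDepsT = TypeVar('AgentDepsT')
ResultT = TypeVar('ResultT')

class ToyAgent(Generic[AgentDepsT, ResultT]):
    """Toy agent: generic in its dependency and result types."""

    def __init__(self, run_fn: Callable[[str, AgentDepsT], ResultT]):
        self._run_fn = run_fn

    def run_sync(self, prompt: str, deps: AgentDepsT) -> ResultT:
        return self._run_fn(prompt, deps)

@dataclass
class Foobar:
    api_key: str

# This agent has type ToyAgent[Foobar, list[str]]: a static type checker
# will flag calls that pass the wrong deps type or misuse the result.
agent: ToyAgent[Foobar, list[str]] = ToyAgent(lambda prompt, deps: prompt.split())
words = agent.run_sync('hello world', Foobar(api_key='dummy'))
print(words)
```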
### docs/index.md (29 additions, 11 deletions)
@@ -13,14 +13,30 @@ We built PydanticAI with one simple aim: to bring that FastAPI feeling to GenAI

  ## Why use PydanticAI

- * Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more)
- * [Model-agnostic](models.md) — currently OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral are supported, and there is a simple interface to implement support for other models.
- * [Type-safe](agents.md#static-type-checking)
- * Control flow and agent composition is done with vanilla Python, allowing you to make use of the same Python development best practices you'd use in any other (non-AI) project
- * [Structured response](results.md#structured-result-validation) validation with Pydantic
- * [Streamed responses](results.md#streamed-results), including validation of streamed _structured_ responses with Pydantic
- * Novel, type-safe [dependency injection system](dependencies.md), useful for testing and eval-driven iterative development
- * [Logfire integration](logfire.md) for debugging and monitoring the performance and general behavior of your LLM-powered application
+ :material-account-group:{ .md .middle .team-blue } <strong class="vertical-middle">Built by the Pydantic Team</strong><br>
+ Built by the team behind [Pydantic](https://docs.pydantic.dev/latest/) (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more).
+
+ Seamlessly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire) for real-time debugging, performance monitoring, and behavior tracking of your LLM-powered applications.
+
+ Designed to make type checking as useful as possible for you, so it [integrates](agents.md#static-type-checking) well with static type checkers, like [`mypy`](https://github.com/python/mypy) and [`pyright`](https://github.com/microsoft/pyright).
+
+ Leverages Python's familiar control flow and agent composition to build your AI-driven projects, making it easy to apply standard Python best practices you'd use in any other (non-AI) project.
+
+ Harnesses the power of [Pydantic](https://docs.pydantic.dev/latest/) to [validate and structure](results.md#structured-result-validation) model outputs, ensuring responses are consistent across runs.
+
+ Offers an optional [dependency injection](dependencies.md) system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](tools.md) and [result validators](results.md#result-validators-functions).
+ This is useful for testing and eval-driven iterative development.
+
+ Provides the ability to [stream](results.md#streamed-results) LLM outputs continuously, with immediate validation, ensuring rapid and accurate results.
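The streamed-response idea above (validate output while it arrives, rather than only once the response is complete) can be sketched without any LLM. The token chunks below are simulated, and the key-presence check stands in for real Pydantic validation:

```python
import json

def stream_validated(chunks, required_keys):
    """Accumulate streamed text; yield the parsed object as soon as the
    buffer is complete JSON containing all required keys."""
    buffer = ''
    for chunk in chunks:
        buffer += chunk
        try:
            obj = json.loads(buffer)
        except json.JSONDecodeError:
            continue  # JSON not yet complete; keep consuming the stream
        if required_keys <= obj.keys():
            yield obj

# Simulated LLM token stream producing one JSON object.
chunks = ['{"name": "Ada', '", "age": ', '36}']
results = list(stream_validated(chunks, {'name', 'age'}))
print(results)
```

Consumers get a validated object the moment enough of the stream has arrived, instead of waiting for the whole response.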
  !!! example "In Beta"
      PydanticAI is in early beta, the API is still subject to change and there's a lot more to do.
@@ -45,12 +61,14 @@ The first known use of "hello, world" was in a 1974 textbook about the C program

  """
  ```

- 1. Define a very simple agent — here we configure the agent to use [Gemini 1.5's Flash](api/models/gemini.md) model, but you can also set the model when running the agent.
- 2. Register a static [system prompt](agents.md#system-prompts) using a keyword argument to the agent. For more complex dynamically-generated system prompts, see the example below.
- 3. [Run the agent](agents.md#running-agents) synchronously, conducting a conversation with the LLM. Here the exchange should be very short: PydanticAI will send the system prompt and the user query to the LLM, the model will return a text response.
+ 1. We configure the agent to use [Gemini 1.5's Flash](api/models/gemini.md) model, but you can also set the model when running the agent.
+ 2. Register a static [system prompt](agents.md#system-prompts) using a keyword argument to the agent.
+ 3. [Run the agent](agents.md#running-agents) synchronously, conducting a conversation with the LLM.

  _(This example is complete, it can be run "as is")_

+ The exchange should be very short: PydanticAI will send the system prompt and the user query to the LLM, and the model will return a text response.

  Not very interesting yet, but we can easily add "tools", dynamic system prompts, and structured responses to build more powerful agents.
0 commit comments