### docs/index.md (+9 −9)
@@ -13,32 +13,32 @@ We built PydanticAI with one simple aim: to bring that FastAPI feeling to GenAI
## Why use PydanticAI
- :material-account-group:{ .md .middle .team-blue } <strong class="vertical-middle">Built by the Pydantic Team</strong><br>
+ *__Built by the Pydantic Team__:
* Built by the team behind [Pydantic](https://docs.pydantic.dev/latest/) (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more).
* Supports OpenAI, Anthropic, Gemini, DeepSeek, Ollama, Groq, Cohere, and Mistral, with a simple interface to implement support for [other models](models.md).
* Seamlessly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire) for real-time debugging, performance monitoring, and behavior tracking of your LLM-powered applications.
* Leverages Python's familiar control flow and agent composition to build your AI-driven projects, making it easy to apply the standard Python best practices you'd use in any other (non-AI) project.
* Harnesses the power of [Pydantic](https://docs.pydantic.dev/latest/) to [validate and structure](results.md#structured-result-validation) model outputs, ensuring responses are consistent across runs.
* Offers an optional [dependency injection](dependencies.md) system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](tools.md) and [result validators](results.md#result-validators-functions). This is useful for testing and eval-driven iterative development.
* Provides the ability to [stream](results.md#streamed-results) LLM outputs continuously, with immediate validation, ensuring rapid and accurate results.
* [Pydantic Graph](graph.md) provides a powerful way to define graphs using type hints, which is useful in complex applications where standard control flow can degrade into spaghetti code.
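The structured-output point above can be illustrated with plain Pydantic validation. This is a minimal sketch, assuming a hypothetical `Answer` schema and a hypothetical raw model output, not PydanticAI's actual result machinery:

```python
from pydantic import BaseModel


class Answer(BaseModel):
    city: str
    population: int


# hypothetical raw text a model might return
raw = '{"city": "London", "population": 8866000}'

# validation fails loudly if the model's output doesn't match the schema
answer = Answer.model_validate_json(raw)
print(answer.city)
#> London
```

The same mechanism underlies structured result validation: the model's output is parsed against a declared schema instead of being trusted as free text.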
### docs/logfire.md (+21 −1)
@@ -59,7 +59,9 @@ import logfire
```python
import logfire

logfire.configure()
```
- The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use logfire, including how to instrument other libraries like Pydantic, HTTPX and FastAPI.
+ The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use logfire,
+ including how to instrument other libraries like [Pydantic](https://logfire.pydantic.dev/docs/integrations/pydantic/),
+ [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) and [FastAPI](https://logfire.pydantic.dev/docs/integrations/web-frameworks/fastapi/).
Since Logfire is built on [OpenTelemetry](https://opentelemetry.io/), you can use the Logfire Python SDK to send data to any OpenTelemetry collector.
@@ -79,3 +81,21 @@ To demonstrate how Logfire can let you visualise the flow of a PydanticAI run, h
We can also query data with SQL in Logfire to monitor the performance of an application. Here's a real world example of using Logfire to monitor PydanticAI runs inside Logfire itself:
In order to monitor HTTPX requests made by models, you can use `logfire`'s [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) integration.
Instrumentation is as easy as adding the following three lines to your application:
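The three lines referenced above are elided in this excerpt; based on Logfire's documented HTTPX integration, they presumably look like the following sketch (`logfire.instrument_httpx()` is the entry point named in the Logfire docs):

```python
import logfire

logfire.configure()  # set up the Logfire SDK
logfire.instrument_httpx()  # record every HTTPX request/response as a span
```

Running this requires a configured Logfire project, so treat it as a configuration sketch rather than a verified snippet.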
- #> This is an excellent joke invent by Samuel Colvin, it needs no explanation.
+ #> This is an excellent joke invented by Samuel Colvin, it needs no explanation.

print(result2.all_messages())
"""
@@ -210,7 +210,7 @@ print(result2.all_messages())
ModelResponse(
    parts=[
        TextPart(
-            content='This is an excellent joke invent by Samuel Colvin, it needs no explanation.',
+            content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
            part_kind='text',
        )
    ],
@@ -229,7 +229,9 @@ Since messages are defined by simple dataclasses, you can manually create and ma
The message format is independent of the model used, so you can use messages in different agents, or the same agent with different models.

- ```python
+ In the example below, we reuse the message from the first agent run, which uses the `openai:gpt-4o` model, in a second agent run using the `google-gla:gemini-1.5-pro` model.
+
+ ```python {title="Reusing messages with a different model" hl_lines="11"}
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
@@ -239,10 +241,12 @@ print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.
### docs/troubleshooting.md (+6 −0)
@@ -19,3 +19,9 @@ Note: This fix also applies to Google Colab.
### `UserError: API key must be provided or set in the [MODEL]_API_KEY environment variable`
If you're running into issues with setting the API key for your model, visit the [Models](models.md) page to learn more about how to set an environment variable and/or pass in an `api_key` argument.
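As a quick illustration of the environment-variable route (a sketch; the exact variable name depends on your provider, e.g. `OPENAI_API_KEY` for OpenAI, and `my_agent.py` is a placeholder script name):

```shell
# set the key for the current shell session (bash/zsh)
export OPENAI_API_KEY='your-api-key'

# or provide it just for one command
OPENAI_API_KEY='your-api-key' python my_agent.py
```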
+ ## Monitoring HTTPX Requests
+
+ You can use custom `httpx` clients in your models in order to access specific requests, responses, and headers at runtime.
+
+ It's particularly helpful to use `logfire`'s [HTTPX integration](logfire.md#monitoring-httpx-requests) to monitor these requests.