Commit 320f5ce

Various docs improvements (#809)
1 parent a109814 commit 320f5ce

File tree

6 files changed: +52 −17 lines


docs/index.md

Lines changed: 9 additions & 9 deletions

@@ -13,32 +13,32 @@ We built PydanticAI with one simple aim: to bring that FastAPI feeling to GenAI
 
 ## Why use PydanticAI
 
-:material-account-group:{ .md .middle .team-blue }&nbsp;<strong class="vertical-middle">Built by the Pydantic Team</strong><br>
+* __Built by the Pydantic Team__:
 Built by the team behind [Pydantic](https://docs.pydantic.dev/latest/) (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more).
 
-:fontawesome-solid-shapes:{ .md .middle .shapes-orange }&nbsp;<strong class="vertical-middle">Model-agnostic</strong><br>
+* __Model-agnostic__:
 Supports OpenAI, Anthropic, Gemini, Deepseek, Ollama, Groq, Cohere, and Mistral, and there is a simple interface to implement support for [other models](models.md).
 
-:logfire-logo:{ .md .middle }&nbsp;<strong class="vertical-middle">Pydantic Logfire Integration</strong><br>
+* __Pydantic Logfire Integration__:
 Seamlessly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire) for real-time debugging, performance monitoring, and behavior tracking of your LLM-powered applications.
 
-:material-shield-check:{ .md .middle .secure-green }&nbsp;<strong class="vertical-middle">Type-safe</strong><br>
+* __Type-safe__:
 Designed to make [type checking](agents.md#static-type-checking) as powerful and informative as possible for you.
 
-:snake:{ .md .middle }&nbsp;<strong class="vertical-middle">Python-centric Design</strong><br>
+* __Python-centric Design__:
 Leverages Python's familiar control flow and agent composition to build your AI-driven projects, making it easy to apply standard Python best practices you'd use in any other (non-AI) project.
 
-:simple-pydantic:{ .md .middle .pydantic-pink }&nbsp;<strong class="vertical-middle">Structured Responses</strong><br>
+* __Structured Responses__:
 Harnesses the power of [Pydantic](https://docs.pydantic.dev/latest/) to [validate and structure](results.md#structured-result-validation) model outputs, ensuring responses are consistent across runs.
 
-:material-puzzle-plus:{ .md .middle .puzzle-purple }&nbsp;<strong class="vertical-middle">Dependency Injection System</strong><br>
+* __Dependency Injection System__:
 Offers an optional [dependency injection](dependencies.md) system to provide data and services to your agent's [system prompts](agents.md#system-prompts), [tools](tools.md) and [result validators](results.md#result-validators-functions).
 This is useful for testing and eval-driven iterative development.
 
-:material-sine-wave:{ .md .middle }&nbsp;<strong class="vertical-middle">Streamed Responses</strong><br>
+* __Streamed Responses__:
 Provides the ability to [stream](results.md#streamed-results) LLM outputs continuously, with immediate validation, ensuring rapid and accurate results.
 
-:material-graph:{ .md .middle .graph-green }&nbsp;<strong class="vertical-middle">Graph Support</strong><br>
+* __Graph Support__:
 [Pydantic Graph](graph.md) provides a powerful way to define graphs using type hints; this is useful in complex applications where standard control flow can degrade to spaghetti code.
 
 !!! example "In Beta"

docs/logfire.md

Lines changed: 21 additions & 1 deletion

@@ -59,7 +59,9 @@ import logfire
 logfire.configure()
 ```
 
-The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use logfire, including how to instrument other libraries like Pydantic, HTTPX and FastAPI.
+The [logfire documentation](https://logfire.pydantic.dev/docs/) has more details on how to use logfire,
+including how to instrument other libraries like [Pydantic](https://logfire.pydantic.dev/docs/integrations/pydantic/),
+[HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) and [FastAPI](https://logfire.pydantic.dev/docs/integrations/web-frameworks/fastapi/).
 
 Since Logfire is built on [OpenTelemetry](https://opentelemetry.io/), you can use the Logfire Python SDK to send data to any OpenTelemetry collector.
 
@@ -79,3 +81,21 @@ To demonstrate how Logfire can let you visualise the flow of a PydanticAI run, h
 We can also query data with SQL in Logfire to monitor the performance of an application. Here's a real world example of using Logfire to monitor PydanticAI runs inside Logfire itself:
 
 ![Logfire monitoring PydanticAI](img/logfire-monitoring-pydanticai.png)
+
+### Monitoring HTTPX Requests
+
+In order to monitor HTTPX requests made by models, you can use `logfire`'s [HTTPX](https://logfire.pydantic.dev/docs/integrations/http-clients/httpx/) integration.
+
+Instrumentation is as easy as adding the following three lines to your application:
+
+```py {title="instrument_httpx.py" test="skip" lint="skip"}
+...
+import logfire
+logfire.configure()  # (1)!
+logfire.instrument_httpx()  # (2)!
+...
+```
+
+In particular, this can help you to trace specific requests, responses, and headers which might be of particular interest
+if you're using a custom `httpx` client in your model.

docs/message-history.md

Lines changed: 10 additions & 6 deletions

@@ -166,7 +166,7 @@ print(result1.data)
 
 result2 = agent.run_sync('Explain?', message_history=result1.new_messages())
 print(result2.data)
-#> This is an excellent joke invent by Samuel Colvin, it needs no explanation.
+#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.
 
 print(result2.all_messages())
 """
@@ -210,7 +210,7 @@ print(result2.all_messages())
 ModelResponse(
     parts=[
         TextPart(
-            content='This is an excellent joke invent by Samuel Colvin, it needs no explanation.',
+            content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
             part_kind='text',
         )
     ],
@@ -229,7 +229,9 @@ Since messages are defined by simple dataclasses, you can manually create and ma
 
 The message format is independent of the model used, so you can use messages in different agents, or the same agent with different models.
 
-```python
+In the example below, we reuse the message from the first agent run, which uses the `openai:gpt-4o` model, in a second agent run using the `google-gla:gemini-1.5-pro` model.
+
+```python {title="Reusing messages with a different model" hl_lines="11"}
 from pydantic_ai import Agent
 
 agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
@@ -239,10 +241,12 @@ print(result1.data)
 #> Did you hear about the toothpaste scandal? They called it Colgate.
 
 result2 = agent.run_sync(
-    'Explain?', model='gemini-1.5-pro', message_history=result1.new_messages()
+    'Explain?',
+    model='google-gla:gemini-1.5-pro',
+    message_history=result1.new_messages(),
 )
 print(result2.data)
-#> This is an excellent joke invent by Samuel Colvin, it needs no explanation.
+#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.
 
 print(result2.all_messages())
 """
@@ -286,7 +290,7 @@ print(result2.all_messages())
 ModelResponse(
     parts=[
         TextPart(
-            content='This is an excellent joke invent by Samuel Colvin, it needs no explanation.',
+            content='This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
             part_kind='text',
         )
     ],

docs/troubleshooting.md

Lines changed: 6 additions & 0 deletions

@@ -19,3 +19,9 @@ Note: This fix also applies to Google Colab.
 ### `UserError: API key must be provided or set in the [MODEL]_API_KEY environment variable`
 
 If you're running into issues with setting the API key for your model, visit the [Models](models.md) page to learn more about how to set an environment variable and/or pass in an `api_key` argument.
+
+## Monitoring HTTPX Requests
+
+You can use custom `httpx` clients in your models in order to access specific requests, responses, and headers at runtime.
+
+It's particularly helpful to use `logfire`'s [HTTPX integration](logfire.md#monitoring-httpx-requests) to monitor the above.

pydantic_ai_slim/pydantic_ai/settings.py

Lines changed: 5 additions & 0 deletions

@@ -80,6 +80,7 @@ class ModelSettings(TypedDict, total=False):
     """Whether to allow parallel tool calls.
 
     Supported by:
+
     * OpenAI (some models, not o1)
     * Groq
     * Anthropic
@@ -89,6 +90,7 @@ class ModelSettings(TypedDict, total=False):
     """The random seed to use for the model, theoretically allowing for deterministic results.
 
     Supported by:
+
     * OpenAI
     * Groq
     * Cohere
@@ -99,6 +101,7 @@ class ModelSettings(TypedDict, total=False):
     """Penalize new tokens based on whether they have appeared in the text so far.
 
     Supported by:
+
     * OpenAI
     * Groq
     * Cohere
@@ -110,6 +113,7 @@ class ModelSettings(TypedDict, total=False):
     """Penalize new tokens based on their existing frequency in the text so far.
 
    Supported by:
+
    * OpenAI
    * Groq
    * Cohere
@@ -121,6 +125,7 @@ class ModelSettings(TypedDict, total=False):
    """Modify the likelihood of specified tokens appearing in the completion.
 
    Supported by:
+
    * OpenAI
    * Groq
    """

tests/test_examples.py

Lines changed: 1 addition & 1 deletion

@@ -179,7 +179,7 @@ def rich_prompt_ask(prompt: str, *_args: Any, **_kwargs: Any) -> str:
         'The weather in West London is raining, while in Wiltshire it is sunny.'
     ),
     'Tell me a joke.': 'Did you hear about the toothpaste scandal? They called it Colgate.',
-    'Explain?': 'This is an excellent joke invent by Samuel Colvin, it needs no explanation.',
+    'Explain?': 'This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
     'What is the capital of France?': 'Paris',
     'What is the capital of Italy?': 'Rome',
     'What is the capital of the UK?': 'London',
