
Commit 02f0eba

Merge branch 'main' into dmontagu/record-instructions-on-agent-run-span

2 parents: 743aadb + f3f40fe

21 files changed: +699 −57 lines


README.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
    [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

 2. **Model-agnostic**:
-   Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).
+   Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).

 3. **Seamless Observability**:
    Tightly [integrates](https://ai.pydantic.dev/logfire) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](https://ai.pydantic.dev/logfire#alternative-observability-backends).

docs/api/providers.md

Lines changed: 2 additions & 0 deletions
@@ -41,3 +41,5 @@
 ::: pydantic_ai.providers.ollama.OllamaProvider

 ::: pydantic_ai.providers.litellm.LiteLLMProvider
+
+::: pydantic_ai.providers.nebius.NebiusProvider

docs/index.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
    [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

 2. **Model-agnostic**:
-   Supports virtually every [model](models/overview.md) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel. If your favorite model or provider is not listed, you can easily implement a [custom model](models/overview.md#custom-models).
+   Supports virtually every [model](models/overview.md) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](models/overview.md#custom-models).

 3. **Seamless Observability**:
    Tightly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](logfire.md#alternative-observability-backends).

docs/models/openai.md

Lines changed: 32 additions & 0 deletions
@@ -608,3 +608,35 @@ print(result.output)
 #> The capital of France is Paris.
 ...
 ```
+
+### Nebius AI Studio
+
+Go to [Nebius AI Studio](https://studio.nebius.com/) and create an API key.
+
+Once you've set the `NEBIUS_API_KEY` environment variable, you can run the following:
+
+```python
+from pydantic_ai import Agent
+
+agent = Agent('nebius:Qwen/Qwen3-32B-fast')
+result = agent.run_sync('What is the capital of France?')
+print(result.output)
+#> The capital of France is Paris.
+```
+
+If you need to configure the provider, you can use the [`NebiusProvider`][pydantic_ai.providers.nebius.NebiusProvider] class:
+
+```python
+from pydantic_ai import Agent
+from pydantic_ai.models.openai import OpenAIChatModel
+from pydantic_ai.providers.nebius import NebiusProvider
+
+model = OpenAIChatModel(
+    'Qwen/Qwen3-32B-fast',
+    provider=NebiusProvider(api_key='your-nebius-api-key'),
+)
+agent = Agent(model)
+result = agent.run_sync('What is the capital of France?')
+print(result.output)
+#> The capital of France is Paris.
+```

docs/models/overview.md

Lines changed: 1 addition & 0 deletions
@@ -28,6 +28,7 @@ In addition, many providers are compatible with the OpenAI API, and can be used
 - [GitHub Models](openai.md#github-models)
 - [Cerebras](openai.md#cerebras)
 - [LiteLLM](openai.md#litellm)
+- [Nebius AI Studio](openai.md#nebius-ai-studio)

 Pydantic AI also comes with [`TestModel`](../api/models/test.md) and [`FunctionModel`](../api/models/function.md)
 for testing and development.

mkdocs.yml

Lines changed: 2 additions & 1 deletion
@@ -316,7 +316,7 @@ plugins:
 Graphs:
 - graph.md
 API Reference:
-- api/*/*.md
+- api/*.md
 Evals:
 - evals.md
 Durable Execution:
@@ -328,6 +328,7 @@ plugins:
 - cli.md
 - logfire.md
 - contributing.md
+Examples:
 - examples/*.md

 # DON'T PUT REDIRECTS IN THIS FILE! Instead add them to docs-site/src/index.ts

pydantic_ai_slim/pydantic_ai/_parts_manager.py

Lines changed: 3 additions & 0 deletions
@@ -312,6 +312,7 @@ def handle_tool_call_part(
         tool_name: str,
         args: str | dict[str, Any] | None,
         tool_call_id: str | None = None,
+        id: str | None = None,
     ) -> ModelResponseStreamEvent:
         """Immediately create or fully-overwrite a ToolCallPart with the given information.

@@ -323,6 +324,7 @@ def handle_tool_call_part(
             tool_name: The name of the tool being invoked.
             args: The arguments for the tool call, either as a string, a dictionary, or None.
             tool_call_id: An optional string identifier for this tool call.
+            id: An optional identifier for this tool call part.

         Returns:
             ModelResponseStreamEvent: A `PartStartEvent` indicating that a new tool call part
@@ -332,6 +334,7 @@ def handle_tool_call_part(
             tool_name=tool_name,
             args=args,
             tool_call_id=tool_call_id or _generate_tool_call_id(),
+            id=id,
         )
         if vendor_part_id is None:
             # vendor_part_id is None, so we unconditionally append a new ToolCallPart to the end of the list

pydantic_ai_slim/pydantic_ai/durable_exec/temporal/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -62,6 +62,8 @@ def configure_worker(self, config: WorkerConfig) -> WorkerConfig:
             'logfire',
             'rich',
             'httpx',
+            'anyio',
+            'httpcore',
             # Imported inside `logfire._internal.json_encoder` when running `logfire.info` inside an activity with attributes to serialize
             'attrs',
             # Imported inside `logfire._internal.json_schema` when running `logfire.info` inside an activity with attributes to serialize

pydantic_ai_slim/pydantic_ai/messages.py

Lines changed: 7 additions & 0 deletions
@@ -1052,6 +1052,13 @@ class BaseToolCallPart:
     In case the tool call id is not provided by the model, Pydantic AI will generate a random one.
     """

+    _: KW_ONLY
+
+    id: str | None = None
+    """An optional identifier of the tool call part, separate from the tool call ID.
+
+    This is used by some APIs like OpenAI Responses."""
+
     def args_as_dict(self) -> dict[str, Any]:
         """Return the arguments as a Python dictionary.

pydantic_ai_slim/pydantic_ai/models/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -691,6 +691,7 @@ def infer_model(model: Model | KnownModelName | str) -> Model:  # noqa: C901
         'together',
         'vercel',
         'litellm',
+        'nebius',
     ):
         from .openai import OpenAIChatModel
0 commit comments
