Commit e13791f

cleaning up after bad merge

1 parent 20a4a6a
File tree: 11 files changed (+70, -12 lines)

README.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
  [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

  2. **Model-agnostic**:
- Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).
+ Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).

  3. **Seamless Observability**:
  Tightly [integrates](https://ai.pydantic.dev/logfire) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](https://ai.pydantic.dev/logfire#alternative-observability-backends).

docs/api/providers.md

Lines changed: 2 additions & 0 deletions
@@ -41,3 +41,5 @@
  ::: pydantic_ai.providers.ollama.OllamaProvider

  ::: pydantic_ai.providers.litellm.LiteLLMProvider
+
+ ::: pydantic_ai.providers.nebius.NebiusProvider

docs/index.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
  [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

  2. **Model-agnostic**:
- Supports virtually every [model](models/overview.md) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel. If your favorite model or provider is not listed, you can easily implement a [custom model](models/overview.md#custom-models).
+ Supports virtually every [model](models/overview.md) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](models/overview.md#custom-models).

  3. **Seamless Observability**:
  Tightly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](logfire.md#alternative-observability-backends).

docs/models/openai.md

Lines changed: 32 additions & 0 deletions
@@ -608,3 +608,35 @@ print(result.output)
  #> The capital of France is Paris.
  ...
  ```
+
+ ### Nebius AI Studio
+
+ Go to [Nebius AI Studio](https://studio.nebius.com/) and create an API key.
+
+ Once you've set the `NEBIUS_API_KEY` environment variable, you can run the following:
+
+ ```python
+ from pydantic_ai import Agent
+
+ agent = Agent('nebius:Qwen/Qwen3-32B-fast')
+ result = agent.run_sync('What is the capital of France?')
+ print(result.output)
+ #> The capital of France is Paris.
+ ```
+
+ If you need to configure the provider, you can use the [`NebiusProvider`][pydantic_ai.providers.nebius.NebiusProvider] class:
+
+ ```python
+ from pydantic_ai import Agent
+ from pydantic_ai.models.openai import OpenAIChatModel
+ from pydantic_ai.providers.nebius import NebiusProvider
+
+ model = OpenAIChatModel(
+     'Qwen/Qwen3-32B-fast',
+     provider=NebiusProvider(api_key='your-nebius-api-key'),
+ )
+ agent = Agent(model)
+ result = agent.run_sync('What is the capital of France?')
+ print(result.output)
+ #> The capital of France is Paris.
+ ```

pydantic_ai_slim/pydantic_ai/_parts_manager.py

Lines changed: 3 additions & 0 deletions
@@ -312,6 +312,7 @@ def handle_tool_call_part(
          tool_name: str,
          args: str | dict[str, Any] | None,
          tool_call_id: str | None = None,
+         id: str | None = None,
      ) -> ModelResponseStreamEvent:
          """Immediately create or fully-overwrite a ToolCallPart with the given information.
@@ -323,6 +324,7 @@ def handle_tool_call_part(
              tool_name: The name of the tool being invoked.
              args: The arguments for the tool call, either as a string, a dictionary, or None.
              tool_call_id: An optional string identifier for this tool call.
+             id: An optional identifier for this tool call part.

          Returns:
              ModelResponseStreamEvent: A `PartStartEvent` indicating that a new tool call part
@@ -332,6 +334,7 @@ def handle_tool_call_part(
              tool_name=tool_name,
              args=args,
              tool_call_id=tool_call_id or _generate_tool_call_id(),
+             id=id,
          )
          if vendor_part_id is None:
              # vendor_part_id is None, so we unconditionally append a new ToolCallPart to the end of the list
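Note: the parts manager now forwards the new `id` through to the `ToolCallPart` it creates. Below is a minimal sketch of a call with the new argument, not taken from this commit; it assumes the manager class is `ModelResponsePartsManager` (the class this module defines) and that it can be constructed without arguments, and the literal values are made up.

```python
# Minimal sketch of handle_tool_call_part with the new `id` keyword.
# Assumption: ModelResponsePartsManager() takes no constructor arguments;
# only parameters visible in the hunk above are used.
from pydantic_ai._parts_manager import ModelResponsePartsManager

manager = ModelResponsePartsManager()
event = manager.handle_tool_call_part(
    vendor_part_id='fc_123',    # vendor-specific id used to track streamed parts
    tool_name='get_weather',
    args='{"city": "Paris"}',
    tool_call_id='call_abc',    # the model-facing tool call id
    id='fc_123',                # the new optional part identifier added here
)
print(type(event).__name__)     # expected: PartStartEvent for the new ToolCallPart
```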

pydantic_ai_slim/pydantic_ai/messages.py

Lines changed: 7 additions & 0 deletions
@@ -1052,6 +1052,13 @@ class BaseToolCallPart:
      In case the tool call id is not provided by the model, Pydantic AI will generate a random one.
      """

+     _: KW_ONLY
+
+     id: str | None = None
+     """An optional identifier of the tool call part, separate from the tool call ID.
+
+     This is used by some APIs like OpenAI Responses."""
+
      def args_as_dict(self) -> dict[str, Any]:
          """Return the arguments as a Python dictionary.
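Note: the new `id` field is keyword-only (via `_: KW_ONLY`) and lives alongside `tool_call_id` rather than being packed into it. A minimal sketch using only names visible in this diff; the literal values are made up.

```python
# Minimal sketch of the new keyword-only `id` field on tool call parts.
from pydantic_ai.messages import ToolCallPart

part = ToolCallPart(
    'get_weather',              # tool_name
    '{"city": "Paris"}',        # args as a JSON string
    tool_call_id='call_abc',    # the model-facing tool call id
    id='fc_123',                # optional part id, e.g. from the OpenAI Responses API
)
print(part.tool_call_id, part.id)
#> call_abc fc_123
```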

pydantic_ai_slim/pydantic_ai/models/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -691,6 +691,7 @@ def infer_model(model: Model | KnownModelName | str) -> Model:  # noqa: C901
          'together',
          'vercel',
          'litellm',
+         'nebius',
      ):
          from .openai import OpenAIChatModel
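Note: with `'nebius'` in the list of OpenAI-compatible provider prefixes, `infer_model` can resolve a `nebius:` model string. A minimal sketch; the model name is the one used in the docs diff above, and `NEBIUS_API_KEY` is assumed to be set since the provider is constructed during inference.

```python
# Minimal sketch of the effect of this one-line change.
# Assumption: NEBIUS_API_KEY is set in the environment.
from pydantic_ai.models import infer_model

model = infer_model('nebius:Qwen/Qwen3-32B-fast')
print(type(model).__name__)  # an OpenAIChatModel backed by the Nebius provider
```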

pydantic_ai_slim/pydantic_ai/models/openai.py

Lines changed: 16 additions & 10 deletions
@@ -283,6 +283,7 @@ def __init__(
              'together',
              'vercel',
              'litellm',
+             'nebius',
          ]
          | Provider[AsyncOpenAI] = 'openai',
          profile: ModelProfileSpec | None = None,
@@ -311,6 +312,7 @@ def __init__(
              'together',
              'vercel',
              'litellm',
+             'nebius',
          ]
          | Provider[AsyncOpenAI] = 'openai',
          profile: ModelProfileSpec | None = None,
@@ -338,6 +340,7 @@ def __init__(
              'together',
              'vercel',
              'litellm',
+             'nebius',
          ]
          | Provider[AsyncOpenAI] = 'openai',
          profile: ModelProfileSpec | None = None,
@@ -898,7 +901,7 @@ def __init__(
          self,
          model_name: OpenAIModelName,
          *,
-         provider: Literal['openai', 'deepseek', 'azure', 'openrouter', 'grok', 'fireworks', 'together']
+         provider: Literal['openai', 'deepseek', 'azure', 'openrouter', 'grok', 'fireworks', 'together', 'nebius']
          | Provider[AsyncOpenAI] = 'openai',
          profile: ModelProfileSpec | None = None,
          settings: ModelSettings | None = None,
@@ -1004,7 +1007,12 @@ def _process_response(  # noqa: C901
              items.append(TextPart(content.text, id=item.id))
          elif isinstance(item, responses.ResponseFunctionToolCall):
              items.append(
-                 ToolCallPart(item.name, item.arguments, tool_call_id=_combine_tool_call_ids(item.call_id, item.id))
+                 ToolCallPart(
+                     item.name,
+                     item.arguments,
+                     tool_call_id=item.call_id,
+                     id=item.id,
+                 )
              )
          elif isinstance(item, responses.ResponseCodeInterpreterToolCall):
              call_part, return_part, file_parts = _map_code_interpreter_tool_call(item, self.system)
@@ -1383,6 +1391,7 @@ async def _map_messages(  # noqa: C901
          elif isinstance(item, ToolCallPart):
              call_id = _guard_tool_call_id(t=item)
              call_id, id = _split_combined_tool_call_id(call_id)
+             id = id or item.id

              param = responses.ResponseFunctionToolCallParam(
                  name=item.tool_name,
@@ -1783,7 +1792,8 @@ async def _get_event_iterator(self) -> AsyncIterator[ModelResponseStreamEvent]:
              vendor_part_id=chunk.item.id,
              tool_name=chunk.item.name,
              args=chunk.item.arguments,
-             tool_call_id=_combine_tool_call_ids(chunk.item.call_id, chunk.item.id),
+             tool_call_id=chunk.item.call_id,
+             id=chunk.item.id,
          )
      elif isinstance(chunk.item, responses.ResponseReasoningItem):
          pass
@@ -2058,18 +2068,14 @@ def _map_usage(response: chat.ChatCompletion | ChatCompletionChunk | responses.R
      return u


- def _combine_tool_call_ids(call_id: str, id: str | None) -> str:
-     # When reasoning, the Responses API requires the `ResponseFunctionToolCall` to be returned with both the `call_id` and `id` fields.
-     # Our `ToolCallPart` has only the `call_id` field, so we combine the two fields into a single string.
-     return f'{call_id}|{id}' if id else call_id
-
-
  def _split_combined_tool_call_id(combined_id: str) -> tuple[str, str | None]:
+     # When reasoning, the Responses API requires the `ResponseFunctionToolCall` to be returned with both the `call_id` and `id` fields.
+     # Before our `ToolCallPart` gained the `id` field alongside the `tool_call_id` field, we combined the two fields into a single string stored on `tool_call_id`.
      if '|' in combined_id:
          call_id, id = combined_id.split('|', 1)
          return call_id, id
      else:
-         return combined_id, None  # pragma: no cover
+         return combined_id, None


  def _map_code_interpreter_tool_call(
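Note: this commit stops packing the Responses API `call_id` and `id` into a single `call_id|id` string and instead stores them on separate `ToolCallPart` fields; `_split_combined_tool_call_id` is kept only to read messages recorded before the change. Below is a standalone re-implementation of that split for illustration only (the real helper is private to `pydantic_ai.models.openai`):

```python
# Illustration of the backward-compatibility split kept by this commit.
# Standalone re-implementation; it does not import the private helper.
def split_combined_tool_call_id(combined_id: str) -> tuple[str, str | None]:
    """Split a legacy 'call_id|id' string into its two parts."""
    if '|' in combined_id:
        call_id, id = combined_id.split('|', 1)
        return call_id, id
    return combined_id, None


print(split_combined_tool_call_id('call_abc|fc_123'))
#> ('call_abc', 'fc_123')
print(split_combined_tool_call_id('call_abc'))
#> ('call_abc', None)
```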

pydantic_ai_slim/pydantic_ai/providers/__init__.py

Lines changed: 4 additions & 0 deletions
@@ -142,6 +142,10 @@ def infer_provider_class(provider: str) -> type[Provider[Any]]:  # noqa: C901
          from .litellm import LiteLLMProvider

          return LiteLLMProvider
+     elif provider == 'nebius':
+         from .nebius import NebiusProvider
+
+         return NebiusProvider
      else:  # pragma: no cover
          raise ValueError(f'Unknown provider: {provider}')
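Note: `infer_provider_class('nebius')` now resolves to `NebiusProvider`. A minimal sketch of the lookup; it only resolves the class and does not instantiate it, so no API key is needed.

```python
# Minimal sketch: the string-to-class lookup added by this hunk.
# Instantiating NebiusProvider would require NEBIUS_API_KEY; the lookup alone does not.
from pydantic_ai.providers import infer_provider_class

provider_cls = infer_provider_class('nebius')
print(provider_cls.__name__)
#> NebiusProvider
```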

tests/providers/test_provider_names.py

Lines changed: 2 additions & 0 deletions
@@ -28,6 +28,7 @@
  from pydantic_ai.providers.litellm import LiteLLMProvider
  from pydantic_ai.providers.mistral import MistralProvider
  from pydantic_ai.providers.moonshotai import MoonshotAIProvider
+ from pydantic_ai.providers.nebius import NebiusProvider
  from pydantic_ai.providers.ollama import OllamaProvider
  from pydantic_ai.providers.openai import OpenAIProvider
  from pydantic_ai.providers.openrouter import OpenRouterProvider
@@ -54,6 +55,7 @@
      ('github', GitHubProvider, 'GITHUB_API_KEY'),
      ('ollama', OllamaProvider, 'OLLAMA_BASE_URL'),
      ('litellm', LiteLLMProvider, None),
+     ('nebius', NebiusProvider, 'NEBIUS_API_KEY'),
  ]

  if not imports_successful():
