Commit 59a7c70

eliasto and DouweM authored

Add OVHcloud AI Endpoints provider (#3188)

Co-authored-by: Douwe Maan <[email protected]>

1 parent c317d5e commit 59a7c70

File tree

12 files changed: +257 −3 lines changed

README.md

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
 [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

 2. **Model-agnostic**:
-Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).
+Supports virtually every [model](https://ai.pydantic.dev/models/overview) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius, OVHcloud. If your favorite model or provider is not listed, you can easily implement a [custom model](https://ai.pydantic.dev/models/overview#custom-models).

 3. **Seamless Observability**:
 Tightly [integrates](https://ai.pydantic.dev/logfire) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](https://ai.pydantic.dev/logfire#alternative-observability-backends).

docs/api/providers.md

Lines changed: 2 additions & 0 deletions

@@ -43,3 +43,5 @@
 ::: pydantic_ai.providers.litellm.LiteLLMProvider

 ::: pydantic_ai.providers.nebius.NebiusProvider
+
+::: pydantic_ai.providers.ovhcloud.OVHcloudProvider

docs/index.md

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ We built Pydantic AI with one simple aim: to bring that FastAPI feeling to GenAI
 [Pydantic Validation](https://docs.pydantic.dev/latest/) is the validation layer of the OpenAI SDK, the Google ADK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more. _Why use the derivative when you can go straight to the source?_ :smiley:

 2. **Model-agnostic**:
-Supports virtually every [model](models/overview.md) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius. If your favorite model or provider is not listed, you can easily implement a [custom model](models/overview.md#custom-models).
+Supports virtually every [model](models/overview.md) and provider: OpenAI, Anthropic, Gemini, DeepSeek, Grok, Cohere, Mistral, and Perplexity; Azure AI Foundry, Amazon Bedrock, Google Vertex AI, Ollama, LiteLLM, Groq, OpenRouter, Together AI, Fireworks AI, Cerebras, Hugging Face, GitHub, Heroku, Vercel, Nebius, OVHcloud. If your favorite model or provider is not listed, you can easily implement a [custom model](models/overview.md#custom-models).

 3. **Seamless Observability**:
 Tightly [integrates](logfire.md) with [Pydantic Logfire](https://pydantic.dev/logfire), our general-purpose OpenTelemetry observability platform, for real-time debugging, evals-based performance monitoring, and behavior, tracing, and cost tracking. If you already have an observability platform that supports OTel, you can [use that too](logfire.md#alternative-observability-backends).

docs/models/openai.md

Lines changed: 34 additions & 0 deletions

### OVHcloud AI Endpoints

To use OVHcloud AI Endpoints, you need to create an API key. To do so, go to the [OVHcloud manager](https://ovh.com/manager), then navigate to Public Cloud > AI Endpoints > API keys. Click `Create a new API key` and copy your new key.

You can explore the [catalog](https://endpoints.ai.cloud.ovh.net/catalog) to see which models are available.

You can set the `OVHCLOUD_API_KEY` environment variable and use [`OVHcloudProvider`][pydantic_ai.providers.ovhcloud.OVHcloudProvider] by name:

```python
from pydantic_ai import Agent

agent = Agent('ovhcloud:gpt-oss-120b')
result = agent.run_sync('What is the capital of France?')
print(result.output)
#> The capital of France is Paris.
```

If you need to configure the provider, you can use the [`OVHcloudProvider`][pydantic_ai.providers.ovhcloud.OVHcloudProvider] class:

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.ovhcloud import OVHcloudProvider

model = OpenAIChatModel(
    'gpt-oss-120b',
    provider=OVHcloudProvider(api_key='your-api-key'),
)
agent = Agent(model)
result = agent.run_sync('What is the capital of France?')
print(result.output)
#> The capital of France is Paris.
```
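The `'ovhcloud:gpt-oss-120b'` shorthand packs a provider name and a model name into one string. A minimal sketch of that kind of parsing (illustrative only — the actual parsing is internal to Pydantic AI and may differ):

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split a 'provider:model' string into its two parts.

    Illustrative sketch; the real shorthand handling lives inside Pydantic AI.
    """
    provider, sep, model_name = model.partition(':')
    if not sep:
        raise ValueError(f'Expected a provider:model string, got {model!r}')
    return provider, model_name


print(split_model_string('ovhcloud:gpt-oss-120b'))
#> ('ovhcloud', 'gpt-oss-120b')
```

Because the model name is passed through unchanged, any model from the OVHcloud catalog can be used after the `ovhcloud:` prefix.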

docs/models/overview.md

Lines changed: 1 addition & 0 deletions

@@ -29,6 +29,7 @@ In addition, many providers are compatible with the OpenAI API, and can be used
 - [Cerebras](openai.md#cerebras)
 - [LiteLLM](openai.md#litellm)
 - [Nebius AI Studio](openai.md#nebius-ai-studio)
+- [OVHcloud AI Endpoints](openai.md#ovhcloud-ai-endpoints)

 Pydantic AI also comes with [`TestModel`](../api/models/test.md) and [`FunctionModel`](../api/models/function.md)
 for testing and development.

pydantic_ai_slim/pydantic_ai/models/__init__.py

Lines changed: 1 addition & 0 deletions

@@ -688,6 +688,7 @@ def infer_model(model: Model | KnownModelName | str) -> Model:  # noqa: C901
         'vercel',
         'litellm',
         'nebius',
+        'ovhcloud',
     ):
         from .openai import OpenAIChatModel

pydantic_ai_slim/pydantic_ai/models/openai.py

Lines changed: 6 additions & 1 deletion

@@ -285,6 +285,7 @@ def __init__(
             'vercel',
             'litellm',
             'nebius',
+            'ovhcloud',
         ]
         | Provider[AsyncOpenAI] = 'openai',
         profile: ModelProfileSpec | None = None,
@@ -314,6 +315,7 @@ def __init__(
             'vercel',
             'litellm',
             'nebius',
+            'ovhcloud',
         ]
         | Provider[AsyncOpenAI] = 'openai',
         profile: ModelProfileSpec | None = None,
@@ -342,6 +344,7 @@ def __init__(
             'vercel',
             'litellm',
             'nebius',
+            'ovhcloud',
         ]
         | Provider[AsyncOpenAI] = 'openai',
         profile: ModelProfileSpec | None = None,
@@ -903,7 +906,9 @@ def __init__(
         self,
         model_name: OpenAIModelName,
         *,
-        provider: Literal['openai', 'deepseek', 'azure', 'openrouter', 'grok', 'fireworks', 'together', 'nebius']
+        provider: Literal[
+            'openai', 'deepseek', 'azure', 'openrouter', 'grok', 'fireworks', 'together', 'nebius', 'ovhcloud'
+        ]
         | Provider[AsyncOpenAI] = 'openai',
         profile: ModelProfileSpec | None = None,
         settings: ModelSettings | None = None,

pydantic_ai_slim/pydantic_ai/providers/__init__.py

Lines changed: 4 additions & 0 deletions

@@ -146,6 +146,10 @@ def infer_provider_class(provider: str) -> type[Provider[Any]]:  # noqa: C901
         from .nebius import NebiusProvider

         return NebiusProvider
+    elif provider == 'ovhcloud':
+        from .ovhcloud import OVHcloudProvider
+
+        return OVHcloudProvider
     else:  # pragma: no cover
         raise ValueError(f'Unknown provider: {provider}')
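The new branch follows the same lazy-import pattern as the other providers: each provider module is imported only when that provider is actually requested, so optional dependencies (such as `openai`) stay out of the package's import path. A generic sketch of the pattern, using stdlib modules as stand-ins for the provider classes:

```python
def infer_codec_class(kind: str) -> type:
    """Lazy-import dispatch: import a module only for the requested kind.

    Stand-in example using stdlib modules; the real infer_provider_class
    maps provider names to Pydantic AI provider classes the same way.
    """
    if kind == 'json':
        import json  # imported only when 'json' is requested

        return json.JSONDecoder
    elif kind == 'csv':
        import csv

        return csv.DictReader
    else:
        raise ValueError(f'Unknown kind: {kind}')
```

The payoff is that importing the dispatching module itself is cheap, and a missing optional dependency only surfaces when its provider is used.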

pydantic_ai_slim/pydantic_ai/providers/ovhcloud.py

Lines changed: 95 additions & 0 deletions

```python
from __future__ import annotations as _annotations

import os
from typing import overload

import httpx

from pydantic_ai import ModelProfile
from pydantic_ai.exceptions import UserError
from pydantic_ai.models import cached_async_http_client
from pydantic_ai.profiles.deepseek import deepseek_model_profile
from pydantic_ai.profiles.harmony import harmony_model_profile
from pydantic_ai.profiles.meta import meta_model_profile
from pydantic_ai.profiles.mistral import mistral_model_profile
from pydantic_ai.profiles.openai import OpenAIJsonSchemaTransformer, OpenAIModelProfile
from pydantic_ai.profiles.qwen import qwen_model_profile
from pydantic_ai.providers import Provider

try:
    from openai import AsyncOpenAI
except ImportError as _import_error:  # pragma: no cover
    raise ImportError(
        'Please install the `openai` package to use OVHcloud AI Endpoints provider. '
        'You can use the `openai` optional group — `pip install "pydantic-ai-slim[openai]"`'
    ) from _import_error


class OVHcloudProvider(Provider[AsyncOpenAI]):
    """Provider for OVHcloud AI Endpoints."""

    @property
    def name(self) -> str:
        return 'ovhcloud'

    @property
    def base_url(self) -> str:
        return 'https://oai.endpoints.kepler.ai.cloud.ovh.net/v1'

    @property
    def client(self) -> AsyncOpenAI:
        return self._client

    def model_profile(self, model_name: str) -> ModelProfile | None:
        model_name = model_name.lower()

        prefix_to_profile = {
            'llama': meta_model_profile,
            'meta-': meta_model_profile,
            'deepseek': deepseek_model_profile,
            'mistral': mistral_model_profile,
            'gpt': harmony_model_profile,
            'qwen': qwen_model_profile,
        }

        profile = None
        for prefix, profile_func in prefix_to_profile.items():
            if model_name.startswith(prefix):
                profile = profile_func(model_name)

        # As the OVHcloud AI Endpoints API is OpenAI-compatible, let's assume we also need OpenAIJsonSchemaTransformer.
        return OpenAIModelProfile(json_schema_transformer=OpenAIJsonSchemaTransformer).update(profile)

    @overload
    def __init__(self) -> None: ...

    @overload
    def __init__(self, *, api_key: str) -> None: ...

    @overload
    def __init__(self, *, api_key: str, http_client: httpx.AsyncClient) -> None: ...

    @overload
    def __init__(self, *, openai_client: AsyncOpenAI | None = None) -> None: ...

    def __init__(
        self,
        *,
        api_key: str | None = None,
        openai_client: AsyncOpenAI | None = None,
        http_client: httpx.AsyncClient | None = None,
    ) -> None:
        api_key = api_key or os.getenv('OVHCLOUD_API_KEY')
        if not api_key and openai_client is None:
            raise UserError(
                'Set the `OVHCLOUD_API_KEY` environment variable or pass it via '
                '`OVHcloudProvider(api_key=...)` to use OVHcloud AI Endpoints provider.'
            )

        if openai_client is not None:
            self._client = openai_client
        elif http_client is not None:
            self._client = AsyncOpenAI(base_url=self.base_url, api_key=api_key, http_client=http_client)
        else:
            http_client = cached_async_http_client(provider='ovhcloud')
            self._client = AsyncOpenAI(base_url=self.base_url, api_key=api_key, http_client=http_client)
```
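`model_profile` above selects a profile family by model-name prefix, falling back to the OpenAI-compatible defaults when no prefix matches. The lookup can be sketched standalone, with the profile functions replaced by plain labels for illustration (the real ones come from `pydantic_ai.profiles.*`):

```python
def pick_profile(model_name: str) -> str:
    """Standalone sketch of the prefix-based profile lookup.

    Returns a label naming the matched profile family; in the real provider
    these labels are profile functions imported from pydantic_ai.profiles.*.
    """
    model_name = model_name.lower()
    prefix_to_profile = {
        'llama': 'meta',
        'meta-': 'meta',
        'deepseek': 'deepseek',
        'mistral': 'mistral',
        'gpt': 'harmony',
        'qwen': 'qwen',
    }
    for prefix, family in prefix_to_profile.items():
        if model_name.startswith(prefix):
            return family
    # No prefix matched: fall back to the OpenAI-compatible default profile.
    return 'openai-default'


print(pick_profile('gpt-oss-120b'))
#> harmony
```

Lower-casing the model name first is what lets catalog names like `Llama-3.3-70B-Instruct` match the `llama` prefix.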

tests/providers/test_ovhcloud.py

Lines changed: 109 additions & 0 deletions

```python
import re

import httpx
import pytest
from pytest_mock import MockerFixture

from pydantic_ai._json_schema import InlineDefsJsonSchemaTransformer
from pydantic_ai.exceptions import UserError
from pydantic_ai.profiles.deepseek import deepseek_model_profile
from pydantic_ai.profiles.harmony import harmony_model_profile
from pydantic_ai.profiles.meta import meta_model_profile
from pydantic_ai.profiles.mistral import mistral_model_profile
from pydantic_ai.profiles.openai import OpenAIJsonSchemaTransformer
from pydantic_ai.profiles.qwen import qwen_model_profile

from ..conftest import TestEnv, try_import

with try_import() as imports_successful:
    import openai

    from pydantic_ai.providers.ovhcloud import OVHcloudProvider


pytestmark = [
    pytest.mark.skipif(not imports_successful(), reason='openai not installed'),
    pytest.mark.vcr,
    pytest.mark.anyio,
]


def test_ovhcloud_provider():
    provider = OVHcloudProvider(api_key='your-api-key')
    assert provider.name == 'ovhcloud'
    assert provider.base_url == 'https://oai.endpoints.kepler.ai.cloud.ovh.net/v1'
    assert isinstance(provider.client, openai.AsyncOpenAI)
    assert provider.client.api_key == 'your-api-key'


def test_ovhcloud_provider_need_api_key(env: TestEnv) -> None:
    env.remove('OVHCLOUD_API_KEY')
    with pytest.raises(
        UserError,
        match=re.escape(
            'Set the `OVHCLOUD_API_KEY` environment variable or pass it via '
            '`OVHcloudProvider(api_key=...)` to use OVHcloud AI Endpoints provider.'
        ),
    ):
        OVHcloudProvider()


def test_ovhcloud_pass_openai_client() -> None:
    openai_client = openai.AsyncOpenAI(api_key='your-api-key')
    provider = OVHcloudProvider(openai_client=openai_client)
    assert provider.client == openai_client


def test_ovhcloud_pass_http_client():
    http_client = httpx.AsyncClient()
    provider = OVHcloudProvider(api_key='your-api-key', http_client=http_client)
    assert isinstance(provider.client, openai.AsyncOpenAI)
    assert provider.client.api_key == 'your-api-key'


def test_ovhcloud_model_profile(mocker: MockerFixture):
    provider = OVHcloudProvider(api_key='your-api-key')

    ns = 'pydantic_ai.providers.ovhcloud'

    # Mock all profile functions
    deepseek_mock = mocker.patch(f'{ns}.deepseek_model_profile', wraps=deepseek_model_profile)
    harmony_mock = mocker.patch(f'{ns}.harmony_model_profile', wraps=harmony_model_profile)
    meta_mock = mocker.patch(f'{ns}.meta_model_profile', wraps=meta_model_profile)
    mistral_mock = mocker.patch(f'{ns}.mistral_model_profile', wraps=mistral_model_profile)
    qwen_mock = mocker.patch(f'{ns}.qwen_model_profile', wraps=qwen_model_profile)

    # Test the deepseek profile
    profile = provider.model_profile('DeepSeek-R1-Distill-Llama-70B')
    deepseek_mock.assert_called_with('deepseek-r1-distill-llama-70b')
    assert profile is not None
    assert profile.json_schema_transformer == OpenAIJsonSchemaTransformer

    # Test the harmony profile (for OpenAI gpt-oss models)
    profile = provider.model_profile('gpt-oss-120b')
    harmony_mock.assert_called_with('gpt-oss-120b')
    assert profile is not None
    assert profile.json_schema_transformer == OpenAIJsonSchemaTransformer

    # Test the meta profile
    meta_profile = provider.model_profile('Llama-3.3-70B-Instruct')
    meta_mock.assert_called_with('llama-3.3-70b-instruct')
    assert meta_profile is not None
    assert meta_profile.json_schema_transformer == InlineDefsJsonSchemaTransformer

    # Test the mistral profile
    profile = provider.model_profile('Mistral-Small-3.2-24B-Instruct-2506')
    mistral_mock.assert_called_with('mistral-small-3.2-24b-instruct-2506')
    assert profile is not None
    assert profile.json_schema_transformer == OpenAIJsonSchemaTransformer

    # Test the qwen profile
    qwen_profile = provider.model_profile('Qwen3-32B')
    qwen_mock.assert_called_with('qwen3-32b')
    assert qwen_profile is not None
    assert qwen_profile.json_schema_transformer == InlineDefsJsonSchemaTransformer

    # Test an unknown model (falls back to the OpenAI defaults)
    unknown_profile = provider.model_profile('unknown-model')
    assert unknown_profile is not None
    assert unknown_profile.json_schema_transformer == OpenAIJsonSchemaTransformer
```