File: pages/public_cloud/ai_machine_learning/endpoints_guide_07_virtual_models/guide.en-gb.md (6 additions, 8 deletions)
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Using Virtual Models
 excerpt: Learn how to use OVHcloud AI Endpoints Virtual Models
-updated: 2025-08-14
+updated: 2025-08-18
 ---
 
 > [!primary]
@@ -11,13 +11,13 @@ updated: 2025-08-14
 
 ## Introduction
 
-Choosing the right Large Language Model (LLM) is not always straightforward. Models vary in strengths, performance, cost, and licensing, and new ones appear regularly—often outperforming previous options. This rapid evolution makes it essential to match your choice to your specific needs, while staying ready to adapt as better models emerge.
+Choosing the right Large Language Model (LLM) is not always straightforward. Models vary in strengths, performance, cost, and licensing, and new ones appear regularly, often outperforming previous options. This rapid evolution makes it essential to match your choice to your specific needs, while staying ready to adapt as better models emerge.
 
-To make this easier, we developed a system of virtual models where instead of requesting a hard-coded model, you specify the expected specifications of the model you need (size, price, etc.) and we resolve it to the currently best matching model in our catalog. In this guide, we'll see the different capabilities of this feature and how to use it with your OpenAI compatible code.
+To make this easier, we developed a system of virtual models. Instead of requesting a hard-coded model, you specify the expected specifications of the model you need (size, price, etc.) and the system automatically maps your request to the best available match in our catalog. In this guide, you will learn about the different capabilities of this feature and how to use it with your OpenAI-compatible code.
 
 ## Requirements
 
-The examples provided during this guide can be used with one of the following environments:
+The examples provided in this guide can be used with one of the following environments:
 
 > [!tabs]
 > **Python**
@@ -42,7 +42,7 @@ Follow the instructions in the [AI Endpoints - Getting Started](/pages/public_cl
 
 ## Model DSL
 
-When you request a LLM generation through our unified endpoint, you can provide in the OpenAI-compliant `model` field a model DSL query instead of a hardcoded model name.
+When you request an LLM generation through our unified endpoint, you can provide in the OpenAI-compliant `model` field a model DSL query instead of a hardcoded model name.
 
 These queries are divided into three parts: tag, ranker, and condition:
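As a side note on the mechanism this hunk describes: an OpenAI-compatible chat completion request stays unchanged except for the `model` field, which carries the DSL query. The sketch below only builds such a payload; the query string `"tag@ranker{condition}"` is a made-up placeholder, not the documented DSL syntax, so refer to the guide's Model DSL section for real queries.

```python
import json

# Minimal sketch: the request body is a standard OpenAI-compatible
# /v1/chat/completions payload. The only difference with a classic call
# is that `model` holds a DSL query instead of a hardcoded model name.
# "tag@ranker{condition}" below is a hypothetical placeholder string.
def build_chat_request(model_query: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model_query,  # DSL query resolved server-side to a concrete model
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("tag@ranker{condition}", "Hello!")
print(json.dumps(payload, indent=2))
```

Because resolution happens server-side, the client code never needs to change when the catalog's best matching model does.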
@@ -129,12 +129,10 @@ The following code samples provide a simple example on how to query our API with
 ## Conclusion
 
 Using OVHcloud AI Endpoints with virtual models allows you to stay up to date with the best available LLMs without having to change your code whenever a new release arrives.
-By defining your requirements through tags, rankers, and conditions, you can ensure your application always runs on the most suitable model for your needs—whether you prioritize speed, cost, size, or capabilities. This flexibility makes it easier to build, maintain, and scale AI-powered solutions over time.
+By defining your requirements through tags, rankers, and conditions, you can ensure your application always runs on the most suitable model for your needs, whether you prioritize speed, cost, size, or capabilities. This flexibility makes it easier to build, maintain, and scale AI-powered solutions over time.
 
 ## Go further
 
-Browse the full [AI Endpoints documentation](/products/public-cloud-ai-and-machine-learning-ai-endpoints) to further understand the main concepts and get started.
-
 To discover how to build complete and powerful applications using AI Endpoints, explore our dedicated [AI Endpoints guides](/products/public-cloud-ai-and-machine-learning-ai-endpoints).
 
 If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project.
File: pages/public_cloud/ai_machine_learning/endpoints_guide_07_virtual_models/guide.fr-fr.md (8 additions, 9 deletions)
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Modèles virtuels
-excerpt: Découvrez comment utiliser les modèles virtuels d'AI Endpoints
-updated: 2025-08-14
+excerpt: "Découvrez comment utiliser les modèles virtuels d'AI Endpoints"
+updated: 2025-08-18
 ---
 
 > [!primary]
@@ -11,13 +11,13 @@ updated: 2025-08-14
 
 ## Introduction
 
-Choosing the right Large Language Model (LLM) is not always straightforward. Models vary in strengths, performance, cost, and licensing, and new ones appear regularly—often outperforming previous options. This rapid evolution makes it essential to match your choice to your specific needs, while staying ready to adapt as better models emerge.
+Choosing the right Large Language Model (LLM) is not always straightforward. Models vary in strengths, performance, cost, and licensing, and new ones appear regularly, often outperforming previous options. This rapid evolution makes it essential to match your choice to your specific needs, while staying ready to adapt as better models emerge.
 
-To make this easier, we developed a system of virtual models where instead of requesting a hard-coded model, you specify the expected specifications of the model you need (size, price, etc.) and we resolve it to the currently best matching model in our catalog. In this guide, we'll see the different capabilities of this feature and how to use it with your OpenAI compatible code.
+To make this easier, we developed a system of virtual models. Instead of requesting a hard-coded model, you specify the expected specifications of the model you need (size, price, etc.) and the system automatically maps your request to the best available match in our catalog. In this guide, you will learn about the different capabilities of this feature and how to use it with your OpenAI-compatible code.
 
 ## Requirements
 
-The examples provided during this guide can be used with one of the following environments:
+The examples provided in this guide can be used with one of the following environments:
 
 > [!tabs]
 > **Python**
@@ -42,7 +42,7 @@ Follow the instructions in the [AI Endpoints - Getting Started](/pages/public_cl
 
 ## Model DSL
 
-When you request a LLM generation through our unified endpoint, you can provide in the OpenAI-compliant `model` field a model DSL query instead of a hardcoded model name.
+When you request an LLM generation through our unified endpoint, you can provide in the OpenAI-compliant `model` field a model DSL query instead of a hardcoded model name.
 
 These queries are divided into three parts: tag, ranker, and condition:
@@ -129,12 +129,10 @@ The following code samples provide a simple example on how to query our API with
 ## Conclusion
 
 Using OVHcloud AI Endpoints with virtual models allows you to stay up to date with the best available LLMs without having to change your code whenever a new release arrives.
-By defining your requirements through tags, rankers, and conditions, you can ensure your application always runs on the most suitable model for your needs—whether you prioritize speed, cost, size, or capabilities. This flexibility makes it easier to build, maintain, and scale AI-powered solutions over time.
+By defining your requirements through tags, rankers, and conditions, you can ensure your application always runs on the most suitable model for your needs, whether you prioritize speed, cost, size, or capabilities. This flexibility makes it easier to build, maintain, and scale AI-powered solutions over time.
 
 ## Go further
 
-Browse the full [AI Endpoints documentation](/products/public-cloud-ai-and-machine-learning-ai-endpoints) to further understand the main concepts and get started.
-
 To discover how to build complete and powerful applications using AI Endpoints, explore our dedicated [AI Endpoints guides](/products/public-cloud-ai-and-machine-learning-ai-endpoints).
 
 If you need training or technical assistance to implement our solutions, contact your sales representative or click on [this link](/links/professional-services) to get a quote and ask our Professional Services experts for a custom analysis of your project.
@@ -144,3 +142,4 @@ If you need training or technical assistance to implement our solutions, contact
 Please send us your questions, feedback and suggestions to improve the service:
 
 - On the OVHcloud [Discord server](https://discord.gg/ovhcloud).