Commit 189fcc1

1st proofread guide.fr-fr.md
1 parent c9abdd3 commit 189fcc1

File tree

1 file changed: +34 −20 lines

  • pages/public_cloud/ai_machine_learning/endpoints_guide_05_structured_output
pages/public_cloud/ai_machine_learning/endpoints_guide_05_structured_output/guide.fr-fr.md

Lines changed: 34 additions & 20 deletions
@@ -1,7 +1,7 @@
 ---
 title: AI Endpoints - Sorties structurées
 excerpt: Découvrez comment utiliser les sorties structurées avec OVHcloud AI Endpoints
-updated: 2025-04-28
+updated: 2025-08-05
 ---
 
 > [!primary]
@@ -15,15 +15,17 @@ updated: 2025-04-28
 
 **Structured Output** is a powerful feature that allows you to enforce specific formats for the responses from AI models. By using the `response_format` parameter in your API calls, you can define how you want the output to be structured, ensuring consistency and ease of integration with your applications.
 This is particularly useful when you need the AI model to return data in a specific JSON format.
-The [JSON schema](https://json-schema.org/) specification can be used to describe what data structure should the output adhere to, and the AI model will generate responses that match it.
+The [JSON schema](https://json-schema.org/) specification can be used to describe what data structure the output should adhere to, and the AI model will generate responses that match it.
 This feature allows for seamless integration of AI-generated data into your applications, enabling you to build robust and consistent workflows.
 
 ## Objective
 
-This documentation provides an overview on how to use structured outputs with the various AI models offered on [AI Endpoints](https://endpoints.ai.cloud.ovh.net/).
-The examples provided in this guide will be using the [Llama 3.3 70b model](https://endpoints.ai.cloud.ovh.net/models/c968b503-27fa-451d-b59d-1b0ff91d304d)
+This documentation provides an overview of how to use structured outputs with the various AI models offered on [AI Endpoints](https://endpoints.ai.cloud.ovh.net/).
+
+The examples provided in this guide will be using the [Llama 3.3 70b model](https://endpoints.ai.cloud.ovh.net/models/c968b503-27fa-451d-b59d-1b0ff91d304d).
 
 Visit our [Catalog](https://endpoints.ai.cloud.ovh.net/catalog) to find out which models are compatible with Structured Output.
+
 The output formats managed by each model are defined in the Response Format section:
 
 ![Model Specs](images/model_specs.png)
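For readers skimming the diff, the schema that the guide's examples revolve around can be written out directly. This is a hedged sketch in plain Python (no pydantic), transcribed from the `LanguageRankings` schema printed in the guide's Python example output; the variable name is ours:

```python
import json

# The JSON schema used in the guide's examples, written out as a plain dict.
# It matches the `LanguageRankings` schema that `model_json_schema()` prints
# in the Python tab's output further down the diff.
language_rankings_schema = {
    "$defs": {
        "Language": {
            "type": "object",
            "properties": {
                "name": {"title": "Name", "type": "string"},
                "website": {"title": "Website", "type": "string"},
                "ranking": {"title": "Ranking", "type": "integer"},
            },
            "required": ["name", "website", "ranking"],
            "title": "Language",
        }
    },
    "type": "object",
    "properties": {
        "languages": {
            "items": {"$ref": "#/$defs/Language"},
            "title": "Languages",
            "type": "array",
        }
    },
    "required": ["languages"],
    "title": "LanguageRankings",
}

print(json.dumps(language_rankings_schema["required"]))  # ["languages"]
```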
@@ -36,6 +38,7 @@ The examples provided during this guide can be used with one of the following en
 > **Python**
 >>
 >> A [Python](https://www.python.org/) environment with the [openai client](https://pypi.org/project/openai/) and the pydantic library installed.
+>>
 >> ```sh
 >> pip install openai pydantic
 >> ```
@@ -44,6 +47,7 @@ The examples provided during this guide can be used with one of the following en
 >>
 >> A [Node.js](https://nodejs.org/en) environment with the [request](https://www.npmjs.com/package/request) library.
 >> Request can be installed using [NPM](https://www.npmjs.com/):
+>>
 >> ```sh
 >> npm install request
 >> ```
@@ -55,25 +59,26 @@ The examples provided during this guide can be used with one of the following en
 
 ### Authentication & rate limiting
 
-Most of the examples provided in this guide are using the anonymous authentication which makes it simpler to use but may cause rate limiting issues.
+Most of the examples provided in this guide use anonymous authentication, which makes it simpler to use but may cause rate limiting issues.
 If you wish to enable authentication using your own token, simply specify your API key within the requests.
-Follow the following instructions in the [AI Endpoints - Getting Started](/pages/public_cloud/ai_machine_learning/endpoints_guide_01_getting_started) for more information on authentication.
+
+Follow the instructions in the [AI Endpoints - Getting Started](/pages/public_cloud/ai_machine_learning/endpoints_guide_01_getting_started) guide for more information on authentication.
 
 ## Instructions
 
 The `response_format` parameter of the Chat Completion API allows us to enable and configure the Structured Output features.
+
 Models that support structured output can manage the three following modes:
 
 - `{"type": "text"}`
 The default textual format. This is the same as specifying no `response_format`.
 
 - `{"type": "json_object"}`
-The JSON object format is a legacy format that was introduced with the first iteration of Structured Outputs.
-This mode is non-deterministic and allows the model to output a JSON object without strict validation.
+The JSON object format is a legacy format that was introduced with the first iteration of Structured Outputs. This mode is non-deterministic and allows the model to output a JSON object without strict validation.
 
 - `{"type": "json_schema", "json_schema": .. }`
-[JSON schema](https://json-schema.org/) is a very powerful tool used to specify and validate a JSON data structure.
-This latest kind of response_format allows us to enforce custom output formats in LLM outputs using this specification and ensure consistency and interoperability with a variety of platforms and applications.
+[JSON schema](https://json-schema.org/) is a very powerful tool used to specify and validate a JSON data structure. This latest kind of `response_format` allows us to enforce custom output formats in LLM outputs using this specification and ensure consistency and interoperability with a variety of platforms and applications.
+
 When using the JSON schema mode, outputs are deterministic and will always adhere to the schema specified.
 
 We recommend using JSON schema over JSON object whenever possible.
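To make the three modes concrete, here is a minimal, hedged Python sketch of the corresponding `response_format` payloads. The nested `name`/`schema` keys under `json_schema` follow the OpenAI client convention, and the schema body is a simplified stand-in, so treat the exact shape as an assumption to check against your model's Response Format section:

```python
# Hedged sketch: the three response_format payloads described above.
# The nested "name"/"schema" keys follow the OpenAI client convention;
# the schema body is a simplified stand-in, not the guide's full schema.
text_format = {"type": "text"}                # default, same as omitting response_format
json_object_format = {"type": "json_object"}  # legacy mode, no strict validation

json_schema_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "language_rankings",  # hypothetical schema name, for illustration
        "schema": {
            "type": "object",
            "properties": {"languages": {"type": "array"}},
            "required": ["languages"],
            "additionalProperties": False,
        },
    },
}

# Any of these dicts is passed as the `response_format` argument of an
# OpenAI-compatible Chat Completion call, e.g.:
#   client.chat.completions.create(model=..., messages=..., response_format=json_schema_format)
print(json_schema_format["type"])
```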
@@ -131,6 +136,7 @@ The following code samples provide a simple example on how to specify a JSON sch
 >> ```
 >>
 >> Output:
+>>
 >> ```sh
 >> JSON schema: {'$defs': {'Language': {'properties': {'name': {'title': 'Name', 'type': 'string'}, 'website': {'title': 'Website', 'type': 'string'}, 'ranking': {'title': 'Ranking', 'type': 'integer'}}, 'required': ['name', 'website', 'ranking'], 'title': 'Language', 'type': 'object'}}, 'properties': {'languages': {'items': {'$ref': '#/$defs/Language'}, 'title': 'Languages', 'type': 'array'}}, 'required': ['languages'], 'title': 'LanguageRankings', 'type': 'object'}
 >> JavaScript is the n°1 language (https://www.javascript.com/)
@@ -143,6 +149,7 @@ The following code samples provide a simple example on how to specify a JSON sch
 > **Curl**
 >>
 >> Input query:
+>>
 >> ```sh
 >> curl -X POST "https://llama-3-3-70b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1/chat/completions" \
 >> -H 'accept: application/json'\
@@ -194,6 +201,7 @@ The following code samples provide a simple example on how to specify a JSON sch
 >> ```
 >>
 >> Output response:
+>>
 >> ```sh
 >> {"id":"chatcmpl-9276e3e305e04c73bd05224abcb7532b","object":"chat.completion","created":1750772047,"model":"Meta-Llama-3_3-70B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"{\"languages\": [\n {\"name\": \"JavaScript\", \"ranking\": 1, \"website\": \"https://www.javascript.com/\"},\n {\"name\": \"Python\", \"ranking\": 2, \"website\": \"https://www.python.org/\"},\n {\"name\": \"Java\", \"ranking\": 3, \"website\": \"https://www.java.com/\"}\n]}"},"finish_reason":"stop","logprobs":null}],"usage":{"prompt_tokens":65,"completion_tokens":80,"total_tokens":145}}
 >> ```
@@ -283,6 +291,7 @@ The following code samples provide a simple example on how to specify a JSON sch
 >> ```
 >>
 >> Output:
+>>
 >> ```sh
 >> {"languages": [
 >> {"name": "JavaScript", "ranking": 1, "website": "https://www.javascript.com/"},
@@ -295,11 +304,11 @@ The following code samples provide a simple example on how to specify a JSON sch
 >> ```
 >>
 >> This example shows us how to use the JSON schema response format with Javascript.
+>>
 
 ### JSON object
 
-The following code samples provide a simple example on how to use the legacy JSON object mode, using the `response_format` parameter.
-Note that when using the JSON object mode, we cannot explicitly specify the schema of the output.
+The following code samples provide a simple example on how to use the legacy JSON object mode, using the `response_format` parameter. Note that when using the JSON object mode, we cannot explicitly specify the schema of the output.
 
 > [!tabs]
 > **Python**
@@ -338,6 +347,7 @@ Note that when using the JSON object mode, we cannot explicitly specify the sche
 >> ```
 >>
 >> Output:
+>>
 >> ```sh
 >> {
 >> "rank": [
@@ -363,6 +373,7 @@ Note that when using the JSON object mode, we cannot explicitly specify the sche
 > **Curl**
 >>
 >> Input query:
+>>
 >> ```sh
 >> curl -X POST "https://llama-3-3-70b-instruct.endpoints.kepler.ai.cloud.ovh.net/api/openai_compat/v1/chat/completions" \
 >> -H 'accept: application/json' \
@@ -382,6 +393,7 @@ Note that when using the JSON object mode, we cannot explicitly specify the sche
 >> ```
 >>
 >> Output:
+>>
 >> ```sh
 >> {"id":"chatcmpl-dfdbf074ab864199bac48ec929179fed","object":"chat.completion","created":1750773314,"model":"Meta-Llama-3_3-70B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"{\"rank\": [\n {\"position\": 1, \"language\": \"JavaScript\", \"popularity\": \"94.5%\"},\n {\"position\": 2, \"language\": \"HTML/CSS\", \"popularity\": \"93.2%\"},\n {\"position\": 3, \"language\": \"Python\", \"popularity\": \"87.3%\"}\n]}"},"finish_reason":"stop","logprobs":null}],"usage":{"prompt_tokens":65,"completion_tokens":77,"total_tokens":142}}%
 >> ```
@@ -430,6 +442,7 @@ Note that when using the JSON object mode, we cannot explicitly specify the sche
 >> ```
 >>
 >> Output:
+>>
 >> ```sh
 >> {
 >> rank: [
@@ -439,16 +452,18 @@ Note that when using the JSON object mode, we cannot explicitly specify the sche
 >> ]
 >> }
 >> ```
+>>
 
 ### Tips and best practices
 
 This section contains additional tips that may improve the performance of Structured Output queries.
 
 #### Streaming
 
-All kinds of response_format are compatible with streaming. To enable streaming, simply use `"streaming": true` in your request's body and process the stream accordingly.
+All kinds of `response_format` are compatible with streaming. To enable streaming, simply use `"stream": true` in your request's body and process the stream accordingly.
 
 Example with python:
+
 ```python
 from pydantic import BaseModel
 import openai
@@ -505,6 +520,7 @@ for language in language_rankings.languages:
 ```
 
 Streamed output response:
+
 ```sh
 {"languages": [
 {"name": "JavaScript", "ranking": 1, "website": "https://www.javascript.com/"},
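The streaming hunks above are truncated by the diff; the general pattern they illustrate (accumulate each chunk's content delta, then parse the completed JSON) can be sketched offline as follows. The chunk strings are simulated stand-ins for what the API would stream back, shaped after the streamed output shown above:

```python
import json

# Simulated content deltas standing in for what an OpenAI-compatible API
# streams back when stream mode is enabled (shaped after the guide's
# streamed output); a real client would read them from the response stream.
chunks = [
    '{"languages": [',
    '{"name": "JavaScript", "ranking": 1, "website": "https://www.javascript.com/"}',
    "]}",
]

# Accumulate the deltas as they arrive, then parse once the stream ends.
buffer = "".join(chunks)
language_rankings = json.loads(buffer)

for language in language_rankings["languages"]:
    print(f'{language["name"]} is ranked n°{language["ranking"]}')
```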
@@ -522,22 +538,21 @@ Some considerations about the JSON schema definition:
 
 - Structured output currently supports a subset of the [JSON schema specification](https://json-schema.org/specification). Some features may not be compatible.
 - The models will generate the output following alphabetical order of the JSON schema keys. It may be useful to rename your fields to enforce a specific order during generation.
-- To avoid divergence, we recommend setting [additional properties](https://json-schema.org/understanding-json-schema/reference/object#additionalproperties) to `false` and explicity setting the [required fields](https://json-schema.org/learn/getting-started-step-by-step#define-required-properties)
+- To avoid divergence, we recommend setting [additional properties](https://json-schema.org/understanding-json-schema/reference/object#additionalproperties) to `false` and explicitly setting the [required fields](https://json-schema.org/learn/getting-started-step-by-step#define-required-properties).
 
 Don't hesitate to experiment with different variations of your JSON schemas to reach the best performance!
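The `additionalProperties`/`required` recommendation above can be made concrete with a small hedged sketch; the field names reuse the guide's `Language` example:

```python
# Sketch of the schema hardening recommended above: extra keys rejected via
# additionalProperties, and every field explicitly listed as required.
language_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "website": {"type": "string"},
        "ranking": {"type": "integer"},
    },
    "required": ["name", "website", "ranking"],
    "additionalProperties": False,
}

print(language_schema["additionalProperties"])  # False
```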
 
 #### Prompting & additional parameters
 
 Some additional considerations regarding prompts and model parameters:
 
-- Even though the response_format can be used to enable structured outputs, models can generally perform better when asked to produce json outputs within the prompt (`messages` field).
+- Even though the `response_format` can be used to enable structured outputs, models can generally perform better when asked to produce JSON outputs within the prompt (`messages` field).
 - Most models tend to perform better when using lower temperature for structured outputs.
-- Some model providers may recommend specific system prompts and parameters to use for structured outputs and function calling. Don't hesitate to visit the model pages to dive deeper into model specifics ([example for Llama 3.3 on HuggingFace](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct)).
+- Some model providers may recommend specific system prompts and parameters to use for structured outputs and function calling. Don't hesitate to visit the model pages to dive deeper into model specifics ([an example for Llama 3.3 on HuggingFace](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct)).
 
 ## Conclusion
 
-In this guide, we have explained how to use Structured Output with the [AI Endpoints](https://endpoints.ai.cloud.ovh.net/) models.
-We have provided a comprehensive overview of the feature which can help you perfect your integration of LLM for your own application.
+In this guide, we have explained how to use Structured Output with the [AI Endpoints](https://endpoints.ai.cloud.ovh.net/) models. We have provided a comprehensive overview of the feature, which can help you perfect your integration of LLMs in your own applications.
 
 ## Go further
 
@@ -551,5 +566,4 @@ If you need training or technical assistance to implement our solutions, contact
 
 Please send us your questions, feedback and suggestions to improve the service:
 
-- On the OVHcloud [Discord server](https://discord.gg/ovhcloud)
-
+- On the OVHcloud [Discord server](https://discord.gg/ovhcloud).
