Commit 1c1c63e

Add global training example to REST API docs.
1 parent c2857d5 commit 1c1c63e

File tree

1 file changed (+20 −2 lines changed)


articles/ai-foundry/openai/includes/fine-tuning-rest.md

Lines changed: 20 additions & 2 deletions
@@ -129,15 +129,30 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-10-
   -H "api-key: $AZURE_OPENAI_API_KEY" \
   -d '{
     "model": "gpt-4.1-2025-04-14",
-    "training_file": "<TRAINING_FILE_ID>",
+    "training_file": "<TRAINING_FILE_ID>",
     "validation_file": "<VALIDATION_FILE_ID>",
     "seed": 105
 }'
 ```
 
+If you're fine-tuning a model that supports [Global Training](../includes/fine-tune-models.md), you can specify the training type by adding the `trainingType` property to the request body and using api-version `2025-04-01-preview`:
+
+```bash
+curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2025-04-01-preview \
+  -H "Content-Type: application/json" \
+  -H "api-key: $AZURE_OPENAI_API_KEY" \
+  -d '{
+    "model": "gpt-4.1-2025-04-14",
+    "training_file": "<TRAINING_FILE_ID>",
+    "validation_file": "<VALIDATION_FILE_ID>",
+    "seed": 105,
+    "trainingType": "globalstandard"
+}'
+```
+
 You can also pass additional optional parameters like [hyperparameters](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2023-12-01-preview&tabs=HTTP#finetuninghyperparameters&preserve-view=true) to take greater control of the fine-tuning process. For initial training we recommend using the automatic defaults that are present without specifying these parameters.
 
-The current supported hyperparameters for fine-tuning are:
+The currently supported hyperparameters for Supervised Fine-Tuning are:
 
 |**Name**| **Type**| **Description**|
 |---|---|---|
@@ -146,6 +161,9 @@ The current supported hyperparameters for fine-tuning are:
 |`n_epochs` | integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
 |`seed` | integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |
 
+> [!NOTE]
+> See the guides for [Direct Preference Optimization](../how-to/fine-tuning-direct-preference-optimization.md) and [Reinforcement Fine-Tuning](../how-to/reinforcement-fine-tuning.md) to learn more about their supported hyperparameters.
+
 ## Check the status of your customized model
 
 After you start a fine-tune job, it can take some time to complete. Your job might be queued behind other jobs in the system. Training your model can take minutes or hours depending on the model and dataset size. The following example uses the REST API to check the status of your fine-tuning job. The example retrieves information about your job by using the job ID returned from the previous example:
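
The status request itself sits outside the lines shown in this diff; as a rough sketch, and assuming the same fine-tuning jobs endpoint with a `<JOB_ID>` placeholder for the ID returned by the create call, it would resemble:

```bash
# Sketch: check the status of a fine-tuning job.
# <JOB_ID> is the "id" returned when the job was created; the api-version
# here is an assumption and should match the one used to create the job.
curl -X GET "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/<JOB_ID>?api-version=2024-10-21" \
  -H "api-key: $AZURE_OPENAI_API_KEY"
```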

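To illustrate the optional hyperparameters mentioned earlier in the changed text, a create request that sets `n_epochs` explicitly might look like the following sketch; the `hyperparameters` object follows the linked REST reference, while the api-version and the value shown are illustrative assumptions:

```bash
# Sketch: create a fine-tuning job with an explicit hyperparameters object.
# The api-version and the n_epochs value are illustrative assumptions;
# omit hyperparameters entirely to accept the automatic defaults.
curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-10-21 \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1-2025-04-14",
    "training_file": "<TRAINING_FILE_ID>",
    "validation_file": "<VALIDATION_FILE_ID>",
    "seed": 105,
    "hyperparameters": {
        "n_epochs": 3
    }
}'
```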