```bash
curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-10-21 \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1-2025-04-14",
    "training_file": "<TRAINING_FILE_ID>",
    "validation_file": "<VALIDATION_FILE_ID>",
    "seed": 105
}'
```

If you're fine-tuning a model that supports [Global Training](../includes/fine-tune-models.md), you can specify the training type by setting the `trainingType` property in the request body and using api-version `2025-04-01-preview`:
```bash
curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2025-04-01-preview \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1-2025-04-14",
    "training_file": "<TRAINING_FILE_ID>",
    "validation_file": "<VALIDATION_FILE_ID>",
    "seed": 105,
    "trainingType": "globalstandard"
}'
```
You can also pass additional optional parameters, like [hyperparameters](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2023-12-01-preview&tabs=HTTP#finetuninghyperparameters&preserve-view=true), to take greater control of the fine-tuning process. For your initial training run, we recommend using the automatic defaults rather than specifying these parameters.
The currently supported hyperparameters for Supervised Fine-Tuning are:
|**Name**|**Type**|**Description**|
|---|---|---|
|`n_epochs`| integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
|`seed`| integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but results may differ in rare cases. If a seed isn't specified, one is generated for you. |
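As a sketch, these hyperparameters are passed in a `hyperparameters` object on the create request (the exact schema is in the REST reference linked above; the `n_epochs` value here is illustrative, not a recommendation):

```bash
# Create a fine-tuning job with an explicit epoch count.
# Omit the hyperparameters object entirely to use the automatic defaults.
curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2025-04-01-preview \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1-2025-04-14",
    "training_file": "<TRAINING_FILE_ID>",
    "seed": 105,
    "hyperparameters": {
      "n_epochs": 3
    }
}'
```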
> [!NOTE]
> See the guides for [Direct Preference Optimization](../how-to/fine-tuning-direct-preference-optimization.md) and [Reinforcement Fine-Tuning](../how-to/reinforcement-fine-tuning.md) to learn more about their supported hyperparameters.
## Check the status of your customized model

After you start a fine-tuning job, it can take some time to complete. Your job might be queued behind other jobs in the system. Training your model can take minutes or hours depending on the model and dataset size. The following example uses the REST API to check the status of your fine-tuning job. The example retrieves information about your job by using the job ID returned from the previous example:
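A minimal sketch of that retrieval call (assuming the same api-version used to create the job; `<JOB_ID>` is the `id` field from the create response):

```bash
# Retrieve a single fine-tuning job by ID to check its status.
curl -X GET $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/<JOB_ID>?api-version=2024-10-21 \
  -H "api-key: $AZURE_OPENAI_API_KEY"
```

The response includes a `status` field (for example `queued`, `running`, or `succeeded`) that you can poll until the job finishes.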