articles/ai-foundry/openai/includes/fine-tune-models.md (+2 -2)
@@ -18,7 +18,7 @@ ms.custom:
 > The supported regions for fine-tuning might vary if you use Azure OpenAI models in an Azure AI Foundry project versus outside a project.
 >
-| Model ID | Standard training regions | Global training (preview) | Max request (tokens) | Training data (up to) | Modality |
+| Model ID | Standard training regions | Global training | Max request (tokens) | Training data (up to) | Modality |
 | --- | --- | :---: | :---: | :---: | --- |
 |`gpt-35-turbo` <br> (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | - | Input: 16,385 <br> Output: 4,096 | Sep 2021 | Text to text |
 |`gpt-35-turbo` <br> (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | - | 16,385 | Sep 2021 | Text to text |
@@ -30,7 +30,7 @@ ms.custom:
 |`o4-mini` <br> (2025-04-16) | East US2 <br> Sweden Central | - | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 65,536 | May 2024 | Text to text |
 
 > [!NOTE]
-> Global training (in preview) provides [more affordable](https://aka.ms/aoai-pricing) training per token, but doesn't offer [data residency](https://aka.ms/data-residency). It's currently available to Azure OpenAI resources in the following regions, with more regions coming soon:
+> Global training provides [more affordable](https://aka.ms/aoai-pricing) training per token, but doesn't offer [data residency](https://aka.ms/data-residency). It's currently available to Azure OpenAI resources in the following regions:
articles/ai-foundry/openai/includes/fine-tuning-python.md (+34 -9)
@@ -139,14 +139,13 @@ After you upload your training and validation files, you're ready to start the fine-tuning job.
 
 The following Python code shows an example of how to create a new fine-tune job with the Python SDK:
 
-In this example we are also passing the seed parameter. The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you.
-
 ```python
 response = client.fine_tuning.jobs.create(
     training_file=training_file_id,
     validation_file=validation_file_id,
-    model="gpt-4.1-2025-04-14", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
-    seed=105 # The seed controls reproducibility of the fine-tuning job. If no seed is specified, one will be generated automatically.
+    model="gpt-4.1-2025-04-14", # Enter base model name.
+    suffix="my-model", # Custom suffix for naming the resulting model. Note that in Azure OpenAI the custom model name cannot contain dot/period characters.
+    seed=105, # The seed controls reproducibility of the fine-tuning job. If no seed is specified, one will be generated automatically.
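Pieced together, the updated call reads roughly as follows. This is a sketch: the hunk ends at the seed parameter, so the closing lines are assumed to match the global-training variant shown next.

```python
# Sketch of the assembled call after this change. The closing lines are
# assumed; the diff hunk cuts off after the seed parameter.
response = client.fine_tuning.jobs.create(
    training_file=training_file_id,
    validation_file=validation_file_id,
    model="gpt-4.1-2025-04-14",
    suffix="my-model",
    seed=105,
)

job_id = response.id
```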
+If you are fine-tuning a model that supports [Global Training](../concepts/models.md#fine-tuning-models), you can specify the training type by using the `extra_body` named argument:
+
+```python
+response = client.fine_tuning.jobs.create(
+    training_file=training_file_id,
+    validation_file=validation_file_id,
+    model="gpt-4.1-2025-04-14",
+    suffix="my-model",
+    seed=105,
+    extra_body={ "trainingType": "globalstandard" }
+)
+
+job_id = response.id
+```
+
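Once the job is created, it can be polled from the same client. A minimal sketch, assuming the `job_id` captured above:

```python
# Fetch the job by ID; the returned object exposes a status field that
# moves through values such as "running" and then "succeeded" or "failed".
job = client.fine_tuning.jobs.retrieve(job_id)
print(job.status)
```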
 You can also pass additional optional parameters like hyperparameters to take greater control of the fine-tuning process. For initial training we recommend using the automatic defaults that are present without specifying these parameters.
 
-The current supported hyperparameters for fine-tuning are:
+The current supported hyperparameters for Supervised Fine-Tuning are:
 
 |**Name**|**Type**|**Description**|
 |---|---|---|
@@ -170,7 +184,7 @@ The current supported hyperparameters for fine-tuning are:
 |`n_epochs`| integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
 |`seed`| integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |
 
-To set custom hyperparameters with the 1.x version of the OpenAI Python API:
+To set custom hyperparameters with the 1.x version of the OpenAI Python API, provide them as part of the `method`:
 
 ```python
 from openai import OpenAI
@@ -182,13 +196,24 @@ client = OpenAI(
 
 client.fine_tuning.jobs.create(
     training_file="file-abc123",
-    model="gpt-4.1-2025-04-14", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
-    hyperparameters={
-        "n_epochs": 2
+    model="gpt-4.1-2025-04-14",
+    suffix="my-model",
+    seed=105,
+    method={
+        "type": "supervised", # In this case, the job uses supervised fine-tuning.
+        "supervised": {
+            "hyperparameters": {
+                "n_epochs": 2
+            }
+        }
     }
 )
 ```
 
+> [!NOTE]
+> See the guides for [Direct Preference Optimization](../how-to/fine-tuning-direct-preference-optimization.md) and [Reinforcement Fine-Tuning](../how-to/reinforcement-fine-tuning.md) to learn more about their supported hyperparameters.
articles/ai-foundry/openai/includes/fine-tuning-rest.md (+22 -4)
@@ -119,7 +119,7 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/v1/files \
 
 ## Create a customized model
 
-After you uploaded your training and validation files, you're ready to start the fine-tuning job. The following code shows an example of how to [create a new fine-tuning job](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2023-12-01-preview&tabs=HTTP&preserve-view=true) with the REST API.
+After you upload your training and validation files, you're ready to start the fine-tuning job. The following code shows an example of how to [create a new fine-tuning job](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2024-10-21&tabs=HTTP&preserve-view=true) with the REST API.
 
 In this example we are also passing the seed parameter. The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but can differ in rare cases. If a seed is not specified, one will be generated for you.
@@ -129,15 +129,30 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/v1/fine_tuning/jobs \
   -H "api-key: $AZURE_OPENAI_API_KEY" \
   -d '{
     "model": "gpt-4.1-2025-04-14",
-    "training_file": "<TRAINING_FILE_ID>",
+    "training_file": "<TRAINING_FILE_ID>",
     "validation_file": "<VALIDATION_FILE_ID>",
     "seed": 105
 }'
 ```
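The create call returns a JSON body whose `id` identifies the new job for later status checks. One way to capture it, a sketch assuming the `jq` CLI is installed:

```bash
# Hypothetical helper: create the job and keep only the returned job ID.
JOB_ID=$(curl -s -X POST $AZURE_OPENAI_ENDPOINT/openai/v1/fine_tuning/jobs \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1-2025-04-14",
    "training_file": "<TRAINING_FILE_ID>",
    "validation_file": "<VALIDATION_FILE_ID>",
    "seed": 105
  }' | jq -r '.id')
echo "$JOB_ID"
```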
 
-You can also pass additional optional parameters like [hyperparameters](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2023-12-01-preview&tabs=HTTP#finetuninghyperparameters&preserve-view=true) to take greater control of the fine-tuning process. For initial training we recommend using the automatic defaults that are present without specifying these parameters.
+If you are fine-tuning a model that supports [Global Training](../concepts/models.md#fine-tuning-models), you can specify the training type by passing `trainingType` in the request body and using api-version `2025-04-01-preview`:
+
+```bash
+curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2025-04-01-preview" \
+  -H "Content-Type: application/json" \
+  -H "api-key: $AZURE_OPENAI_API_KEY" \
+  -d '{
+    "model": "gpt-4.1-2025-04-14",
+    "training_file": "<TRAINING_FILE_ID>",
+    "validation_file": "<VALIDATION_FILE_ID>",
+    "seed": 105,
+    "trainingType": "globalstandard"
+}'
+```
+
+You can also pass additional optional parameters like [hyperparameters](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2024-10-21&tabs=HTTP#finetuninghyperparameters&preserve-view=true) to take greater control of the fine-tuning process. For initial training we recommend using the automatic defaults that are present without specifying these parameters.
 
-The current supported hyperparameters for fine-tuning are:
+The current supported hyperparameters for Supervised Fine-Tuning are:
 
 |**Name**|**Type**|**Description**|
 |---|---|---|
@@ -146,6 +161,9 @@ The current supported hyperparameters for fine-tuning are:
 |`n_epochs`| integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
 |`seed`| integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |
 
+> [!NOTE]
+> See the guides for [Direct Preference Optimization](../how-to/fine-tuning-direct-preference-optimization.md) and [Reinforcement Fine-Tuning](../how-to/reinforcement-fine-tuning.md) to learn more about their supported hyperparameters.
+
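As an illustration, a request that overrides `n_epochs` from the table above might look like the following sketch. The top-level `hyperparameters` object is an assumption based on the linked REST reference; the endpoint and headers mirror the earlier create example:

```bash
# Hypothetical request: pin n_epochs to 2 and leave the remaining
# hyperparameters at their automatic defaults.
curl -X POST $AZURE_OPENAI_ENDPOINT/openai/v1/fine_tuning/jobs \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1-2025-04-14",
    "training_file": "<TRAINING_FILE_ID>",
    "hyperparameters": {
      "n_epochs": 2
    }
}'
```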
 ## Check the status of your customized model
 
 After you start a fine-tune job, it can take some time to complete. Your job might be queued behind other jobs in the system. Training your model can take minutes or hours depending on the model and dataset size. The following example uses the REST API to check the status of your fine-tuning job. The example retrieves information about your job by using the job ID returned from the previous example:
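A status check along those lines, sketched under the assumption that it uses the same v1 endpoint and headers as the earlier calls, with `<JOB_ID>` standing in for the returned job ID:

```bash
# Retrieve a single fine-tuning job; the response carries fields such as
# "status", "fine_tuned_model", and timestamps.
curl -X GET $AZURE_OPENAI_ENDPOINT/openai/v1/fine_tuning/jobs/<JOB_ID> \
  -H "api-key: $AZURE_OPENAI_API_KEY"
```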