`articles/ai-services/openai/includes/fine-tuning-openai-in-ai-studio.md` (4 additions, 3 deletions)
````diff
@@ -32,13 +32,14 @@ The following models support fine-tuning:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)**<sup>*</sup>**
+- `gpt-4o` (2024-08-06)**<sup>*</sup>**
 - `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**
 
 **<sup>*</sup>** Fine-tuning for this model is currently in public preview.
 
-Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
+Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.
 
-If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview)
+Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
 
 ## Review the workflow for Azure AI Studio
@@ -253,7 +254,7 @@ When each training epoch completes a checkpoint is generated. A checkpoint is a
 
 :::image type="content" source="../media/fine-tuning/checkpoints.png" alt-text="Screenshot of checkpoints UI." lightbox="../media/fine-tuning/checkpoints.png":::
 
-## Safety evaluation GPT-4 fine-tuning - public preview
+## Safety evaluation GPT-4, GPT-4o, GPT-4o-mini fine-tuning - public preview
````
`articles/ai-services/openai/includes/fine-tuning-python.md` (4 additions, 4 deletions)
````diff
@@ -32,13 +32,12 @@ The following models support fine-tuning:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)**<sup>*</sup>**
+- `gpt-4o` (2024-08-06)**<sup>*</sup>**
 - `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**
 
 **<sup>*</sup>** Fine-tuning for this model is currently in public preview.
 
-If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview)
-
-Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.
+Or you can fine tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.
 
 :::image type="content" source="../media/fine-tuning/models.png" alt-text="Screenshot of model options with a custom fine-tuned model." lightbox="../media/fine-tuning/models.png":::
@@ -287,6 +286,7 @@ The current supported hyperparameters for fine-tuning are:
 |`batch_size`|integer | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. In general, we've found that larger batch sizes tend to work better for larger datasets. The default value as well as the maximum value for this property are specific to a base model. A larger batch size means that model parameters are updated less frequently, but with lower variance. |
 |`learning_rate_multiplier`| number | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value. Larger learning rates tend to perform better with larger batch sizes. We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. A smaller learning rate can be useful to avoid overfitting. |
 |`n_epochs`| integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
+|`seed`| integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |
 
 To set custom hyperparameters with the 1.x version of the OpenAI Python API:
@@ -374,7 +374,7 @@ This command isn't available in the 0.28.1 OpenAI Python library. Upgrade to the
 
 ---
 
-## Safety evaluation GPT-4 fine-tuning - public preview
+## Safety evaluation GPT-4, GPT-4o, GPT-4o-mini fine-tuning - public preview
````
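As a rough sketch of how the hyperparameter table above maps onto the 1.x Python client call the hunk leads into: the three table rows travel in a nested `hyperparameters` dict, while `seed` is passed at the top level of `client.fine_tuning.jobs.create(...)`. The model name, file ID, and all values below are illustrative placeholders, not part of this commit.

```python
import json


def fine_tuning_job_kwargs(model, training_file, n_epochs=2, batch_size=1,
                           learning_rate_multiplier=0.1, seed=None):
    """Assemble keyword arguments for client.fine_tuning.jobs.create(...).

    Defaults here are illustrative only; the table above suggests keeping
    learning_rate_multiplier roughly in the 0.02 to 0.2 range.
    """
    kwargs = {
        "model": model,                  # a base model, or "base-model.ft-{jobid}"
        "training_file": training_file,  # ID of a previously uploaded JSONL file
        "hyperparameters": {
            "n_epochs": n_epochs,
            "batch_size": batch_size,
            "learning_rate_multiplier": learning_rate_multiplier,
        },
    }
    if seed is not None:
        kwargs["seed"] = seed  # omitted -> the service generates one for you
    return kwargs


# With the 1.x client this would be: client.fine_tuning.jobs.create(**kwargs)
kwargs = fine_tuning_job_kwargs("gpt-4o-mini-2024-07-18", "file-abc123", seed=105)
print(json.dumps(kwargs, indent=2))
```

Keeping the dict construction separate from the client call makes the job configuration easy to log or reuse; the same shape works for fine-tuning a previously fine-tuned model by swapping in its `base-model.ft-{jobid}` name.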
`articles/ai-services/openai/includes/fine-tuning-rest.md` (8 additions, 2 deletions)
````diff
@@ -31,13 +31,16 @@ The following models support fine-tuning:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)**<sup>*</sup>**
+- `gpt-4o` (2024-08-06)**<sup>*</sup>**
 - `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**
 
 **<sup>*</sup>** Fine-tuning for this model is currently in public preview.
 
+Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.
+
 Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
 
-If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview).
+
 
 ## Review the workflow for the REST API
@@ -153,6 +156,8 @@ You can create a custom model from one of the following available base models:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)
+- `gpt-4o` (2024-08-06)
+- `gpt-4o-mini` (2024-07-18)
 
 Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.
@@ -216,6 +221,7 @@ The current supported hyperparameters for fine-tuning are:
 |`batch_size`|integer | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. In general, we've found that larger batch sizes tend to work better for larger datasets. The default value as well as the maximum value for this property are specific to a base model. A larger batch size means that model parameters are updated less frequently, but with lower variance. |
 |`learning_rate_multiplier`| number | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value. Larger learning rates tend to perform better with larger batch sizes. We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. A smaller learning rate can be useful to avoid overfitting. |
 |`n_epochs`| integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
+|`seed`| integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |
 
 ## Check the status of your customized model
@@ -248,7 +254,7 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}
   -H "api-key: $AZURE_OPENAI_API_KEY"
 ```
 
-## Safety evaluation GPT-4 fine-tuning - public preview
+## Safety evaluation GPT-4, GPT-4o, GPT-4o-mini fine-tuning - public preview
````
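In the REST flow, the same fields shown in the hyperparameter table travel in the JSON body of a `POST {endpoint}/openai/fine_tuning/jobs` request. A minimal stdlib sketch that assembles, but does not send, such a request; the `api-version` value, file ID, and hyperparameter values are assumptions for illustration, not part of this commit:

```python
import json
import os
import urllib.request

# Placeholder endpoint/key; real values come from your Azure OpenAI resource.
endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT", "https://example-resource.openai.azure.com")
api_key = os.environ.get("AZURE_OPENAI_API_KEY", "placeholder-key")

payload = {
    "model": "gpt-4o-mini-2024-07-18",  # or "base-model.ft-{jobid}" to refine a fine-tune
    "training_file": "file-abc123",     # placeholder ID of an uploaded JSONL file
    "hyperparameters": {
        "n_epochs": 2,
        "batch_size": 1,
        "learning_rate_multiplier": 0.1,
    },
    "seed": 105,                        # optional; generated for you if omitted
}

req = urllib.request.Request(
    f"{endpoint}/openai/fine_tuning/jobs?api-version=2024-05-01-preview",  # assumed version
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "api-key": api_key},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually submit the job
print(req.full_url)
```

This mirrors the curl pattern in the diff (`$AZURE_OPENAI_ENDPOINT`, `api-key` header); separating the payload from the transport makes it easy to reuse the same body with curl or the Python client.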
`articles/ai-services/openai/includes/fine-tuning-studio.md` (5 additions, 2 deletions)
````diff
@@ -30,13 +30,16 @@ The following models support fine-tuning:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)**<sup>*</sup>**
+- `gpt-4o` (2024-08-06)**<sup>*</sup>**
 - `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**
 
 **<sup>*</sup>** Fine-tuning for this model is currently in public preview.
 
+Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.
+
+
 Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
 
-If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview)
 
 ## Review the workflow for Azure OpenAI Studio
@@ -322,7 +325,7 @@ Here are some of the tasks you can do on the **Models** pane:
 When each training epoch completes a checkpoint is generated. A checkpoint is a fully functional version of a model which can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they can provide a snapshot of your model prior to overfitting having occurred. When a fine-tuning job completes you will have the three most recent versions of the model available to deploy.
 
 
-## Safety evaluation GPT-4 fine-tuning - public preview
+## Safety evaluation GPT-4, GPT-4o, and GPT-4o-mini fine-tuning - public preview
````