`articles/ai-services/openai/includes/fine-tuning-rest.md` (8 additions, 2 deletions)
````diff
@@ -31,13 +31,16 @@ The following models support fine-tuning:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)**<sup>*</sup>**
+- `gpt-4o` (2024-08-06)**<sup>*</sup>**
 - `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**
 
 **<sup>*</sup>** Fine-tuning for this model is currently in public preview.
 
+Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.
+
 Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
 
-If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview).
+If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4, 4o, 4o-mini public preview safety evaluation guidance](#safety-evaluation-gpt-4-4o-4o-mini-fine-tuning---public-preview).
 
 ## Review the workflow for the REST API
 
````
````diff
@@ -153,6 +156,8 @@ You can create a custom model from one of the following available base models:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)
+- `gpt-4o` (2024-08-06)
+- `gpt-4o-mini` (2024-07-18)
 
 Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.
 
````
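Stepping outside the diff for a moment: the base-model list above feeds directly into the create-job call. A minimal sketch of that request, assuming the `2024-05-01-preview` API version and placeholder file IDs (both are assumptions; check the REST API reference for current values):

```bash
# Sketch: create a fine-tuning job from one of the base models listed above.
# Assumptions: api-version 2024-05-01-preview; <TRAINING_FILE_ID> and
# <VALIDATION_FILE_ID> are placeholders for IDs returned by the file-upload step.
curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-05-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini-2024-07-18",
    "training_file": "<TRAINING_FILE_ID>",
    "validation_file": "<VALIDATION_FILE_ID>"
  }'
```

A previously fine-tuned model would be passed in `model` using the `base-model.ft-{jobid}` format noted in the hunk above.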
````diff
@@ -216,6 +221,7 @@ The current supported hyperparameters for fine-tuning are:
 |`batch_size`|integer | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. In general, we've found that larger batch sizes tend to work better for larger datasets. The default value as well as the maximum value for this property are specific to a base model. A larger batch size means that model parameters are updated less frequently, but with lower variance. |
 |`learning_rate_multiplier`| number | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value. Larger learning rates tend to perform better with larger batch sizes. We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. A smaller learning rate can be useful to avoid overfitting. |
 |`n_epochs`| integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
+|`seed`| integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |
 
 ## Check the status of your customized model
 
````
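For readers wondering where the new `seed` row lands in practice: it travels in the same create-job request body as the other hyperparameters. A hedged sketch, again assuming the `2024-05-01-preview` API version, with all field values below purely illustrative:

```bash
# Sketch: create-job request carrying hyperparameters and the new seed field.
# Assumption: "seed" sits at the top level of the request body, alongside
# "model" and "training_file", while the tuning knobs nest under "hyperparameters".
curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-05-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-35-turbo-0125",
    "training_file": "<TRAINING_FILE_ID>",
    "seed": 105,
    "hyperparameters": {
      "n_epochs": 3,
      "batch_size": 16,
      "learning_rate_multiplier": 0.1
    }
  }'
```

Omitting any of these fields leaves the service to pick model-specific defaults, per the table above.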
````diff
@@ -248,7 +254,7 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}
   -H "api-key: $AZURE_OPENAI_API_KEY"
 ```
 
-## Safety evaluation GPT-4 fine-tuning - public preview
+## Safety evaluation GPT-4, 4o, 4o-mini fine-tuning - public preview
````
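The hunk above shows only the tail of the cancel call. For orientation, a sketch of the status-check and cancel requests together, assuming the `2024-05-01-preview` API version and a hypothetical `<JOB_ID>` placeholder:

```bash
# Sketch: poll a fine-tuning job's status, then cancel it if needed.
# <JOB_ID> is a placeholder for the job ID returned when the job was created.
curl -X GET "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/<JOB_ID>?api-version=2024-05-01-preview" \
  -H "api-key: $AZURE_OPENAI_API_KEY"

curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/<JOB_ID>/cancel?api-version=2024-05-01-preview" \
  -H "api-key: $AZURE_OPENAI_API_KEY"
```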