Commit ac1f49f (parent f131f6a)

Update fine-tuning-rest.md

adding 4o, seed, updating safety

1 file changed: 8 additions, 2 deletions

articles/ai-services/openai/includes/fine-tuning-rest.md
```diff
@@ -31,13 +31,16 @@ The following models support fine-tuning:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)**<sup>*</sup>**
+- `gpt-4o` (2024-08-06)**<sup>*</sup>**
 - `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**

 **<sup>*</sup>** Fine-tuning for this model is currently in public preview.

+Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.
+
 Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.

-If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview).
+If you plan to use `gpt-4`, `gpt-4o`, or `gpt-4o-mini` for fine-tuning, please refer to the [safety evaluation guidance](#safety-evaluation-gpt-4-4o-4o-mini-fine-tuning---public-preview).

 ## Review the workflow for the REST API
```

```diff
@@ -153,6 +156,8 @@ You can create a custom model from one of the following available base models:
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
 - `gpt-4` (0613)
+- `gpt-4o` (2024-08-06)
+- `gpt-4o-mini` (2024-07-18)

 Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.
```
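For context on how a base model from this list is consumed, here is a minimal sketch of a job-creation request in the same curl style the article uses. The api-version, training file ID, and job ID are illustrative placeholders, not values from this commit:

```bash
# Minimal sketch: create a fine-tuning job from one of the base models above.
# "file-abc123" is a hypothetical ID for an already-uploaded training file,
# and the api-version is an assumed placeholder.
curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-05-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini-2024-07-18",
    "training_file": "file-abc123"
  }'

# To continue from a previously fine-tuned model instead, pass its
# base-model.ft-{jobid} name, for example "gpt-35-turbo-0125.ft-abc123"
# (the job ID is hypothetical).
```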

```diff
@@ -216,6 +221,7 @@ The current supported hyperparameters for fine-tuning are:
 |`batch_size` |integer | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. In general, we've found that larger batch sizes tend to work better for larger datasets. The default value as well as the maximum value for this property are specific to a base model. A larger batch size means that model parameters are updated less frequently, but with lower variance. |
 | `learning_rate_multiplier` | number | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value. Larger learning rates tend to perform better with larger batch sizes. We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. A smaller learning rate can be useful to avoid overfitting. |
 |`n_epochs` | integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
+|`seed` | integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |

 ## Check the status of your customized model
```
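As a sketch of where these parameters sit in the request body, assuming the same job-creation endpoint as above (all values are illustrative, not recommendations):

```bash
# n_epochs, batch_size, and learning_rate_multiplier go in the nested
# "hyperparameters" object; seed is shown top-level, following the
# OpenAI-style job schema. Its exact placement is an assumption here.
curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-05-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-35-turbo-0125",
    "training_file": "file-abc123",
    "seed": 105,
    "hyperparameters": {
      "n_epochs": 3,
      "batch_size": 1,
      "learning_rate_multiplier": 0.1
    }
  }'
```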

````diff
@@ -248,7 +254,7 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}
 -H "api-key: $AZURE_OPENAI_API_KEY"
 ```

-## Safety evaluation GPT-4 fine-tuning - public preview
+## Safety evaluation GPT-4, 4o, 4o-mini fine-tuning - public preview

 [!INCLUDE [Safety evaluation](../includes/safety-evaluation.md)]
````
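The hunk above shows the tail of a POST against the job URL (the cancel request). For completeness, a hedged sketch of the matching status check, a GET on the same URL shape with an assumed api-version:

```bash
# Poll the status of a fine-tuning job; replace {fine_tuning_job_id} with a
# real job ID. The api-version is an assumed placeholder.
curl -X GET "$AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}?api-version=2024-05-01-preview" \
  -H "api-key: $AZURE_OPENAI_API_KEY"
```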