articles/ai-services/openai/how-to/fine-tuning.md
+3 -3 lines changed (3 additions, 3 deletions)
@@ -137,17 +137,17 @@ Training datasets must be in `jsonl` format:
### Direct preference optimization model support
-`gpt-4o-2024-08-06` supports direct preference optimization in its respective fine-tuning regions. Latest of region availability is updated here in[models page](../concepts/models.md#fine-tuning-models)
+`gpt-4o-2024-08-06` supports direct preference optimization in its respective fine-tuning regions. Latest region availability is updated in the [models page](../concepts/models.md#fine-tuning-models).
Users can use preference fine tuning with base models as well as models that have already been fine-tuned using supervised fine-tuning as long as they are of a supported model/version.
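The hunk header above notes that training datasets must be in `jsonl` format. As a minimal sketch only, one preference-format record could be produced like this; the `input` / `preferred_output` / `non_preferred_output` field names and the file name are assumptions for illustration and should be checked against the preference format section the article links to:

```python
import json

# Hypothetical preference-format record; the field names below are assumptions,
# verify them against the article's dataset-format section before use.
record = {
    "input": {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"},
        ]
    },
    "preferred_output": [
        {"role": "assistant", "content": "The capital of France is Paris."}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "France is a country in western Europe."}
    ],
}

# Each record occupies exactly one line of the .jsonl training file.
with open("dpo_training.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```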
### How to use direct preference optimization fine-tuning?
1. Prepare `jsonl` datasets in the [preference format](#direct-preference-optimization-dataset-format).
-2. Select the model and then select the method of customization **Direct Preference Optimization**
+2. Select the model and then select the method of customization **Direct Preference Optimization**.
3. Upload datasets – training and validation. Preview as needed.
4. Select hyperparameters, defaults are recommended for initial experimentation.
-5. Review the selections and create fine tuning job.
+5. Review the selections and create a fine tuning job.
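For readers who follow the numbered steps above programmatically rather than in a portal UI, a minimal sketch of the final step might look like the following. It assumes the `openai` Python SDK's `fine_tuning.jobs.create` call with a `method` block selecting DPO; the endpoint, key, api-version, file IDs, and `beta` value are placeholders, not values confirmed by this change:

```python
from openai import AzureOpenAI

# Placeholder credentials; replace with your resource's endpoint and key,
# and use an api-version that supports DPO fine-tuning.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_key="YOUR-API-KEY",
    api_version="2025-02-01-preview",
)

# training_file / validation_file are file IDs returned by the upload step.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",
    training_file="file-xxxxxxxx",
    validation_file="file-yyyyyyyy",
    # The "method" block requests Direct Preference Optimization; hyperparameters
    # are left at defaults apart from an illustrative beta value.
    method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
)

print(job.id, job.status)
```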