
Commit 3eb73ee (parent: 8753771)

Correct that DPO is only for gpt-4o currently.

1 file changed (+1 −1)

articles/ai-services/openai/includes/fine-tuning-studio.md (1 addition & 1 deletion)
```diff
@@ -113,7 +113,7 @@ You should now see the **Create a fine-tuned model** dialog.
 The first step is to confirm you model choice and the training method. Not all models support all training methods.
 
 - **Supervised Fine Tuning** (SFT): supported by all non-reasoning models.
-- **Direct Preference Optimization (Preview)** ([DPO](../how-to/fine-tuning-direct-preference-optimization.md)): supported by GPT-4o and GPT-4.1.
+- **Direct Preference Optimization (Preview)** ([DPO](../how-to/fine-tuning-direct-preference-optimization.md)): supported by GPT-4o.
 - **Reinforcement Fine Tuning (Preview)** (RFT): supported by reasoning models, like o4-mini.
 
 When selecting the model, you may also select a [previously fine-tuned model](#continuous-fine-tuning).
```
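For context on what the corrected support statement means in practice, here is a minimal sketch of submitting a DPO fine-tuning job programmatically. It is not taken from the changed article: it assumes the `openai` Python SDK's `fine_tuning.jobs.create` call with its `method` parameter, and the endpoint, API version, file ID, and model snapshot are all placeholders.

```python
# Minimal sketch (assumptions, not values from this commit): starting a DPO
# fine-tuning job against Azure OpenAI with the `openai` Python SDK.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2025-02-01-preview",  # assumed preview API version
)

# DPO trains on preference pairs: each JSONL record holds an input plus a
# preferred and a non-preferred completion (see the linked DPO how-to).
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",       # per this commit, DPO currently supports GPT-4o only
    training_file="file-abc123",     # placeholder ID of an uploaded preference dataset
    method={
        "type": "dpo",
        "dpo": {"hyperparameters": {"beta": 0.1}},  # beta: weight of the preference signal
    },
)

print(job.id, job.status)
```

The `beta` hyperparameter shown above controls how strongly the preference pairs pull the model away from its reference behavior; treat the value here as illustrative only.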
