articles/ai-services/openai/includes/fine-tuning-studio.md (3 additions, 3 deletions)
@@ -90,7 +90,7 @@ The more training examples you have, the better. Fine-tuning jobs will not proce
In general, doubling the dataset size can lead to a linear increase in model quality. But keep in mind that low-quality examples can negatively impact performance. If you train the model on a large amount of internal data without first pruning the dataset to only the highest quality examples, you could end up with a model that performs much worse than expected.
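The pruning step described above can be sketched as a small pre-processing pass over your training file. This is a minimal illustration, not part of the original article: the `quality` field is a hypothetical score you might assign via human review or automated grading, while the `messages` layout follows the chat-format JSON Lines structure used for fine-tuning training files.

```python
import json

# Hypothetical dataset: chat-format examples carrying an illustrative
# "quality" score (the score is NOT part of the real training-file format).
examples = [
    {"quality": 0.95, "messages": [
        {"role": "user", "content": "Summarize our refund policy."},
        {"role": "assistant", "content": "Refunds are issued within 14 days of purchase."}]},
    {"quality": 0.20, "messages": [
        {"role": "user", "content": "Summarize our refund policy."},
        {"role": "assistant", "content": "idk check the website"}]},
]

# Keep only the highest-quality examples before training.
pruned = [ex for ex in examples if ex["quality"] >= 0.8]

# Write one JSON object per line (JSONL), dropping the helper score.
with open("training.jsonl", "w") as f:
    for ex in pruned:
        f.write(json.dumps({"messages": ex["messages"]}) + "\n")
```

The threshold (`0.8` here) is arbitrary; the point is to filter before uploading, not after a disappointing training run.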
-## Use the Create a fine-tuned model dialog
+## Creating a fine-tuned model
The Azure AI Foundry portal provides the **Create a fine-tuned model** dialog, so you can create and train a fine-tuned model for your Azure resource in one place.
@@ -113,7 +113,7 @@ You should now see the **Create a fine-tuned model** dialog.
The first step is to confirm your model choice and the training method. Not all models support all training methods.
- **Supervised Fine Tuning** (SFT): supported by all non-reasoning models.
--**Direct Preference Optimizatio** ([DPO](../how-to/fine-tuning-direct-preference-optimization.md): supported by GPT-4o and GPT-4.1.
+- **Direct Preference Optimization** ([DPO](../how-to/fine-tuning-direct-preference-optimization.md)): supported by GPT-4o and GPT-4.1.
- **Reinforcement Fine Tuning** (RFT): supported by reasoning models, like o4-mini.
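Unlike SFT, the DPO method listed above trains on preference pairs rather than single completions. A minimal sketch of what one JSONL training record might look like, assuming the `input` / `preferred_output` / `non_preferred_output` field names; treat the schema in the linked DPO guide as authoritative.

```python
import json

# One hypothetical preference record: the same prompt paired with a
# preferred and a non-preferred assistant response.
record = {
    "input": {"messages": [
        {"role": "user", "content": "What is the capital of France?"}]},
    "preferred_output": [
        {"role": "assistant", "content": "The capital of France is Paris."}],
    "non_preferred_output": [
        {"role": "assistant", "content": "France is a country in Europe."}],
}

# Each record becomes one line of the JSONL training file.
line = json.dumps(record)
```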
When selecting the model, you may also select a [previously fine-tuned model](#continuous-fine-tuning).
@@ -240,7 +240,7 @@ After your fine-tuned model deploys, you can use it like any other deployed mode
Once you have created a fine-tuned model, you may wish to continue refining it over time through further fine-tuning. Continuous fine-tuning is the iterative process of selecting an already fine-tuned model as a base model and fine-tuning it further on new sets of training examples.
-To perform fine-tuning on a model that you have previously fine-tuned you would use the same process as described in [create a customized model](#use-the-create-custom-model-wizard) but instead of specifying the name of a generic base model you would specify your already fine-tuned model. A custom fine-tuned model would look like `gpt-4.1-2025-04-14.ft-5fd1918ee65d4cd38a5dcf6835066ed7`
+To perform fine-tuning on a model that you have previously fine-tuned, you use the same process as described in [creating a fine-tuned model](#creating-a-fine-tuned-model), but instead of specifying the name of a generic base model you specify your already fine-tuned model. A custom fine-tuned model name looks like `gpt-4.1-2025-04-14.ft-5fd1918ee65d4cd38a5dcf6835066ed7`.
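The example name above shows the pattern: a fine-tuned model identifier embeds the base model followed by an `.ft-` job suffix, and for continuous fine-tuning the whole name is supplied as the base model. A small sketch, where the split on `.ft-` is an observation from the example name rather than a documented contract, and `"file-abc123"` is a placeholder for an uploaded training file ID:

```python
# The fine-tuned model name from the article; it embeds the base model
# ("gpt-4.1-2025-04-14") plus an ".ft-" job suffix.
fine_tuned_model = "gpt-4.1-2025-04-14.ft-5fd1918ee65d4cd38a5dcf6835066ed7"

# Split on the ".ft-" separator: base model name vs. fine-tune suffix.
base_model, _, ft_suffix = fine_tuned_model.partition(".ft-")

# For continuous fine-tuning, pass the full fine-tuned name as the model,
# not the generic base. Hypothetical request parameters for illustration.
job_request = {
    "model": fine_tuned_model,
    "training_file": "file-abc123",
}
```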
:::image type="content" source="../media/fine-tuning/studio-continuous.png" alt-text="Screenshot of the Create a custom model UI with a fine-tuned model highlighted." lightbox="../media/fine-tuning/studio-continuous.png":::