articles/ai-services/openai/includes/fine-tuning-studio.md
12 lines changed: 12 additions & 0 deletions
@@ -36,6 +36,7 @@ Take a moment to review the fine-tuning workflow for using Azure AI Foundry port
1. [Choose your training data](#choose-your-training-data).
1. Optionally, [choose your validation data](#choose-your-validation-data-optional).
1. Optionally, [configure task parameters](#configure-training-parameters-optional) for your fine-tuning job.
+ 1. Optionally, [enable auto-deployment](#enable-auto-deployment-optional) for the resulting custom model.
1. [Review your choices and train your new custom model](#review-your-choices-and-train-your-model).
1. Check the status of your custom fine-tuned model.
1. Deploy your custom model for use.
@@ -179,6 +180,8 @@ You may provide an optional **seed** and tune additional hyperparameters.
The **seed** controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but results may differ in rare cases. If a seed isn't specified, one will be randomly generated for you.

+ :::image type="content" source="../media/fine-tuning/studio-create-hyperparams.png" alt-text="Close-up screenshot of the parameters section of the Create custom model wizard in Azure AI Foundry portal.":::
+
The following hyperparameters are available for tuning via the Azure AI Foundry portal:

|**Name**|**Type**|**Description**|
@@ -187,6 +190,15 @@ The following hyperparameters are available for tuning via the Azure AI Foundry
|**Learning Rate Multiplier**| number | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value. Larger learning rates tend to perform better with larger batch sizes. We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. A smaller learning rate may be useful to avoid overfitting. |
|**Number of Epochs**| integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
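If you prefer to set the same options programmatically instead of through the portal, the sketch below shows one way to do it. It's a minimal example assuming the OpenAI Python SDK (v1.x) with an `AzureOpenAI` client; the endpoint, API version, file IDs, and base model name are illustrative placeholders rather than values from this article.

```python
# Minimal sketch (assumption: OpenAI Python SDK v1.x against an Azure OpenAI resource).
# The API version, file IDs, and model name below are illustrative placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # assumed; use a version that supports fine-tuning
)

# Create a fine-tuning job with an explicit seed and the hyperparameters
# described in the table above. Any value you omit is chosen automatically.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",      # placeholder base model name
    training_file="file-xxxxxxxx",       # ID of an uploaded training file
    validation_file="file-yyyyyyyy",     # optional validation file ID
    seed=42,                             # controls reproducibility, as noted above
    hyperparameters={
        "batch_size": 8,
        "learning_rate_multiplier": 0.1,  # try values in the 0.02 to 0.2 range
        "n_epochs": 3,
    },
)
print(job.id, job.status)
```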
+ ### Enable auto deployment (optional)
+
+ To save time, you can optionally enable auto-deployment for your resulting model. If training completes successfully, the model will be deployed using the selected [deployment type](../how-to/deployment-types.md). The deployment will be named based on the unique name generated for your custom model and the optional **suffix** you may have provided [earlier](#make-your-model-identifiable-optional).
+
+ :::image type="content" source="../media/fine-tuning/studio-create-auto-deploy.png" alt-text="Screenshot of the auto-deployment option in the Create custom model wizard in Azure AI Foundry portal.":::
+
+ > [!NOTE]
+ > Only Global Standard and Developer deployments are currently supported for auto-deployment. Neither of these options provides [data residency](https://aka.ms/data-residency). Consult the [deployment type](../how-to/deployment-types.md) documentation for more details.
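If you don't enable auto-deployment, you deploy the resulting custom model yourself after training. The following is a minimal sketch of what that manual step can look like, assuming the `azure-identity` and `azure-mgmt-cognitiveservices` Python packages; the subscription, resource group, resource, deployment, and model names are placeholders, not values from this article.

```python
# Minimal sketch (assumption: azure-mgmt-cognitiveservices SDK; all names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import (
    Deployment, DeploymentModel, DeploymentProperties, Sku,
)

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# Deploy the fine-tuned model with a Global Standard SKU, one of the
# deployment types that auto-deployment also supports.
poller = client.deployments.begin_create_or_update(
    resource_group_name="<resource-group>",
    account_name="<azure-openai-resource>",
    deployment_name="my-custom-model-suffix",  # mirrors the generated name plus suffix
    deployment=Deployment(
        sku=Sku(name="GlobalStandard", capacity=50),
        properties=DeploymentProperties(
            model=DeploymentModel(
                format="OpenAI",
                name="gpt-4o-mini-2024-07-18.ft-xxxxxxxx",  # placeholder fine-tuned model ID
                version="1",
            ),
        ),
    ),
)
print(poller.result().properties.provisioning_state)
```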
### Review your choices and train your model

Review your choices and select **Submit** to start training your new fine-tuned model.