articles/ai-foundry/how-to/fine-tune-serverless.md
5 additions & 5 deletions
@@ -67,9 +67,9 @@ You can also go to the Azure AI Foundry portal to view all models that contain f

## Prepare data for fine-tuning

Prepare your training and validation data to fine-tune your model. Your training and validation data consist of input and output examples that show how you would like the model to perform.

Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset: include various scenarios and periodically refine your training data to align with real-world expectations. These actions ultimately lead to more accurate and balanced model responses.

> [!TIP]
> Different model types require a different format of training data.
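As a concrete illustration, the snippet below writes a small JSONL training file of input and output examples. The chat-completion `messages` layout shown here is a common convention for chat fine-tuning data, not a schema taken from this article; check the expected format for your specific model type.

```python
import json

# Illustrative chat-completion training examples (hypothetical content):
# each record pairs an input conversation with the desired assistant output.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 equals 4."},
    ]},
]

# Write one JSON object per line (JSONL), the layout commonly expected
# for chat-completion fine-tuning data files.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

A validation file would follow the same layout, using held-out examples that never appear in the training file.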
@@ -180,7 +180,7 @@ The **Fine-tune model** wizard shows the parameters for training your fine-tuned
|`n_epochs`| integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |

Select **Default** to use the default values for the fine-tuning job, or select **Custom** to display and edit the hyperparameter values. When defaults are selected, we determine the correct value algorithmically based on your training data.
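To make the `n_epochs` hyperparameter concrete, here is a minimal, framework-free sketch of how epochs relate to training steps. This is a conceptual illustration, not the service's actual training loop; the function name and batch size are hypothetical.

```python
def count_training_steps(dataset_size: int, batch_size: int, n_epochs: int) -> int:
    """One epoch is one full pass over the training dataset."""
    # Number of batches needed to cover the dataset once (round up).
    batches_per_epoch = -(-dataset_size // batch_size)
    return batches_per_epoch * n_epochs

# Example: 1,000 training examples, batches of 32, 3 epochs.
steps = count_training_steps(dataset_size=1000, batch_size=32, n_epochs=3)
print(steps)  # 32 batches per epoch x 3 epochs = 96 steps
```

Raising `n_epochs` means the model sees every training example more times, which can improve fit but also increases the risk of overfitting a small dataset.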

After you configure the advanced options, select **Next** to [review your choices and train your fine-tuned model](#review-your-choices-and-train-your-model).

### Review your choices and train your model
@@ -201,7 +201,7 @@ Here are some of the tasks you can do on the **Models** tab:

- Check the status of the fine-tuning job for your custom model in the **Status** column of the **Customized models** tab.
- In the **Model name** column, select the model's name to view more information about the custom model. You can see the status of the fine-tuning job, training results, training events, and hyperparameters used in the job.
- Select **Refresh** to update the information on the page.

---
@@ -334,7 +334,7 @@ outputs:

```yml
  type: mlflow_model
```

The training data used is the same as demonstrated in the SDK notebook. The CLI employs the available Azure AI models for the chat-completion task. If you prefer to use a different model than the one in the CLI sample, you can update the arguments, such as the model path, accordingly.