Commit eaa3ff8

Update headings and text in documentation
1 parent 8cd3aea commit eaa3ff8

File tree

1 file changed (+2, -2 lines)


articles/ai-foundry/how-to/fine-tune-serverless.md

Lines changed: 2 additions & 2 deletions
@@ -145,7 +145,7 @@ After you select and upload the training dataset, select **Next** to continue.
The next step provides options to configure the model to use validation data in the training process. If you don't want to use validation data, you can choose **Next** to continue to the advanced options for the model. Otherwise, if you have a validation dataset, you can either choose existing prepared validation data or upload new prepared validation data to use when customizing your model.

The **Validation data** pane displays any existing, previously uploaded training and validation datasets and provides options by which you can upload new validation data.

- ### Automatic Split of Training Data
+ ### Split training data

You can automatically divide your training data to generate a validation dataset.

After you select Automatic split of training data, select **Next** to continue.
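The automatic split option handles the train/validation division for you; purely as an illustration of what such a split amounts to, the sketch below holds out a fraction of a prepared JSONL training file as validation data before upload. The file names, the 80/20 ratio, and the helper functions are assumptions for the example, not part of the service.

```python
# Illustrative sketch only: the service performs the automatic split for you.
# This shows one way to hold out a validation set yourself from a JSONL
# training file before upload.
import json
import random


def split_jsonl(path: str, val_fraction: float = 0.2, seed: int = 42):
    """Shuffle the examples in a JSONL file and split them into train/validation lists."""
    with open(path, encoding="utf-8") as f:
        examples = [json.loads(line) for line in f if line.strip()]
    random.Random(seed).shuffle(examples)
    n_val = int(len(examples) * val_fraction)
    return examples[n_val:], examples[:n_val]


def write_jsonl(path: str, rows):
    """Write a list of JSON-serializable records back out as JSONL."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")


# Hypothetical file names for the example.
train_rows, val_rows = split_jsonl("training_data.jsonl")
write_jsonl("train_split.jsonl", train_rows)
write_jsonl("validation_split.jsonl", val_rows)
```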

@@ -297,7 +297,7 @@ For more information on how to track costs, see [Monitor costs for models offere

:::image type="content" source="../media/deploy-monitor/serverless/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offers and their associated meters." lightbox="../media/deploy-monitor/serverless/costs-model-as-service-cost-details.png":::

- ## Sample Notebook
+ ## Sample notebook

You can use this [sample notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/finetuning/standalone/model-as-a-service/chat-completion/chat_completion_with_model_as_service.ipynb) to create a standalone fine-tuning job to enhance a model's ability to summarize dialogues between two people using the Samsum dataset. The training data utilized is the ultrachat_200k dataset, which is divided into four splits suitable for supervised fine-tuning (sft) and generation ranking (gen). The notebook employs the available Azure AI models for the chat-completion task (If you would like to use a different model than what's used in the notebook, you can replace the model name). The notebook includes setting up prerequisites, selecting a model to fine-tune, creating training and validation datasets, configuring and submitting the fine-tuning job, and finally, creating a serverless deployment using the fine-tuned model for sample inference.
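As a rough orientation to the notebook's opening steps, the sketch below uses the `azure-ai-ml` Python SDK to connect to a project and register prepared training and validation files as data assets. The subscription, resource group, project, file, and asset names are placeholders; the fine-tuning job configuration, submission, and deployment steps follow the linked notebook rather than this sketch.

```python
# Minimal sketch of the early steps in the sample notebook (placeholders throughout):
# connect to the workspace/project and register training and validation data assets.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# Placeholder identifiers: substitute your own subscription, resource group, and project.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<PROJECT_NAME>",
)

# Register prepared JSONL files as data assets for the fine-tuning job
# (hypothetical file and asset names).
train_data = ml_client.data.create_or_update(
    Data(name="chat-train", version="1", type=AssetTypes.URI_FILE, path="train_split.jsonl")
)
validation_data = ml_client.data.create_or_update(
    Data(name="chat-validation", version="1", type=AssetTypes.URI_FILE, path="validation_split.jsonl")
)

# Configuring and submitting the fine-tuning job, and deploying the fine-tuned
# model to a serverless endpoint, follow the steps shown in the linked notebook.
```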
