
Commit a2f8153

Update fine-tuning-overview.md
1 parent c2fce46 commit a2f8153

File tree

1 file changed: +1 −8 lines changed

1 file changed

+1
-8
lines changed

articles/ai-foundry/concepts/fine-tuning-overview.md

Lines changed: 1 addition & 8 deletions
```diff
@@ -41,7 +41,7 @@ To fine-tune a model for chat or question answering, your training dataset shoul
 - **Human-generated responses**: Use responses written by humans to teach the model how to generate natural and accurate replies.
 - **Formatting**: Use a clear structure to separate prompts and responses. For example, `\n\n###\n\n` and ensure the delimiter doesn't appear in the content.
 
-## Model selection
+### Model selection
 
 Selecting the right model for fine-tuning is a critical decision that impacts performance, efficiency, and cost. Before making a choice, it is essential to clearly define the task and establish the desired performance metrics. A well-defined task ensures that the selected model aligns with specific requirements, optimizing effort and resources.
 
```
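The formatting guidance in the hunk above (separating prompts and responses with `\n\n###\n\n` while keeping the delimiter out of the content) can be sketched in a few lines of Python. This is an illustrative helper, not code from the article; the function name `format_example` is hypothetical.

```python
# Minimal sketch (not from the article): join a prompt and a human-written
# response with the `\n\n###\n\n` delimiter, and refuse content that already
# contains the delimiter, since that would break parsing of the examples.
DELIMITER = "\n\n###\n\n"

def format_example(prompt: str, response: str) -> str:
    """Return one training example with a clear prompt/response separator."""
    for text in (prompt, response):
        if DELIMITER in text:
            raise ValueError("Delimiter must not appear in the content.")
    return f"{prompt}{DELIMITER}{response}"
```

The same check is worth running over an entire dataset before training, since a single example containing the delimiter can silently corrupt how pairs are split.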
```diff
@@ -55,12 +55,6 @@ Model training can be guided by metrics. For example, BLEU-4 was used to evaluat
 
 **Use intermediate checkpoints for better model selection**. Save checkpoints at regular intervals (e.g., every few epochs) and evaluate their performance. In some cases, an intermediate checkpoint may outperform the final model, allowing you to select the best version rather than relying solely on the last trained iteration.
 
-## Deployment and monitoring
-
-- Choose a suitable deployment infrastructure, such as cloud-based platforms or on-premises servers.
-- Continuously monitor the model's performance and make necessary adjustments to ensure optimal performance.
-- Consider regional deployment needs and latency requirements to meet enterprise SLAs. Implement security guardrails, such as private links, encryption, and access controls, to protect sensitive data and maintain compliance with organizational policies.
-
 ## Supported models for fine-tuning
 
 Now that you know when to use fine-tuning for your use case, you can go to Azure AI Foundry to find models available to fine-tune.
```
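The checkpoint-selection advice in the hunk above reduces to a simple comparison: evaluate each saved checkpoint on a held-out metric and keep the best one, which is not necessarily the last. A minimal sketch, assuming you already have per-checkpoint scores (the function name and the dict shape are hypothetical, not from the article):

```python
# Hypothetical sketch: pick the intermediate checkpoint with the highest
# held-out evaluation score, rather than defaulting to the final iteration.
def select_best_checkpoint(scores: dict[str, float]) -> str:
    """Return the checkpoint name whose evaluation score is highest."""
    return max(scores, key=scores.get)
```

In practice the scores would come from running your evaluation metric (e.g., BLEU-4, as the article mentions) against each checkpoint saved during training.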
```diff
@@ -72,7 +66,6 @@ For more information on fine-tuning using a managed compute (preview), see [Fine
 
 For details about Azure OpenAI in Azure AI Foundry Models that are available for fine-tuning, see the [Azure OpenAI in Foundry Models documentation.](../../ai-services/openai/concepts/models.md#fine-tuning-models)
 
-
 ## Best practices for fine-tuning
 
 Here are some best practices that can help improve the efficiency and effectiveness of fine-tuning LLMs for various applications:
```
