articles/ai-foundry/concepts/fine-tuning-overview.md

To fine-tune a model for chat or question answering, your training dataset should follow these guidelines:

- **Human-generated responses**: Use responses written by humans to teach the model how to generate natural and accurate replies.
- **Formatting**: Use a clear structure to separate prompts and responses. For example, use a delimiter such as `\n\n###\n\n`, and make sure the delimiter doesn't appear elsewhere in the content.
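
As an illustration, the delimiter convention above can be enforced with a small helper. This is a sketch; the function name and the choice of delimiter check are illustrative, not part of any Azure AI Foundry API:

```python
# Minimal sketch: join a prompt and response with a delimiter, and
# reject examples in which the delimiter already appears in the content.
DELIMITER = "\n\n###\n\n"  # example delimiter from the guidance above

def format_example(prompt: str, response: str) -> str:
    """Return a single training string: prompt, delimiter, response."""
    if DELIMITER in prompt or DELIMITER in response:
        raise ValueError("Delimiter must not appear in the content.")
    return f"{prompt}{DELIMITER}{response}"

example = format_example(
    "What is fine-tuning?",
    "Fine-tuning adapts a pretrained model to a specific task using labeled examples.",
)
```

Validating every example this way before training avoids ambiguous prompt/response boundaries in the dataset.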

## Model selection

Selecting the right model for fine-tuning is a critical decision that impacts performance, efficiency, and cost. Before making a choice, it is essential to clearly define the task and establish the desired performance metrics. A well-defined task ensures that the selected model aligns with specific requirements, optimizing effort and resources.

Model training can be guided by metrics. For example, BLEU-4 has been used to evaluate the quality of generated text against reference outputs.

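
As a rough illustration of metric-guided evaluation, here is a simplified sentence-level BLEU-4: the geometric mean of 1- to 4-gram precisions against a single reference, times a brevity penalty. This sketch omits smoothing and multiple references; real evaluations typically use a library implementation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(candidate: str, reference: str) -> float:
    """Simplified BLEU-4 for one candidate against one reference."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        c_counts, r_counts = Counter(ngrams(cand, n)), Counter(ngrams(ref, n))
        overlap = sum(min(c, r_counts[g]) for g, c in c_counts.items())
        total = max(len(cand) - n + 1, 0)
        if total == 0 or overlap == 0:
            return 0.0  # no smoothing in this sketch
        precisions.append(overlap / total)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

A score of 1.0 means an exact match; scores drop quickly as n-gram overlap with the reference decreases.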
**Use intermediate checkpoints for better model selection**. Save checkpoints at regular intervals (e.g., every few epochs) and evaluate their performance. In some cases, an intermediate checkpoint may outperform the final model, allowing you to select the best version rather than relying solely on the last trained iteration.
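
The checkpoint-selection idea above can be sketched as follows; `select_best_checkpoint` and its arguments are illustrative, not a specific Azure AI Foundry API:

```python
# Sketch: pick the checkpoint with the best validation score rather
# than defaulting to the final trained iteration.
def select_best_checkpoint(checkpoints, evaluate):
    """Return (label, score) for the highest-scoring checkpoint.

    `checkpoints` maps a label (e.g. an epoch tag) to a model artifact;
    `evaluate` scores an artifact on a held-out validation set.
    """
    scores = {label: evaluate(model) for label, model in checkpoints.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy usage: canned scores stand in for a real validation metric.
checkpoints = {"epoch2": "ckpt2", "epoch4": "ckpt4", "epoch6": "ckpt6"}
fake_scores = {"ckpt2": 0.71, "ckpt4": 0.78, "ckpt6": 0.74}  # final isn't best
best, score = select_best_checkpoint(checkpoints, fake_scores.get)
```

Here the intermediate checkpoint outperforms the last one, which is exactly the case regular checkpointing is meant to catch.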

## Deployment and monitoring

- Choose a suitable deployment infrastructure, such as cloud-based platforms or on-premises servers.
- Continuously monitor the model's performance and make adjustments as needed.
- Consider regional deployment needs and latency requirements to meet enterprise SLAs. Implement security guardrails, such as private links, encryption, and access controls, to protect sensitive data and maintain compliance with organizational policies.

## Supported models for fine-tuning
Now that you know when to use fine-tuning for your use case, you can go to Azure AI Foundry to find models available to fine-tune.

For more information on fine-tuning using a managed compute (preview), see the Azure AI Foundry documentation on fine-tuning models with managed compute.

For details about Azure OpenAI in Azure AI Foundry Models that are available for fine-tuning, see the [Azure OpenAI in Foundry Models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models).
## Best practices for fine-tuning
Here are some best practices that can help improve the efficiency and effectiveness of fine-tuning LLMs for various applications: