Commit e98d522
Update fine-tuning-overview.md
1 parent 29b4bb4 commit e98d522

File tree

1 file changed
+5 -4 lines changed

articles/ai-studio/concepts/fine-tuning-overview.md

Lines changed: 5 additions & 4 deletions
@@ -35,9 +35,11 @@ Fine-tuning is a great way to get higher quality results while reducing latency.
 
 ### Why do you want to fine-tune a model?
 
-Fine-tuning can be useful if you have a specific use case for a pre-trained LLM. For example, if you have a generic pre-trained model but you would like to use the model for more specific topics. Before you begin Finetuning a model you can consider if you've identified shortcomings when using a base model. These shortcomings can include: an inconsistent performance on edge cases, inability to fit enough shot prompts in the context window to steer the model, or high latency.
+Before you begin fine-tuning a model, consider whether you've identified shortcomings when using a base model. These shortcomings can include inconsistent performance on edge cases, an inability to fit enough prompts in the context window to steer the model, or high latency.
 
-Before you begin fine-tuning a model you can consider if you've identified shortcomings when using a base model. These shortcomings can include: an inconsistent performance on edge cases, inability to fit enough prompts in the context window to steer the model, or high latency. Base models are already pre-trained on vast amounts of data, but most times you will add instructions and examples to the prompt to get the quality responses that you're looking for. This process of "few-shot learning" can be improved with fine-tuning. Fine-tuning allows you to train a model with many more examples. You can tailor your examples to meet your specific use-case. This can help you reduce the number of tokens in the prompt leading to potential cost savings and requests with lower latency.
+Base models are already pre-trained on vast amounts of data, but you often need to add instructions and examples to the prompt to get the quality of responses that you're looking for. This process of "few-shot learning" can be improved with fine-tuning.
+
+Fine-tuning allows you to train a model with many more examples, tailored to your specific use case. This can reduce the number of tokens in the prompt, leading to potential cost savings and lower-latency requests.
 
 Use cases for fine-tuning a model can be:
 - Steering the model to output content in a specific and customized style, tone, or format.
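The token-saving argument in this hunk ("few-shot learning" examples moving out of the prompt and into training) can be sketched concretely. A minimal, hypothetical illustration — the ticket-classification task is invented, and whitespace word counts are only a crude stand-in for real tokenizer counts:

```python
# Sketch: few-shot prompting packs examples into every request, while
# fine-tuning moves those examples into training data, so each request
# to a fine-tuned model sends only the new input.

FEW_SHOT_EXAMPLES = [
    ("Order #123 never arrived.", "shipping_issue"),
    ("I was charged twice this month.", "billing_issue"),
    ("How do I reset my password?", "account_help"),
]

def few_shot_prompt(query: str) -> str:
    """Prompt for a base model: instruction plus in-context examples."""
    lines = ["Classify the support ticket into a category."]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {text}\nCategory: {label}")
    lines.append(f"Ticket: {query}\nCategory:")
    return "\n".join(lines)

def fine_tuned_prompt(query: str) -> str:
    """Prompt for a fine-tuned model: the examples were learned in training."""
    return f"Ticket: {query}\nCategory:"

query = "My invoice shows the wrong amount."
# Whitespace split as a crude proxy for tokens; real tokenizers differ.
few = len(few_shot_prompt(query).split())
tuned = len(fine_tuned_prompt(query).split())
print(few, tuned)  # the few-shot prompt is several times longer
```

With more or longer examples, the gap widens further, which is where the cost and latency savings come from.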
@@ -76,9 +78,8 @@ Another important point is that even with high-quality data, if your data isn't
 You might be ready for fine-tuning if:
 
 - You identified a dataset for fine-tuning.
-- Your dataset is in the appropriate format for training on your existing LLM.
+- Your dataset is in the appropriate format for training on your existing model.
 - You employed some level of curation to ensure dataset quality.
-- Your training data is in the same format that you want your LLM to output.
 
 ### How will you measure the quality of your fine-tuned model?
 
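This hunk's checklist asks for a dataset "in the appropriate format for training". As a hedged sketch only — the `messages`/`role`/`content` schema follows a common chat fine-tuning convention, and your service's required format may differ — training data is often prepared as one JSON object per line (JSONL):

```python
import json

# Hypothetical training examples in chat format; the schema here is an
# assumption based on common chat fine-tuning conventions, not the
# required format of any specific service.
examples = [
    {"messages": [
        {"role": "system", "content": "Classify the support ticket."},
        {"role": "user", "content": "Order #123 never arrived."},
        {"role": "assistant", "content": "shipping_issue"},
    ]},
    {"messages": [
        {"role": "system", "content": "Classify the support ticket."},
        {"role": "user", "content": "I was charged twice this month."},
        {"role": "assistant", "content": "billing_issue"},
    ]},
]

# JSONL: one self-contained JSON object per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Light curation check: every example ends with an assistant turn in
# exactly the output format you want the model to produce.
with open("train.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
assert all(r["messages"][-1]["role"] == "assistant" for r in rows)
print(len(rows), "examples written")
```

A simple validation pass like this is one cheap way to apply "some level of curation" before submitting a training job.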
0 commit comments

Comments
 (0)