articles/ai-studio/concepts/fine-tuning-overview.md
5 additions & 4 deletions
@@ -35,9 +35,11 @@ Fine-tuning is a great way to get higher quality results while reducing latency.
 ### Why do you want to fine-tune a model?
 
-Fine-tuning can be useful if you have a specific use case for a pre-trained LLM. For example, if you have a generic pre-trained model but you would like to use the model for more specific topics. Before you begin Finetuning a model you can consider if you've identified shortcomings when using a base model. These shortcomings can include: an inconsistent performance on edge cases, inability to fit enough shot prompts in the context window to steer the model, or high latency.
+Before you begin fine-tuning a model, consider if you've identified shortcomings when using a base model. These shortcomings can include: an inconsistent performance on edge cases, inability to fit enough prompts in the context window to steer the model, or high latency.
 
-Before you begin fine-tuning a model you can consider if you've identified shortcomings when using a base model. These shortcomings can include: an inconsistent performance on edge cases, inability to fit enough prompts in the context window to steer the model, or high latency. Base models are already pre-trained on vast amounts of data, but most times you will add instructions and examples to the prompt to get the quality responses that you're looking for. This process of "few-shot learning" can be improved with fine-tuning. Fine-tuning allows you to train a model with many more examples. You can tailor your examples to meet your specific use-case. This can help you reduce the number of tokens in the prompt leading to potential cost savings and requests with lower latency.
+Base models are already pre-trained on vast amounts of data, but most times you will add instructions and examples to the prompt to get the quality responses that you're looking for. This process of "few-shot learning" can be improved with fine-tuning.
+
+Fine-tuning allows you to train a model with many more examples. You can tailor your examples to meet your specific use-case. This can help you reduce the number of tokens in the prompt leading to potential cost savings and requests with lower latency.
 
 Use cases for fine-tuning a model can be:
 - Steering the model to output content in a specific and customized style, tone, or format.
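
To make the token-savings point in the added paragraph concrete, here is a minimal sketch (not part of the article's diff) contrasting a few-shot prompt with a call to a fine-tuned model. It assumes an OpenAI-compatible chat completions client; the model names are hypothetical placeholders.

```python
# Sketch: few-shot prompting vs. calling a fine-tuned model.
# Assumes the `openai` Python package (v1+) and credentials in environment variables.
from openai import OpenAI

client = OpenAI()

# Few-shot prompting: every request re-sends the instructions and examples,
# spending extra prompt tokens (and latency) on the same boilerplate each time.
few_shot_messages = [
    {"role": "system", "content": "Classify support tickets as 'billing', 'bug', or 'other'."},
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "The app crashes when I upload a photo."},
    {"role": "assistant", "content": "bug"},
    {"role": "user", "content": "Can I change my account email?"},  # the actual query
]
few_shot = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical base model name
    messages=few_shot_messages,
)

# After fine-tuning on many such examples, the desired behavior lives in the
# model weights, so the prompt can shrink to just the new query.
fine_tuned = client.chat.completions.create(
    model="ft:gpt-4o-mini:my-org::abc123",  # hypothetical fine-tuned model ID
    messages=[{"role": "user", "content": "Can I change my account email?"}],
)

print(few_shot.choices[0].message.content)
print(fine_tuned.choices[0].message.content)
```

The shorter prompt in the fine-tuned call is where the potential cost and latency savings described above come from.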
@@ -76,9 +78,8 @@ Another important point is that even with high-quality data, if your data isn't
 You might be ready for fine-tuning if:
 
 - You identified a dataset for fine-tuning.
-- Your dataset is in the appropriate format for training on your existing LLM.
+- Your dataset is in the appropriate format for training on your existing model.
 - You employed some level of curation to ensure dataset quality.
-- Your training data is in the same format that you want your LLM to output.
 
 ### How will you measure the quality of your fine-tuned model?
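
As a hedged illustration of the "appropriate format" bullet above, the sketch below (not part of the article's diff) writes chat-style training examples to a JSONL file, one JSON object per line. The exact schema your training service expects may differ; this follows the common `{"messages": [...]}` convention for chat-model fine-tuning, and the file name is an arbitrary placeholder.

```python
# Sketch: preparing a chat-style JSONL training file (schema is an assumption).
import json

system_prompt = "Classify support tickets as 'billing', 'bug', or 'other'."

examples = [
    {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "I was charged twice this month."},
            {"role": "assistant", "content": "billing"},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "The app crashes when I upload a photo."},
            {"role": "assistant", "content": "bug"},
        ]
    },
]

# One JSON object per line; the assistant turn shows the exact output format
# you want the fine-tuned model to learn to produce.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Note that the assistant turns are written in exactly the output style you want back at inference time, which is the point of the curation and formatting checks in the readiness list.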