Commit e56356b

fixed spelling error

1 parent eb2b1d1

File tree: 1 file changed (+2 −2 lines)

docs/LLM_finetuning.md

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@ Here we discuss fine-tuning Meta Llama 3 with a couple of different recipes. We
 ## 1. **Parameter Efficient Model Fine-Tuning**
-This helps make the fine-tuning process more affordable even on 1 consumer grade GPU. These methods enable us to keep the whole model frozen and to just add tiny learnable parameters/ layers into the model. In this way, we just train a very tiny portion of the parameters. The most famous method in this category is [LORA](https://arxiv.org/pdf/2106.09685.pdf), LLaMA Adapter and Prefix-tuning.
+This helps make the fine-tuning process more affordable even on 1 consumer grade GPU. These methods enable us to keep the whole model frozen and to just add tiny learnable parameters/ layers into the model. In this way, we just train a very tiny portion of the parameters. The most famous method in this category is [LORA](https://arxiv.org/pdf/2106.09685.pdf), Llama Adapter and Prefix-tuning.

 These methods will address three aspects:
@@ -14,7 +14,7 @@ These methods will address three aspects:
 - **Cost of deployment** – for each fine-tuned downstream model we need to deploy a separate model; however, when using these methods, only a small set of parameters (few MB instead of several GBs) of the pretrained model can do the job. In this case, for each task we only add these extra parameters on top of the pretrained model so pretrained models can be assumed as backbone and these parameters as heads for the model on different tasks.

-- **Catastrophic forgetting** — these methods also help with forgetting the first task that can happen in fine-tunings.
+- **Catastrophic forgetting** — these methods also help with forgetting the first task that can happen in fine-tuning.

 HF [PEFT](https://github.com/huggingface/peft) library provides an easy way of using these methods which we make use of here. Please read more [here](https://huggingface.co/blog/peft).
