recipes/finetuning/README.md (1 addition, 1 deletion)
@@ -70,7 +70,7 @@ It lets us specify the training settings for everything from `model_name` to `da
  * [Datasets config file](../../src/llama_recipes/configs/datasets.py) provides the available options for datasets.
- * [peft config file](../../src/llama_recipes/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified.
+ * [peft config file](../../src/llama_recipes/configs/peft.py) provides the supported PEFT methods and respective settings that can be modified. We currently support LoRA and LLaMA-Adapter. Please note that LoRA is the only technique which is supported in combination with FSDP.
  * [FSDP config file](../../src/llama_recipes/configs/fsdp.py) provides FSDP settings such as:
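For illustration, a PEFT config entry in `peft.py` might resemble the following minimal sketch. The field names and defaults below are assumptions for this example (loosely modeled on common LoRA hyperparameters), not the actual contents of the llama-recipes config file:

```python
from dataclasses import dataclass

# Hypothetical sketch of a LoRA config entry; the real fields live in
# src/llama_recipes/configs/peft.py and may differ.
@dataclass
class lora_config:
    r: int = 8                                   # rank of the low-rank update matrices
    lora_alpha: int = 32                         # scaling factor applied to the update
    target_modules: tuple = ("q_proj", "v_proj") # which projection layers get adapters
    lora_dropout: float = 0.05                   # dropout applied inside the adapter

# Defaults can be overridden per run, e.g. a higher-rank adapter:
cfg = lora_config(r=16)
```

Keeping each PEFT method in its own dataclass like this lets the training entry point override any field from the command line without touching the config file itself.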
- assert train_config.peft_method in names, f"Peft config not found: {train_config.peft_method}"
+ assert (
+     train_config.peft_method in names
+ ), f"Peft config not found: {train_config.peft_method}"
+
+ assert (
+     train_config.peft_method != "prefix"
+ ), "PrefixTuning is currently not supported (see https://github.com/meta-llama/llama-recipes/issues/359#issuecomment-2089350811)"
+ if train_config.enable_fsdp:
+     assert (
+         train_config.peft_method != "llama_adapter"
+     ), "Llama_adapter is currently not supported in combination with FSDP (see https://github.com/meta-llama/llama-recipes/issues/359#issuecomment-2089274425)"
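The guard logic added above can be exercised in isolation. The sketch below is a self-contained mock: `names` stands in for the registry of available PEFT configs, and a `SimpleNamespace` replaces the real llama-recipes train config object, which is assumed here for illustration only:

```python
from types import SimpleNamespace

# Hypothetical registry of available PEFT config names; in llama-recipes this
# is derived from the dataclasses in configs/peft.py.
names = ["lora", "llama_adapter", "prefix"]

def check_peft_config(train_config):
    # Reject unknown methods outright.
    assert (
        train_config.peft_method in names
    ), f"Peft config not found: {train_config.peft_method}"
    # Prefix tuning is rejected unconditionally.
    assert (
        train_config.peft_method != "prefix"
    ), "PrefixTuning is currently not supported"
    # Llama-Adapter is rejected only when FSDP is enabled.
    if train_config.enable_fsdp:
        assert (
            train_config.peft_method != "llama_adapter"
        ), "Llama_adapter is currently not supported in combination with FSDP"

# LoRA passes even with FSDP enabled, matching the README note above.
check_peft_config(SimpleNamespace(peft_method="lora", enable_fsdp=True))
```

Note the structure: the unconditional checks run first, so a misspelled method name fails with the "config not found" message rather than a misleading FSDP-related one.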