
Commit 98c0284

updated readme for new finetune config

1 parent 480c4f2 · commit 98c0284

File tree: 3 files changed (+8, -3 lines)


docs/multi_gpu.md

Lines changed: 2 additions & 1 deletion

````diff
@@ -138,8 +138,9 @@ It lets us specify the training settings for everything from `model_name` to `da
 mixed_precision: bool=True
 val_batch_size: int=1
 dataset = "samsum_dataset"
-peft_method: str = "lora" # None,llama_adapter, prefix
+peft_method: str = "lora" # None, llama_adapter (Caution: llama_adapter is currently not supported with FSDP)
 use_peft: bool=False
+from_peft_checkpoint: str="" # if not empty and use_peft=True, will load the peft checkpoint and resume the fine-tuning on that checkpoint
 output_dir: str = "PATH/to/save/PEFT/model"
 freeze_layers: bool = False
 num_freeze_layers: int = 1
````
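For readers skimming the diff, the resulting config fields can be sketched as a plain Python dataclass. The field names and defaults are taken from the diff; the `TrainConfig` class name is illustrative, not necessarily the name used in the repo:

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    mixed_precision: bool = True
    val_batch_size: int = 1
    dataset: str = "samsum_dataset"
    peft_method: str = "lora"  # None, llama_adapter (llama_adapter is not supported with FSDP)
    use_peft: bool = False
    from_peft_checkpoint: str = ""  # non-empty + use_peft=True -> resume from this PEFT checkpoint
    output_dir: str = "PATH/to/save/PEFT/model"
    freeze_layers: bool = False
    num_freeze_layers: int = 1
```

With this layout, resuming a fine-tuning run is purely a config change: set `use_peft=True` and point `from_peft_checkpoint` at a saved adapter directory.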

docs/single_gpu.md

Lines changed: 3 additions & 1 deletion

````diff
@@ -94,8 +94,9 @@ It let us specify the training settings, everything from `model_name` to `datase
 mixed_precision: bool=True
 val_batch_size: int=1
 dataset = "samsum_dataset"
-peft_method: str = "lora" # None,llama_adapter, prefix
+peft_method: str = "lora" # None, llama_adapter (Caution: llama_adapter is currently not supported with FSDP)
 use_peft: bool=False
+from_peft_checkpoint: str="" # if not empty and use_peft=True, will load the peft checkpoint and resume the fine-tuning on that checkpoint
 output_dir: str = "PATH/to/save/PEFT/model"
 freeze_layers: bool = False
 num_freeze_layers: int = 1
@@ -112,6 +113,7 @@ It let us specify the training settings, everything from `model_name` to `datase
 flop_counter_start: int = 3 # The step to start profiling, default is 3, which means after 3 steps of warmup stage, the profiler will start to count flops.
 use_profiler: bool = False # Enable pytorch profiler, can not be used with flop counter at the same time.
 profiler_dir: str = "PATH/to/save/profiler/results" # will be used if using profiler
+
 ```
 
 * [Datasets config file](../src/llama_recipes/configs/datasets.py) provides the available options for datasets.
````
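The `use_profiler` comment above notes that the PyTorch profiler and the FLOP counter cannot be enabled at the same time. A minimal sketch of that validation, assuming a boolean `flop_counter` flag and a hypothetical `validate_profiling` helper (neither name is confirmed by this diff):

```python
def validate_profiling(use_profiler: bool, flop_counter: bool) -> None:
    # The PyTorch profiler and the FLOP counter are mutually exclusive:
    # reject configs that enable both before training starts.
    if use_profiler and flop_counter:
        raise ValueError("use_profiler and flop_counter cannot both be enabled")
```

Failing fast at config-load time is cheaper than discovering the conflict several warmup steps into a run.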

recipes/finetuning/README.md

Lines changed: 3 additions & 1 deletion

````diff
@@ -48,8 +48,9 @@ It lets us specify the training settings for everything from `model_name` to `da
 mixed_precision: bool=True
 val_batch_size: int=1
 dataset = "samsum_dataset"
-peft_method: str = "lora" # None,llama_adapter, prefix
+peft_method: str = "lora" # None, llama_adapter (Caution: llama_adapter is currently not supported with FSDP)
 use_peft: bool=False
+from_peft_checkpoint: str="" # if not empty and use_peft=True, will load the peft checkpoint and resume the fine-tuning on that checkpoint
 output_dir: str = "PATH/to/save/PEFT/model"
 freeze_layers: bool = False
 num_freeze_layers: int = 1
@@ -66,6 +67,7 @@ It lets us specify the training settings for everything from `model_name` to `da
 flop_counter_start: int = 3 # The step to start profiling, default is 3, which means after 3 steps of warmup stage, the profiler will start to count flops.
 use_profiler: bool = False # Enable pytorch profiler, can not be used with flop counter at the same time.
 profiler_dir: str = "PATH/to/save/profiler/results" # will be used if using profiler
+
 ```
 
 * [Datasets config file](../../src/llama_recipes/configs/datasets.py) provides the available options for datasets.
````
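Putting the two PEFT fields together, the branching a fine-tuning script might perform on startup can be sketched as follows. `peft_weights_source` is a hypothetical helper written for illustration, not code from this repo:

```python
def peft_weights_source(use_peft: bool, from_peft_checkpoint: str) -> str:
    """Decide where adapter weights come from under the new config semantics."""
    if not use_peft:
        return "none"        # full fine-tuning, no PEFT adapter at all
    if from_peft_checkpoint:
        return "checkpoint"  # resume: load adapter weights from the saved path
    return "fresh"           # start a new adapter (e.g. LoRA) from its peft config
```

Note that `from_peft_checkpoint` is ignored unless `use_peft=True`, matching the "if not empty and use_peft=True" wording in the diff.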
