
Commit d0b7a20

finetuning readme updated

1 parent 7b1a941

File tree: 1 file changed

recipes/finetuning/README.md (19 additions, 8 deletions)
@@ -1,8 +1,8 @@
 # Finetuning Llama
 
-This folder contains instructions to fine-tune Llama 2 on a
+This folder contains instructions to fine-tune Llama 2 on a
 * [single-GPU setup](./singlegpu_finetuning.md)
-* [multi-GPU setup](./multigpu_finetuning.md)
+* [multi-GPU setup](./multigpu_finetuning.md)
 
 using the canonical [finetuning script](../../src/llama_recipes/finetuning.py) in the llama-recipes package.
 
@@ -25,30 +25,35 @@ It lets us specify the training settings for everything from `model_name` to `da
 ```python
 
 model_name: str="PATH/to/LLAMA 2/7B"
-enable_fsdp: bool= False
+enable_fsdp: bool=False
 run_validation: bool=True
 batch_size_training: int=4
 gradient_accumulation_steps: int=1
+max_train_step: int=0
+max_eval_step: int=0
 num_epochs: int=3
 num_workers_dataloader: int=2
 lr: float=2e-4
 weight_decay: float=0.0
-gamma: float= 0.85
+gamma: float=0.85
 use_fp16: bool=False
 mixed_precision: bool=True
 val_batch_size: int=4
 dataset = "samsum_dataset" # alpaca_dataset, grammar_dataset
-peft_method: str = "lora" # None , llama_adapter, prefix
+peft_method: str="lora" # None, llama_adapter, prefix
 use_peft: bool=False
-output_dir: str = "./ft-output"
+output_dir: str="./ft-output"
 freeze_layers: bool = False
 num_freeze_layers: int = 1
 quantization: bool = False
 save_model: bool = False
 dist_checkpoint_root_folder: str="model_checkpoints"
 dist_checkpoint_folder: str="fine-tuned"
 save_optimizer: bool=False
-
+flop_counter: bool=False # Enable the FLOP counter to measure model throughput; cannot be used together with the PyTorch profiler.
+flop_counter_startpoint: int=3 # The step at which FLOP counting starts; the default of 3 allows a 3-step warmup before counting begins.
+use_profiler: bool=False # Enable the PyTorch profiler; cannot be used together with the FLOP counter.
+profiler_dir: str="PATH/to/save/profiler/results" # Used only when the profiler is enabled.
 ```
 
 * [Datasets config file](../../src/llama_recipes/configs/datasets.py) provides the available options for datasets.
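
These config fields can be overridden from the command line when launching the finetuning script, in the same way as the W&B example further below. A minimal launch sketch, assuming the CLI flags mirror the config field names above (the model path is a placeholder):

```bash
# Minimal sketch: override training_config fields from the CLI.
# Assumes flags mirror the config field names; paths are placeholders.
python -m llama_recipes.finetuning \
    --model_name PATH/to/LLAMA_2/7B \
    --output_dir ./ft-output \
    --batch_size_training 4 \
    --num_epochs 3 \
    --lr 2e-4
```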
@@ -84,7 +89,13 @@ You can enable [W&B](https://wandb.ai/) experiment tracking by using `use_wandb`
 ```bash
 python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --model_name /path_of_model_folder/7B --output_dir Path/to/save/PEFT/model --use_wandb
 ```
-You'll be able to access a dedicated project or run link on [wandb.ai](https://wandb.ai) and see a dashboard like the one below.
+You'll be able to access a dedicated project or run link on [wandb.ai](https://wandb.ai) and see a dashboard like the one below.
 <div style="display: flex;">
 <img src="../../docs/images/wandb_screenshot.png" alt="wandb screenshot" width="500" />
 </div>
+
+## FLOP Counting and PyTorch Profiling
+
+To help with benchmarking efforts, we are adding support for counting FLOPs during the fine-tuning process. You can enable this by setting `--flop_counter` when launching your single- or multi-GPU fine-tuning run. Use `--flop_counter_startpoint` to choose the step at which FLOP counting starts; it is recommended to allow a warmup stage before the counter kicks in.
+
+Similarly, you can set the `--use_profiler` flag and pass a profiling output path via `--profiler_dir` to capture profile traces of your model with the [PyTorch profiler](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html). This is helpful for debugging. However, `--flop_counter` and `--use_profiler` cannot be used at the same time, to ensure measurement accuracy.
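
To make the two modes concrete, a minimal launch sketch using the flags described above (the model path is a placeholder, and the two commands must be run separately since the options are mutually exclusive):

```bash
# FLOP-counting run: begin counting after a 5-step warmup.
python -m llama_recipes.finetuning \
    --model_name PATH/to/LLAMA_2/7B \
    --flop_counter \
    --flop_counter_startpoint 5

# Profiling run: write PyTorch profiler traces to ./profiler_results.
python -m llama_recipes.finetuning \
    --model_name PATH/to/LLAMA_2/7B \
    --use_profiler \
    --profiler_dir ./profiler_results
```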
