
Commit 03f1ca7

fixed some typo to pass spellcheck
1 parent d0b7a20 commit 03f1ca7

3 files changed: +11, -11 lines

recipes/finetuning/README.md

Lines changed: 6 additions & 6 deletions
@@ -50,9 +50,9 @@ save_model: bool = False
 dist_checkpoint_root_folder: str="model_checkpoints"
 dist_checkpoint_folder: str="fine-tuned"
 save_optimizer: bool=False
-flop_counter: bool=False # Enable Flop counter to measure model throughput, can not be used with pytorch profiler at the same time.
-flop_counter_startpoint: int=3 # The step to start profiling, default is 3, which means after 3 steps of warmup stage, the profiler will start to count flops.
-use_profiler: bool=False # Enable pytorch profiler, can not be used with flop counter at the same time.
+flop_counter: bool=False # Enable FLOPS counter to measure model throughput, can not be used with pytorch profiler at the same time.
+flop_counter_start: int=3 # The step to start profiling, default is 3, which means after 3 steps of warm-up stage, the profiler will start to count FLOPS.
+use_profiler: bool=False # Enable pytorch profiler, can not be used with FLOPS counter at the same time.
 profiler_dir: str="PATH/to/save/profiler/results" # will be used if using profiler
 ```

@@ -94,8 +94,8 @@ You'll be able to access a dedicated project or run link on [wandb.ai](https://w
 <img src="../../docs/images/wandb_screenshot.png" alt="wandb screenshot" width="500" />
 </div>

-## FLop Counting and Pytorch Profiling
+## FLOPS Counting and Pytorch Profiling

-To help with benchmarking effort, we are adding the support for counting the flops during the fine-tuning process. You can achieve this by setting `--flop_counter` when launching your single/multi GPU fine-tuning. Use `--flop_counter_startpoint` to choose which step to count the flops. It is recommended to allow a warmup stage before using the flop counter.
+To help with benchmarking effort, we are adding the support for counting the FLOPS during the fine-tuning process. You can achieve this by setting `--flop_counter` when launching your single/multi GPU fine-tuning. Use `--flop_counter_start` to choose which step to count the FLOPS. It is recommended to allow a warm-up stage before using the FLOPS counter.

-Similarly, you can set `--use_profiler` flag and pass a profiling output path using `--profiler_dir` to capture the profile traces of your model using [PyTorch profiler](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html). This would be helpful for debugging purposes. However, the `--flop_counter` and `--use_profiler` can not be used in the same time to ensure the measurement accuarcy.
+Similarly, you can set `--use_profiler` flag and pass a profiling output path using `--profiler_dir` to capture the profile traces of your model using [PyTorch profiler](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html). This would be helpful for debugging purposes. However, the `--flop_counter` and `--use_profiler` can not be used in the same time to ensure the measurement accuracy.
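The README paragraphs above describe the two measurement modes and state that they are mutually exclusive, but they do not show the wiring. Below is a minimal, illustrative sketch of how flags with these semantics could be parsed and kept mutually exclusive; the argparse setup and the `parse_args` helper are assumptions made for this example, not the repo's actual launcher.

```python
# Illustrative only: a minimal argparse sketch of the flags described in the README.
# The parser and parse_args() are hypothetical; only the flag semantics come from the docs.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="fine-tuning launcher (sketch)")
    parser.add_argument("--flop_counter", action="store_true",
                        help="count FLOPS to measure model throughput")
    parser.add_argument("--flop_counter_start", type=int, default=3,
                        help="step at which FLOPS counting starts, after a warm-up stage")
    parser.add_argument("--use_profiler", action="store_true",
                        help="capture PyTorch profiler traces")
    parser.add_argument("--profiler_dir", type=str, default="PATH/to/save/profiler/results",
                        help="output directory for profiler traces")
    args = parser.parse_args()
    # The two measurement modes cannot run together, per the README.
    if args.flop_counter and args.use_profiler:
        parser.error("--flop_counter and --use_profiler cannot be used at the same time")
    return args
```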

src/llama_recipes/configs/training.py

Lines changed: 2 additions & 2 deletions
@@ -42,7 +42,7 @@ class train_config:
 use_fast_kernels: bool = False # Enable using SDPA from PyTroch Accelerated Transformers, make use Flash Attention and Xformer memory-efficient kernels
 use_wandb: bool = False # Enable wandb for experient tracking
 save_metrics: bool = False # saves training metrics to a json file for later plotting
-flop_counter: bool = False # Enable Flop counter to measure model throughput, can not be used with pytorch profiler at the same time.
-flop_counter_startpoint: int = 3 # The step to start profiling, default is 3, which means after 3 steps of warmup stage, the profiler will start to count flops.
+flop_counter: bool = False # Enable flop counter to measure model throughput, can not be used with pytorch profiler at the same time.
+flop_counter_start: int = 3 # The step to start profiling, default is 3, which means after 3 steps of warmup stage, the profiler will start to count flops.
 use_profiler: bool = False # Enable pytorch profiler, can not be used with flop counter at the same time.
 profiler_dir: str = "PATH/to/save/profiler/results" # will be used if using profiler

src/llama_recipes/utils/train_utils.py

Lines changed: 3 additions & 3 deletions
@@ -59,8 +59,8 @@ def throughput_measure_context(cfg, local_rank=None):
 ) as torch_profiler:
 yield torch_profiler
 elif use_flop_counter:
-if cfg.max_train_step > 0 and cfg.max_train_step < cfg.flop_counter_startpoint:
-raise ValueError(f"flop counter requires at least {cfg.flop_counter_startpoint} train steps, please increase the max_train_step, current max_train_step {cfg.max_train_step}")
+if cfg.max_train_step > 0 and cfg.max_train_step < cfg.flop_counter_start:
+raise ValueError(f"flop counter requires at least {cfg.flop_counter_start} train steps, please increase the max_train_step, current max_train_step {cfg.max_train_step}")
 with FlopMeasure(rank=local_rank) as flop_counter:
 yield flop_counter
 else:
@@ -136,7 +136,7 @@ def train(model, train_dataloader,eval_dataloader, tokenizer, optimizer, lr_sche
 if not train_config.enable_fsdp or local_rank==0:
 print("max training steps reached, stopping training, total train steps finished: ", total_train_steps-1)
 break
-if train_config.flop_counter and total_train_steps == train_config.flop_counter_startpoint:
+if train_config.flop_counter and total_train_steps == train_config.flop_counter_start:
 print("start flop counting at the step: ", total_train_steps)
 measure_context.start_counting()
 for key in batch.keys():
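The two hunks above are two halves of one pattern: `throughput_measure_context` validates the start step and yields a counter, and the training loop arms it with `start_counting()` once `total_train_steps` reaches `flop_counter_start`. The self-contained sketch below re-creates that delayed-start pattern; `FakeFlopMeasure` and the toy loop are stand-ins for the repo's `FlopMeasure` and `train()`, not the actual implementation.

```python
# Simplified sketch of the delayed-start measurement pattern used above.
from contextlib import contextmanager

class FakeFlopMeasure:
    """Stand-in for the repo's FlopMeasure; only mimics the start_counting() hook."""
    def __init__(self, rank=0):
        self.rank = rank
        self.counting = False
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False
    def start_counting(self):
        # Armed by the training loop once the warm-up steps have passed.
        self.counting = True

@contextmanager
def throughput_measure_context(cfg, local_rank=None):
    # Mirrors the validation shown in the diff: the counter needs enough steps to start.
    if cfg.max_train_step > 0 and cfg.max_train_step < cfg.flop_counter_start:
        raise ValueError(
            f"flop counter requires at least {cfg.flop_counter_start} train steps")
    with FakeFlopMeasure(rank=local_rank) as flop_counter:
        yield flop_counter

class Cfg:
    # Hypothetical minimal config; 0 means "no step limit", matching the check above.
    max_train_step = 0
    flop_counter_start = 3

with throughput_measure_context(Cfg(), local_rank=0) as measure_context:
    for total_train_steps in range(1, 6):
        if total_train_steps == Cfg.flop_counter_start:
            measure_context.start_counting()  # counting begins after the warm-up steps
```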
