While reproducing the NLU task, I noticed that the weights of the entire model were saved after fine-tuning instead of only the LoRA weights. I ran the following command:
```bash
export num_gpus=4
export CUBLAS_WORKSPACE_CONFIG=":16:8"  # https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility
export PYTHONHASHSEED=0
export output_dir="./mnli"
python -m torch.distributed.launch --nproc_per_node=$num_gpus \
  examples/text-classification/run_glue.py \
  --model_name_or_path roberta-base \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 5e-4 \
  --num_train_epochs 30 \
  --output_dir $output_dir/model \
  --overwrite_output_dir \
  --logging_steps 10 \
  --logging_dir $output_dir/log \
  --evaluation_strategy epoch \
  --save_strategy epoch \
  --warmup_ratio 0.06 \
  --apply_lora \
  --lora_r 8 \
  --lora_alpha 8 \
  --seed 0 \
  --weight_decay 0.1
```
When I checked the checkpoints saved during training, I found that the whole model's weights are saved rather than only the LoRA weights. How can I solve this issue?
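In case it helps anyone hitting the same behavior: the Hugging Face Trainer used by `run_glue.py` serializes the full model state dict at each save, so this is expected unless the saving logic is overridden. One workaround is to filter the LoRA parameters out of a saved checkpoint by name. Below is a minimal sketch, assuming the checkpoint file is `pytorch_model.bin` under the output directory (adjust the paths to your `checkpoint-*` layout) and that the injected modules follow loralib's usual `lora_A`/`lora_B` naming:

```python
import torch

# Path is an assumption; adjust to your --output_dir / checkpoint-* layout.
ckpt_path = "mnli/model/pytorch_model.bin"

# Load the full checkpoint written by the Trainer.
full_state = torch.load(ckpt_path, map_location="cpu")

# Keep only the LoRA tensors; loralib names its injected weights
# "lora_A" and "lora_B", so matching "lora_" captures them.
lora_state = {k: v for k, v in full_state.items() if "lora_" in k}

torch.save(lora_state, "mnli/model/lora_only.bin")
print(f"Kept {len(lora_state)} of {len(full_state)} tensors")
```

If the fine-tuned model is still in memory, `loralib` also exposes a helper for exactly this: `torch.save(lora.lora_state_dict(model), path)` (with `import loralib as lora`) returns only the LoRA parameters instead of the full state dict.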