Conversation

@HaohanTsao
Collaborator

I followed the maintainer's suggestion to fix the sample in the README and tested it.

ref: https://github.com/huggingface/peft/pull/2851#discussion_r2470112246

@HaohanTsao
Collaborator Author

This is the command I used to test the issues mentioned above: the argument errors and quantization not working.

python examples/gralora_finetuning/gralora_finetuning.py \
--base_model hf-internal-testing/tiny-random-LlamaForCausalLM \
--output_dir ./test_output \
--quantize \
--num_epochs 1 \
--batch_size 1 \
--save_step 1000 \
--eval_step 5 \
--device cuda

The logging part doesn't cause problems in my tests, although I noticed that no log output appeared until I installed tensorboard.
Also, after adding --quantize, I initially got an error because bitsandbytes wasn't installed; installing it fixed the problem.
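Since both failures above were silent or late (no logs without tensorboard, a runtime error without bitsandbytes), one option is to check optional dependencies up front. This is not part of the PR, just a minimal sketch; the `require_package` helper and the usage lines are hypothetical:

```python
import importlib.util


def require_package(name: str, reason: str) -> None:
    """Fail fast with a clear message instead of a deep traceback later."""
    # find_spec returns None when the package is not importable.
    if importlib.util.find_spec(name) is None:
        raise ImportError(
            f"The optional dependency '{name}' is required {reason}. "
            f"Install it with: pip install {name}"
        )


# Hypothetical usage near the top of gralora_finetuning.py:
# if args.quantize:
#     require_package("bitsandbytes", "for quantized loading (--quantize)")
# require_package("tensorboard", "to emit training logs")
```

This surfaces the missing package before any model loading starts, which is cheaper than discovering it mid-run.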

@yeonjoon-jung01 yeonjoon-jung01 merged commit a1c944a into gralora_support Oct 29, 2025
13 of 14 checks passed
