8-bit LoRA quantized fine-tuning fails, but FP16 works. It fails whether I call the quantize method or set load_in_8bit=True in AutoPeftModelForCausalLM.from_pretrained. #949
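For reference, a minimal sketch of the two loading paths described above. The adapter path is a placeholder, and the extra kwargs are simply forwarded by peft to the base model's from_pretrained:

```python
import torch
from peft import AutoPeftModelForCausalLM

# Hypothetical adapter directory containing the LoRA checkpoint.
adapter_path = "output/lora-checkpoint"

# 8-bit load (the path reported to fail in this issue).
model_8bit = AutoPeftModelForCausalLM.from_pretrained(
    adapter_path,
    load_in_8bit=True,   # forwarded to the base model's from_pretrained
    device_map="auto",
)

# FP16 load (the path reported to work).
model_fp16 = AutoPeftModelForCausalLM.from_pretrained(
    adapter_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
```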
Replies: 1 comment 1 reply
The current peft-based version has not been tested with 8-bit quantized LoRA fine-tuning; only fp16 and bf16 are supported.
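For anyone landing here, a minimal sketch of the fp16/bf16 LoRA path that the reply says is supported. The model name, target_modules, and LoRA hyperparameters below are placeholders and need to match your actual model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "your-base-model"  # placeholder

# Load the base model in half precision (bf16 also works: torch.bfloat16).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Attach LoRA adapters; target_modules depends on the model architecture.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```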