Full finetune?
#1284
Hi - I've had success fine-tuning with LoRA and DoRA, but I tried full fine-tuning for fun and it blew up with:

RuntimeError: QuantizedMatmul::vjp no gradient wrt the quantized matrix yet.

Is this just because I don't have enough memory on my machine?

Thanks!
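For context, here is a minimal sketch of what full fine-tuning attempts under the hood and why it fails on quantized weights. It assumes MLX's `mlx.nn.QuantizedLinear`, `Module.unfreeze`, and `nn.value_and_grad` APIs as currently documented; it is illustrative, not code from this discussion:

```python
import mlx.core as mx
import mlx.nn as nn

# QuantizedLinear stores its weight in quantized form (4-bit by default)
# and freezes its parameters, so LoRA/DoRA training never differentiates
# through them.
layer = nn.QuantizedLinear(64, 64)

# Full fine-tuning unfreezes every parameter, including the quantized weight.
layer.unfreeze()

x = mx.random.normal((1, 64))

def loss_fn(model):
    return (model(x) ** 2).sum()

# Asking for a gradient with respect to the quantized weight matrix is what
# raises: "QuantizedMatmul::vjp no gradient wrt the quantized matrix yet."
loss, grads = nn.value_and_grad(layer, loss_fn)(layer)
```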
You can't do full fine-tuning on a quantized model. You can use a non-quantized model for that (e.g. fp16, bf16, or fp32).
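In practice that means pointing the training script at fp16/bf16 weights rather than a quantized checkpoint. As a hedged sketch, assuming a recent mlx-lm (exact flag names can vary by version): converting with `mlx_lm.convert --hf-path <repo>` while omitting the `-q`/`--quantize` flag produces non-quantized weights, and newer versions of the LoRA script accept `--fine-tune-type full` to switch from LoRA/DoRA to full-parameter training. The memory cost is much higher than LoRA/DoRA, but the error above is about the quantized weights, not about running out of memory.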