The documentation states that "AdaLoRA has an `update_and_allocate()` method that should be called at each training step to update the parameter budget and mask, otherwise the adaptation step is not performed." I fine-tuned a model using AdaLoRA and `Trainer` without performing the `update_and_allocate()` step, but the resulting model is still quite useful. Now I'm wondering: what should I call this, since it is not a proper AdaLoRA-trained model? Is it just LoRA that targets all linear layers and uses `init_r` as its `r`? Or is it something else?
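For context, here is a minimal sketch of the training-loop step the documentation describes, following the `update_and_allocate()` pattern from the PEFT docs. The model name, hyperparameter values, and toy one-batch dataloader are placeholder assumptions, not the setup from the question:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import AdaLoraConfig, get_peft_model

# Placeholder base model; substitute your own.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tok = AutoTokenizer.from_pretrained("facebook/opt-125m")

config = AdaLoraConfig(
    init_r=12,           # starting rank per adapted layer (default)
    target_r=8,          # average rank budget after pruning (default)
    tinit=200,           # warmup steps before pruning starts
    tfinal=1000,         # final steps with a fixed budget
    total_step=10_000,   # should match the planned number of training steps
    task_type="CAUSAL_LM",  # assumed task type
)
model = get_peft_model(base_model, config)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Toy one-batch "dataloader" so the sketch runs end to end.
enc = tok("Hello world", return_tensors="pt")
dataloader = [{**enc, "labels": enc["input_ids"]}]

for step, batch in enumerate(dataloader):
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    # The step the docs refer to: updates importance scores and prunes or
    # reallocates ranks via masking. Skipping it leaves every adapted
    # module at init_r, with no adaptive budget allocation.
    model.base_model.update_and_allocate(step)
    optimizer.zero_grad()
```

Note that `Trainer` does not make this call automatically, so when using `Trainer` it has to be wired in yourself, e.g. via a callback or by overriding the training step.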
Replies: 1 comment

I'm not sure if this is exactly the same as a LoRA model, as AdaLoRA does a few other things differently. But I would suggest training a LoRA model with the same settings as you just tried; I would expect it to perform equally well.
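If you want to test that suggestion, a rough sketch of the plain-LoRA counterpart might look like the following. The model name, `lora_alpha`, and the "all linear layers" targeting are assumptions; replace them with whatever your AdaLoRA config actually used:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; substitute your own.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Plain LoRA mirroring the AdaLoRA run described above: the rank is
# pinned to AdaLoRA's init_r, since no rank reallocation ever happened.
lora_config = LoraConfig(
    r=12,                         # AdaLoRA's default init_r; use your value
    lora_alpha=32,                # assumed; reuse your AdaLoRA setting
    target_modules="all-linear",  # assumed; match your AdaLoRA target_modules
    lora_dropout=0.0,
    task_type="CAUSAL_LM",        # assumed task type
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # compare against the AdaLoRA run
```

Even with matching hyperparameters the two are not strictly identical, which is presumably what the reply alludes to: AdaLoRA parametrizes the weight update as an SVD-like product with an orthogonality regularizer, rather than LoRA's plain low-rank product.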