Got Error on Fine-tuning TrOCR with LoRA #1311
bustamiyusoef asked this question in Q&A · Unanswered
I'm fine-tuning TrOCR with LoRA; here's how I do it.

I got:

```
trainable params: 294912 || all params: 61891584 || trainable %: 0.4764977415992456
```

so it seems LoRA was successfully injected into the model.
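For context, a minimal sketch of what this kind of setup looks like with the PEFT API; the checkpoint and LoRA hyperparameters below are assumptions, not the code from the post (the ~62M total parameters in the printout are consistent with a trocr-small checkpoint, and r=16 over the decoder's q/v projections lands near 294912 trainable parameters):

```python
# Hedged sketch, not the original code: LoRA injection into TrOCR via PEFT.
from transformers import VisionEncoderDecoderModel
from peft import LoraConfig, get_peft_model

# Assumed checkpoint; chosen only because its size matches the printout above.
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-small-handwritten")

lora_config = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling
    target_modules=["q_proj", "v_proj"],  # attention projections in the TrOCR decoder
    lora_dropout=0.1,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# prints trainable/total parameter counts, as in the output above
```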
When I trained the model this way:

I got an error like this:

Can anyone help?
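As a point of reference, a common way to train a PEFT-wrapped TrOCR model is Hugging Face's Seq2SeqTrainer; the sketch below is generic, and everything in it (output path, dataset, arguments) is assumed rather than taken from the post:

```python
# Generic training sketch, assuming the PEFT-wrapped `model` from above and a
# hypothetical `train_dataset` yielding {"pixel_values": ..., "labels": ...}.
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./trocr-lora",       # hypothetical path
    per_device_train_batch_size=8,   # assumed batch size
    num_train_epochs=3,
    logging_steps=50,
    remove_unused_columns=False,     # the PEFT wrapper changes the forward
                                     # signature, so keep all dataset columns
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```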
Replies: 1 comment
If your hardware allows it, could you check whether you get the same error without PEFT, i.e. by doing a full fine-tune of the base model? You could try a lower batch size, SGD instead of Adam, etc. to save memory if necessary. The reason I ask is that this looks like an issue with the dataset, not with PEFT.
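In code, that sanity check might look like the sketch below, assuming the same (guessed) checkpoint as above: a full fine-tune without PEFT, with a small batch and plain SGD to keep memory in check:

```python
# Hedged sketch of the suggested sanity check: full fine-tune, no PEFT.
import torch
from transformers import VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained(
    "microsoft/trocr-small-handwritten"   # assumed checkpoint, as above
)
# SGD keeps no per-parameter moment estimates, so it needs far less optimizer
# memory than Adam when all ~62M parameters are trainable.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

model.train()
for batch in train_dataloader:            # hypothetical DataLoader; use a small batch size (e.g. 2)
    outputs = model(
        pixel_values=batch["pixel_values"],
        labels=batch["labels"],
    )
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

If the same error shows up here, the data pipeline is the likely culprit rather than PEFT.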