Failing to merge LoRA weights for a custom model #2581
Boltzmachine asked this question in Q&A · Unanswered
I have a custom model. I used PEFT's LoRA to train it, and it was saved automatically by Hugging Face's Trainer. When I load it with `AutoModel.from_pretrained`, I find that the model's type is my custom model class (not any of the PEFT classes), yet the linear layers have been replaced by `lora.Linear`.
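Concretely, the loading and inspection step looks roughly like the following sketch (the checkpoint path is hypothetical):

```python
from transformers import AutoModel

# Load the checkpoint written by Trainer; a custom architecture may also
# need trust_remote_code=True. The path is hypothetical.
model = AutoModel.from_pretrained("path/to/trainer_output")

# Per the report above, this prints the custom model class, not a PEFT class.
print(type(model))

# List the modules that were swapped out for LoRA layers.
for name, module in model.named_modules():
    if "lora" in type(module).__module__:
        print(name, type(module))  # e.g. a peft.tuners.lora Linear layer
```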
However, inference with such a model is slow, so I want to merge the LoRA weights. I tried the following methods:

1. `model.merge_and_unload()`. Failed, because the loaded model is not a PEFT model; it is an instance of my custom model class, so it does not have this method (the usual workaround is sketched after this list).
2. `AutoPeftModelForCausalLM.from_pretrained(path)`. Failed with an error message.
3. `model = AutoModelForCausalLM.from_pretrained(original_path_of_the_base_model)` followed by `model.load_adapter(...)`. Failed; the result is the same as in 1.
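For reference, the usual merge workflow reloads the base model and wraps it in a `PeftModel` first, so that `merge_and_unload` is available. A minimal sketch, assuming the adapter was saved with PEFT's `save_pretrained`; all paths are hypothetical:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Re-create the base model from its original checkpoint.
base_model = AutoModelForCausalLM.from_pretrained("path/to/base_model")

# Wrap it in a PeftModel so that merge_and_unload is available.
model = PeftModel.from_pretrained(base_model, "path/to/lora_adapter")

# Fold the LoRA deltas into the base weights and strip the adapter modules.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged_model")
```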
Replies: 2 comments · 1 reply

- Could you please provide a bit more code so that we can better understand what's going on here? No need for the full training code, but how you initialize the model before training, how you save it, how you load it, and how you try to merge it.
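A hedged sketch of the four pieces of information the reply asks for, with hypothetical names, paths, and target modules:

```python
from transformers import AutoModel
from peft import LoraConfig, PeftModel, get_peft_model

# 1. Initialization before training: wrap the base model with a LoRA config
#    (target_modules here is a placeholder and depends on the architecture).
base_model = AutoModel.from_pretrained("path/to/base_model")
peft_model = get_peft_model(
    base_model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"]),
)

# 2. Saving: Trainer calls save_pretrained on the wrapped model, which by
#    default writes only the adapter weights plus adapter_config.json.
peft_model.save_pretrained("path/to/lora_adapter")

# 3. Loading: reload the base model first, then attach the saved adapter.
base_model = AutoModel.from_pretrained("path/to/base_model")
loaded = PeftModel.from_pretrained(base_model, "path/to/lora_adapter")

# 4. Merging: merge_and_unload is defined on the PeftModel wrapper.
merged = loaded.merge_and_unload()
```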