How to quantize Wan 2.2 VACE after loading a LoRA? #12953
Unanswered
chaowenguo asked this question in Q&A

Normally I can save the quantized model this way. But now I want to merge the LoRA with the quantized model and then save the model with the LoRA merged in. How?
@yiyixuxu @DN6
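(The code the question refers to is not included in the thread. For context, here is a minimal sketch of what a typical quantize-on-load-then-save flow looks like in diffusers with bitsandbytes; the checkpoint path, subfolder, and output directory are placeholders, not taken from the thread.)

```python
# Illustrative sketch only: the checkpoint path and subfolder are placeholders.
import torch
from diffusers import BitsAndBytesConfig, WanVACETransformer3DModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize the transformer while loading, then serialize the quantized weights.
transformer = WanVACETransformer3DModel.from_pretrained(
    "path/or/repo-of-wan2.2-vace",  # placeholder checkpoint
    subfolder="transformer",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
transformer.save_pretrained("wan2.2-vace-transformer-nf4")
```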
Replies: 2 comments
- I think you need to fuse the LoRAs and then save. Possibly you only need to save the two transformers as well.
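(A sketch of that suggestion, assuming a diffusers-style Wan 2.2 VACE checkpoint that exposes the two experts as `transformer` and `transformer_2`, as the reply implies; the pipeline class choice, checkpoint path, and LoRA path are assumptions. The key idea is to fuse in full precision first, since merging LoRA deltas directly into already-quantized weights is not reliably supported, and then quantize the fused transformers.)

```python
# A sketch under assumptions: WanVACEPipeline with transformer/transformer_2,
# placeholder checkpoint and LoRA paths.
import torch
from diffusers import BitsAndBytesConfig, WanVACEPipeline, WanVACETransformer3DModel

# 1. Load in full precision so the LoRA can be fused into the base weights.
pipe = WanVACEPipeline.from_pretrained(
    "path/or/repo-of-wan2.2-vace",  # placeholder checkpoint
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("path/to/lora")  # placeholder LoRA
pipe.fuse_lora()            # merge the LoRA deltas into the base weights
pipe.unload_lora_weights()  # drop the now-redundant adapter modules

# 2. Save just the two (now LoRA-fused) transformers.
pipe.transformer.save_pretrained("fused/transformer")
pipe.transformer_2.save_pretrained("fused/transformer_2")

# 3. Reload each fused transformer with quantization and serialize the result.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
for src, dst in [
    ("fused/transformer", "fused-nf4/transformer"),
    ("fused/transformer_2", "fused-nf4/transformer_2"),
]:
    quantized = WanVACETransformer3DModel.from_pretrained(
        src, quantization_config=bnb_config, torch_dtype=torch.bfloat16
    )
    quantized.save_pretrained(dst)
```

(Note that step 1 loads both experts in full precision, which needs considerable memory; serializing 4-bit bitsandbytes checkpoints also requires reasonably recent diffusers and bitsandbytes versions.)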
- This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.