Add Wan2.2 VACE - Fun #12324
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
thanks @linoytsaban !
@bot /style
Style bot fixed some files and pushed the changes.
Could we check whether the failing test is introduced by this PR?
Very awesome!
@sayakpaul @yiyixuxu I think the current failing test is not related.
Indeed. The failure I pointed out has now gone 👍 Thanks for the work, Linoy!
@linoytsaban Does this support Masked V2V?
@linoytsaban I noticed that using the lightx2v LoRA causes a lot of warnings about mismatched layers in the console, and it also produces much worse results than yours. Maybe it's the wrong LoRA link?
Thank you for your hard work on this! I'm wondering if this model supports multi-GPU inference. The reason I ask is that I currently have 8 RTX 4090 graphics cards available, and using a single 4090 leads to an out-of-memory (OOM) error.
@00Neil we don't yet support exotic forms of parallelism within the library; #11941 is in the works. In the meantime, we have some guidance on reducing memory consumption and on other speedups the library supports:
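As a stopgap on a single GPU, the standard diffusers memory-saving toggles often avoid the OOM at the cost of speed. A minimal sketch, assuming the Wan2.2 VACE-Fun checkpoint loads through `WanVACEPipeline` the same way the Wan2.1 VACE checkpoints do (that routing is an assumption, not confirmed by this thread):

```python
import torch
from diffusers import WanVACEPipeline

# Assumption: the diffusers-format Wan2.2 VACE-Fun repo loads via
# WanVACEPipeline, as the Wan2.1 VACE checkpoints do.
pipe = WanVACEPipeline.from_pretrained(
    "linoyts/Wan2.2-VACE-Fun-14B-diffusers",
    torch_dtype=torch.bfloat16,
)

# Move submodules to the GPU one at a time instead of keeping the whole
# pipeline resident, trading throughput for a much lower VRAM peak.
pipe.enable_model_cpu_offload()

# Decode the VAE output in tiles to cap peak activation memory.
pipe.vae.enable_tiling()
```

`enable_sequential_cpu_offload()` is an even more aggressive (and slower) alternative if model-level offloading still runs out of memory.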
@sayakpaul @linoytsaban MV2V was just committed upstream:
https://huggingface.co/alibaba-pai/Wan2.2-VACE-Fun-A14B
diffusers format: https://huggingface.co/linoyts/Wan2.2-VACE-Fun-14B-diffusers
Example with Reference(s)-to-Video:
Notes:
`boundary_ratio` is set to 0.875 by default; I didn't experiment with the values to use with the fast inference LoRA.
output_video-6.mp4
output_video-8.mp4
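To make the `boundary_ratio` note concrete: in the Wan2.2 two-expert design, the boundary timestep is `boundary_ratio * num_train_timesteps`, and denoising steps at or above it go to the high-noise transformer while later steps go to the low-noise one. A minimal sketch of that routing (the function name and constant are illustrative, not library API):

```python
# Sketch of how boundary_ratio selects between the two Wan2.2 experts.
# Assumes the convention that the boundary timestep is
# boundary_ratio * num_train_timesteps, with timesteps at or above the
# boundary handled by the high-noise transformer.

NUM_TRAIN_TIMESTEPS = 1000  # length of the training noise schedule


def select_expert(timestep: int, boundary_ratio: float = 0.875) -> str:
    """Return which expert handles a given denoising timestep."""
    boundary = boundary_ratio * NUM_TRAIN_TIMESTEPS
    return "high_noise" if timestep >= boundary else "low_noise"


print(select_expert(999))  # early, noisy step -> high_noise
print(select_expert(500))  # later, cleaner step -> low_noise
```

With the default 0.875, only the first ~12.5% of the noise range is handled by the high-noise expert, which is why the value interacts with fast-inference LoRAs that compress the step count.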