Commit 3eb0ae6: Wan2.2 examples

Parent: 9997687
13 files changed: +2913, -1 lines

README.md

Lines changed: 3 additions & 1 deletion

```diff
@@ -74,7 +74,9 @@ Here are some more advanced examples:
 
 [Nvidia Cosmos Predict2](cosmos_predict2)
 
-[Wan](wan)
+[Wan 2.1](wan)
+
+[Wan 2.2](wan22)
 
 [Audio Models](audio)
```

chroma/fennec_girl_flowers.png (1.25 MB, binary)

chroma/fennec_girl_hug.png (1.09 MB, binary)

wan/README.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -2,6 +2,8 @@
 
 [Wan 2.1](https://github.com/Wan-Video/Wan2.1) is a family of video models.
 
+For Wan 2.2 see: [Wan 2.2](../wan22)
+
 ## Files to Download
 
 You will first need:
```

wan22/README.md (new file)

Lines changed: 70 additions & 0 deletions

# Wan 2.2 Models

[Wan 2.2](https://github.com/Wan-Video/Wan2.2) is a family of video models and the successor to [Wan 2.1](../wan).

Wan 2.2 was initially released with three models: a 5B model that can do both text-to-video and image-to-video, and two 14B models, one for text-to-video and the other for image-to-video.

See also the [Comfy Docs Wan 2.2 page](https://docs.comfy.org/tutorials/video/wan/wan2_2) for more workflow examples.

## Files to Download

You will first need:

#### Text encoder and VAE:

[umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders) goes in: ComfyUI/models/text_encoders/

Needed for the 14B models: [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors) goes in: ComfyUI/models/vae/

Needed for the 5B model (NEW): [wan2.2_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan2.2_vae.safetensors) goes in: ComfyUI/models/vae/

#### Video Models

The diffusion models can be found [here](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/diffusion_models).

These files go in: ComfyUI/models/diffusion_models/
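Since the links above all follow the same Hugging Face repository layout, the direct download URLs can be derived mechanically from the file paths. A minimal sketch (the `download_url` helper is ours for illustration, not part of ComfyUI or any library named here; the `resolve` URL pattern is standard Hugging Face):

```python
# Map a path under split_files/ in the Comfy-Org repackaged repo to a
# direct download URL, using Hugging Face's standard "resolve" URL scheme.
REPO = "Comfy-Org/Wan_2.2_ComfyUI_Repackaged"

def download_url(relpath: str) -> str:
    """relpath is relative to split_files/, e.g. 'vae/wan2.2_vae.safetensors'."""
    return f"https://huggingface.co/{REPO}/resolve/main/split_files/{relpath}"

print(download_url("vae/wan2.2_vae.safetensors"))
```

A URL built this way can be passed to `wget -c` or `curl -LO` to fetch the file into the matching ComfyUI/models/ subdirectory.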
## Workflows

### 5B Model

This workflow requires the [wan2.2_ti2v_5B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors) file (put it in: ComfyUI/models/diffusion_models/).

Make sure you have the [wan2.2 VAE](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan2.2_vae.safetensors) (it goes in: ComfyUI/models/vae/).

#### Text to Video

![Example](text_to_video_wan22_5B.webp)

[Workflow in JSON format](text_to_video_wan22_5B.json)

#### Image to Video

![Example](image_to_video_wan22_5B.webp)

[Workflow in JSON format](image_to_video_wan22_5B.json)

You can find the input image [here](../chroma/fennec_girl_hug.png).

### 14B Models

Make sure you have the [wan2.1 VAE](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors) (it goes in: ComfyUI/models/vae/).

#### Text to Video

This workflow requires both the [wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors) and the [wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors) files (put them in: ComfyUI/models/diffusion_models/).

![Example](text_to_video_wan22_14B.webp)

[Workflow in JSON format](text_to_video_wan22_14B.json)

#### Image to Video

This workflow requires both the [wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors) and the [wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors) files (put them in: ComfyUI/models/diffusion_models/).

![Example](image_to_video_wan22_14B.webp)

[Workflow in JSON format](image_to_video_wan22_14B.json)

You can find the input image [here](../chroma/fennec_girl_flowers.png).
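Since the guide spreads the required files across three directories, it is easy to miss one. A short sketch that checks a ComfyUI install for the files named above (the `missing_files` helper is hypothetical, not part of ComfyUI; file names are taken from this README):

```python
from pathlib import Path

# Expected model files per ComfyUI/models/ subdirectory, as listed in this guide.
REQUIRED = {
    "text_encoders": ["umt5_xxl_fp8_e4m3fn_scaled.safetensors"],
    "vae": ["wan_2.1_vae.safetensors", "wan2.2_vae.safetensors"],
    "diffusion_models": [
        "wan2.2_ti2v_5B_fp16.safetensors",
        "wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors",
        "wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors",
        "wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors",
        "wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors",
    ],
}

def missing_files(comfy_root: str) -> list[str]:
    """Return subdir/name paths of expected model files not found on disk."""
    root = Path(comfy_root) / "models"
    return [
        f"{subdir}/{name}"
        for subdir, names in REQUIRED.items()
        for name in names
        if not (root / subdir / name).is_file()
    ]
```

Running `missing_files("ComfyUI")` on a fresh install lists everything still to download; only the files for the workflows you actually plan to run are strictly needed.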
