# Wan 2.2 Models

[Wan 2.2](https://github.com/Wan-Video/Wan2.2) is a family of video models and the successor to [Wan 2.1](../wan).

Wan 2.2 was initially released with three models: a 5B model that can do both text to video and image to video, and two 14B models, one for text to video and the other for image to video.

See also the [Comfy Docs Wan 2.2 page](https://docs.comfy.org/tutorials/video/wan/wan2_2) for more workflow examples.

## Files to Download

You will first need:

#### Text Encoder and VAE

[umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders) goes in: ComfyUI/models/text_encoders/

Needed for the 14B models: [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors) goes in: ComfyUI/models/vae/

Needed for the 5B model (NEW): [wan2.2_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan2.2_vae.safetensors) goes in: ComfyUI/models/vae/

#### Video Models

The diffusion models can be found [here](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/diffusion_models).

These files go in: ComfyUI/models/diffusion_models/

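If you prefer scripting the downloads, the placement rules above can be captured in a short helper. This is a sketch using the `huggingface_hub` package (an assumption, not something the workflows require); the repo id and file paths come from the links above.

```python
# Sketch: fetch the shared Wan 2.2 files into the right ComfyUI folders.
# Assumes `pip install huggingface_hub`; repo id and file paths come from
# the download links on this page.
import shutil
from pathlib import Path

REPO_ID = "Comfy-Org/Wan_2.2_ComfyUI_Repackaged"

# path inside the repo -> ComfyUI models subfolder it belongs in
PLACEMENT = {
    "split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors": "text_encoders",
    "split_files/vae/wan_2.1_vae.safetensors": "vae",  # needed for the 14B models
    "split_files/vae/wan2.2_vae.safetensors": "vae",   # needed for the 5B model
}

def target_dir(comfyui_root, repo_path):
    """Return the ComfyUI folder the given repo file should land in."""
    return Path(comfyui_root) / "models" / PLACEMENT[repo_path]

def download_all(comfyui_root):
    # Imported here so target_dir works even without the package installed.
    from huggingface_hub import hf_hub_download
    for repo_path in PLACEMENT:
        cached = hf_hub_download(repo_id=REPO_ID, filename=repo_path)
        dest = target_dir(comfyui_root, repo_path)
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(cached, dest / Path(repo_path).name)
```

The diffusion model files for the workflows below can be added to `PLACEMENT` the same way, mapped to `diffusion_models`.
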
## Workflows

### 5B Model

This workflow requires the [wan2.2_ti2v_5B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors) file (put it in: ComfyUI/models/diffusion_models/).

Make sure you have the [wan2.2 VAE](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan2.2_vae.safetensors) (it goes in: ComfyUI/models/vae/).

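Before loading the 5B workflows, it can help to check that the required files are actually in place. A minimal sketch (file names come from the notes above; the root path is wherever your ComfyUI install lives):

```python
# Sanity check (sketch): list which 5B prerequisites are missing from a
# ComfyUI install. File names come from the download notes above.
from pathlib import Path

REQUIRED_5B = [
    "models/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors",
    "models/vae/wan2.2_vae.safetensors",
    "models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors",
]

def missing_files(comfyui_root):
    """Return the required 5B files not present under comfyui_root."""
    root = Path(comfyui_root)
    return [p for p in REQUIRED_5B if not (root / p).is_file()]
```

Running `missing_files("/path/to/ComfyUI")` on a fresh install names everything you still need to download.
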
#### Text to Video

[Workflow in JSON format](text_to_video_wan22_5B.json)

#### Image to Video

[Workflow in JSON format](image_to_video_wan22_5B.json)

You can find the input image [here](../chroma/fennec_girl_hug.png).

### 14B Model

Make sure you have the [wan2.1 VAE](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors) (it goes in: ComfyUI/models/vae/).

#### Text to Video

This workflow requires both the [wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors) and [wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors) files (put them in: ComfyUI/models/diffusion_models/).

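The 14B workflows use the two models in sequence: the high noise model handles the early, noisier denoising steps, then the low noise model takes over to finish the video. A rough sketch of that split (the total step count and the switch point here are illustrative assumptions, not values taken from the workflow file):

```python
# Sketch of the two-stage sampling used by the 14B workflows: the high noise
# model denoises the early steps, then the low noise model takes over.
# TOTAL_STEPS and SWITCH_AT are illustrative assumptions, not workflow values.
TOTAL_STEPS = 20
SWITCH_AT = 10

def sampling_plan(total=TOTAL_STEPS, switch=SWITCH_AT):
    """Return (model, step) pairs saying which model runs each step."""
    return [("high_noise" if step < switch else "low_noise", step)
            for step in range(total)]
```

In the ComfyUI workflow this split is expressed as two sampler nodes, one per model, each covering its own range of steps.
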
[Workflow in JSON format](text_to_video_wan22_14B.json)

#### Image to Video

This workflow requires both the [wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors) and [wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/blob/main/split_files/diffusion_models/wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors) files (put them in: ComfyUI/models/diffusion_models/).

[Workflow in JSON format](image_to_video_wan22_14B.json)

You can find the input image [here](../chroma/fennec_girl_flowers.png).