
Commit 7308bc1

committed
add docs.
1 parent 0b0f311 commit 7308bc1

File tree

1 file changed: +10 −0 lines changed

  • docs/source/en/api/pipelines/wan.md


docs/source/en/api/pipelines/wan.md

Lines changed: 10 additions & 0 deletions
@@ -37,6 +37,12 @@ The following Wan models are supported in Diffusers:
 - [Wan 2.1 VACE 1.3B](https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B-diffusers)
 - [Wan 2.1 VACE 14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B-diffusers)
 
+The following Wan 2.2 checkpoints are also supported:
+
+- [Wan 2.2 T2V 14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B-Diffusers)
+- [Wan 2.2 I2V 14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers)
+- [Wan 2.2 TI2V 5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers)
+
 > [!TIP]
 > Click on the Wan2.1 models in the right sidebar for more examples of video generation.
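
The Wan 2.2 checkpoints added above load through the same `WanPipeline.from_pretrained` entry point as the existing Wan 2.1 ones; a minimal sketch for the TI2V 5B repo from the list (the prompt, `num_frames`, and step count are illustrative assumptions, not values from this commit):

```python
# Minimal sketch: load one of the newly documented Wan 2.2 checkpoints.
# The repo id comes from the list above; generation arguments are illustrative.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-TI2V-5B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

video = pipe(
    prompt="A cat walks through tall grass at sunset, cinematic lighting",
    num_frames=81,
    num_inference_steps=40,
).frames[0]
export_to_video(video, "wan22_ti2v_5b.mp4", fps=16)
```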
@@ -327,6 +333,10 @@ The general rule of thumb to keep in mind when preparing inputs for the VACE pip
 
 - Try lower `shift` values (`2.0` to `5.0`) for lower resolution videos and higher `shift` values (`7.0` to `12.0`) for higher resolution images.
 
+## Using LightX2V LoRAs
+
+Wan 2.1 and 2.2 support using [LightX2V LoRAs](https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v) to speed up inference. Using them on Wan 2.2 is slightly more involved. Refer to [this code snippet](https://github.com/huggingface/diffusers/pull/12040#issuecomment-3144185272) to learn more.
+
 ## WanPipeline
 
 [[autodoc]] WanPipeline
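
The `shift` guidance in the context lines above corresponds to the scheduler's `flow_shift` argument; a sketch of applying it, assuming the VACE 1.3B checkpoint listed in the first hunk (5.0 is just one value from the suggested lower-resolution range):

```python
# Sketch: apply a `shift` value by rebuilding the scheduler with `flow_shift`.
# Checkpoint id is from the list in the first hunk; 5.0 is one value from the
# suggested lower-resolution range.
import torch
from diffusers import UniPCMultistepScheduler, WanVACEPipeline

pipe = WanVACEPipeline.from_pretrained(
    "Wan-AI/Wan2.1-VACE-1.3B-diffusers", torch_dtype=torch.bfloat16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, flow_shift=5.0
)
```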

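For the Wan 2.1 case mentioned in the new LightX2V section, the usual pattern is to load the LoRA with the standard loader and run with far fewer steps; a hedged sketch using the standard Wan 2.1 T2V 14B repo id, which is not part of this diff (the `weight_name` is a placeholder for whichever file is chosen from the linked folder, and the low step count with `guidance_scale=1.0` reflects the common setup for step-distilled LoRAs rather than anything stated in this commit):

```python
# Hedged sketch (Wan 2.1): load a LightX2V LoRA and run with few steps.
# weight_name is a placeholder; pick an actual file from the Lightx2v folder
# linked in the doc. Four steps and guidance_scale=1.0 reflect the common
# step-distill setup, not values from this commit.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

pipe.load_lora_weights(
    "Kijai/WanVideo_comfy",
    weight_name="Lightx2v/<chosen-lora-file>.safetensors",  # placeholder
)

video = pipe(
    prompt="A cat walks through tall grass at sunset",
    num_inference_steps=4,
    guidance_scale=1.0,
).frames[0]
export_to_video(video, "wan21_lightx2v.mp4", fps=16)
```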