diff --git a/docs.json b/docs.json
index a2d390ba0..344adab96 100644
--- a/docs.json
+++ b/docs.json
@@ -157,6 +157,7 @@
             "group": "Wan Video",
             "pages": [
               "tutorials/video/wan/wan2_2",
+              "tutorials/video/wan/wan2-2-s2v",
               "tutorials/video/wan/wan2-2-fun-inp",
               "tutorials/video/wan/wan2-2-fun-control",
               "tutorials/video/wan/wan2-2-fun-camera",
@@ -713,6 +714,7 @@
             "group": "万相视频",
             "pages": [
               "zh-CN/tutorials/video/wan/wan2_2",
+              "zh-CN/tutorials/video/wan/wan2-2-s2v",
               "zh-CN/tutorials/video/wan/wan2-2-fun-inp",
               "zh-CN/tutorials/video/wan/wan2-2-fun-control",
               "zh-CN/tutorials/video/wan/wan2-2-fun-camera",
diff --git a/images/tutorial/video/wan/wan_2.2_14b_s2v.jpg b/images/tutorial/video/wan/wan_2.2_14b_s2v.jpg
new file mode 100644
index 000000000..c1c5957ed
Binary files /dev/null and b/images/tutorial/video/wan/wan_2.2_14b_s2v.jpg differ
diff --git a/tutorials/video/wan/wan2-2-s2v.mdx b/tutorials/video/wan/wan2-2-s2v.mdx
new file mode 100644
index 000000000..cdbc02c4e
--- /dev/null
+++ b/tutorials/video/wan/wan2-2-s2v.mdx
@@ -0,0 +1,123 @@
+---
+title: Wan2.2-S2V Audio-Driven Video Generation ComfyUI Native Workflow Example
+description: This is a native workflow example for Wan2.2-S2V audio-driven video generation in ComfyUI.
+sidebarTitle: "Wan2.2 S2V"
+---
+
+import UpdateReminder from '/snippets/tutorials/update-reminder.mdx'
+
+We're excited to announce that Wan2.2-S2V, the advanced audio-driven video generation model, is now natively supported in ComfyUI! This powerful AI model transforms static images and audio inputs into dynamic video content, supporting dialogue, singing, performance, and a wide range of creative needs.
+
+**Model Highlights**
+- **Audio-Driven Video Generation**: Transforms static images and audio into synchronized videos
+- **Cinematic-Grade Quality**: Generates film-quality videos with natural expressions and movements
+- **Minute-Level Generation**: Supports long-form video creation
+- **Multi-Format Support**: Works with full-body and half-body characters
+- **Enhanced Motion Control**: Generates actions and environments from text instructions
+
+Wan2.2 S2V Code: [GitHub](https://github.com/aigc-apps/VideoX-Fun)
+Wan2.2 S2V Model: [Hugging Face](https://huggingface.co/Wan-AI/Wan2.2-S2V-14B)
+
+## Wan2.2 S2V ComfyUI Native Workflow
+
+<UpdateReminder/>
+
+### 1. Download Workflow File
+
+Download the following workflow file and drag it into ComfyUI to load the workflow.
+
+Download JSON Workflow
+
+Download the following image and audio as input:
+
+![input](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_s2v/input.jpg)
+
+Download Input Audio
+
+### 2. Model Links
+
+You can find the models in [our repo](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged).
+
+**diffusion_models**
+- [wan2.2_s2v_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_s2v_14B_fp8_scaled.safetensors)
+- [wan2.2_s2v_14B_bf16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_s2v_14B_bf16.safetensors)
+
+**audio_encoders**
+- [wav2vec2_large_english_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/audio_encoders/wav2vec2_large_english_fp16.safetensors)
+
+**vae**
+- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors)
+
+**text_encoders**
+- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors)
+
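+If you prefer to fetch the model files with a script instead of clicking the links above, the following sketch downloads them and places them according to the directory layout shown below. It assumes the `huggingface_hub` Python package is installed and is not part of ComfyUI itself; adjust `ComfyUI/models` to your actual install path.
+
+```python
+import shutil
+from pathlib import Path
+
+from huggingface_hub import hf_hub_download  # pip install huggingface_hub
+
+# target path inside ComfyUI/models -> (Hugging Face repo, file inside the repo)
+MODELS = {
+    "diffusion_models/wan2.2_s2v_14B_fp8_scaled.safetensors":
+        ("Comfy-Org/Wan_2.2_ComfyUI_Repackaged", "split_files/diffusion_models/wan2.2_s2v_14B_fp8_scaled.safetensors"),
+    "audio_encoders/wav2vec2_large_english_fp16.safetensors":
+        ("Comfy-Org/Wan_2.2_ComfyUI_Repackaged", "split_files/audio_encoders/wav2vec2_large_english_fp16.safetensors"),
+    "vae/wan_2.1_vae.safetensors":
+        ("Comfy-Org/Wan_2.2_ComfyUI_Repackaged", "split_files/vae/wan_2.1_vae.safetensors"),
+    "text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors":
+        ("Comfy-Org/Wan_2.1_ComfyUI_repackaged", "split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors"),
+}
+
+models_dir = Path("ComfyUI/models")  # adjust to your ComfyUI installation
+for target, (repo_id, filename) in MODELS.items():
+    cached = hf_hub_download(repo_id=repo_id, filename=filename)  # downloads into the Hugging Face cache
+    dest = models_dir / target
+    dest.parent.mkdir(parents=True, exist_ok=True)  # creates e.g. audio_encoders/ if it does not exist
+    shutil.copy(cached, dest)
+    print("placed", dest)
+```
+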
+```
+ComfyUI/
+├───📂 models/
+│   ├───📂 diffusion_models/
+│   │   ├─── wan2.2_s2v_14B_fp8_scaled.safetensors
+│   │   └─── wan2.2_s2v_14B_bf16.safetensors
+│   ├───📂 text_encoders/
+│   │   └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors
+│   ├───📂 audio_encoders/          # Create this folder if it does not exist
+│   │   └─── wav2vec2_large_english_fp16.safetensors
+│   └───📂 vae/
+│       └─── wan_2.1_vae.safetensors
+```
+
+### 3. Workflow Instructions
+
+![Workflow Instructions](/images/tutorial/video/wan/wan_2.2_14b_s2v.jpg)
+
+#### 3.1 About Lightning LoRA
+
+The Lightning LoRA used in this template (`wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors`) cuts sampling from 20 steps to 4, which greatly reduces generation time. It was not trained specifically for Wan2.2 S2V, so it costs some motion dynamics and quality; if quality matters more than speed, remove it and use the 20-step sampler settings described below.
+
+#### 3.2 About fp8_scaled and bf16 Models
+
+You can find both models [here](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/diffusion_models):
+
+- [wan2.2_s2v_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_s2v_14B_fp8_scaled.safetensors)
+- [wan2.2_s2v_14B_bf16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_s2v_14B_bf16.safetensors)
+
+This template uses `wan2.2_s2v_14B_fp8_scaled.safetensors`, which requires less VRAM, but you can switch to `wan2.2_s2v_14B_bf16.safetensors` for less quality degradation.
+
+#### 3.3 Step-by-Step Operation Instructions
+
+**Step 1: Load Models**
+1. **Load Diffusion Model**: Load `wan2.2_s2v_14B_fp8_scaled.safetensors` or `wan2.2_s2v_14B_bf16.safetensors`
+   - The provided workflow uses `wan2.2_s2v_14B_fp8_scaled.safetensors`, which requires less VRAM
+   - You can switch to `wan2.2_s2v_14B_bf16.safetensors` for less quality degradation
+2. **Load CLIP**: Load `umt5_xxl_fp8_e4m3fn_scaled.safetensors`
+3. **Load VAE**: Load `wan_2.1_vae.safetensors`
+4. **AudioEncoderLoader**: Load `wav2vec2_large_english_fp16.safetensors`
+5. **LoraLoaderModelOnly**: Load `wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors` (Lightning LoRA)
+   - We tested all Wan2.2 Lightning LoRAs. Since none of them were trained specifically for Wan2.2 S2V, many keys do not match, but we include this one because it significantly reduces generation time, and we will continue to optimize this template
+   - Using it causes a noticeable loss of motion dynamics and quality
+   - If the output quality is too poor for your needs, try the original 20-step workflow
+6. **LoadAudio**: Upload the provided audio file or your own audio
+7. **Load Image**: Upload a reference image
+8. **Batch sizes**: Set according to the number of Video S2V Extend subgraph nodes you add
+   - Each Video S2V Extend subgraph adds 77 frames to the final output
+   - For example, if you added 2 Video S2V Extend subgraphs, set the batch size to 3, i.e. the total number of sampling passes
+   - **Chunk Length**: Keep the default value of 77
+9. **Sampler Settings**: Choose the settings based on whether you use the Lightning LoRA
+   - With the 4-step Lightning LoRA: steps: 4, cfg: 1.0
+   - Without the 4-step Lightning LoRA: steps: 20, cfg: 6.0
+10. **Size Settings**: Set the output video dimensions
+11. **Video S2V Extend**: Video extension subgraph nodes. Since the default number of frames per sampling pass is 77 and this is a 16fps model, each extension generates 77 / 16 = 4.8125 seconds of video
+    - You need a little arithmetic to match the number of extension subgraph nodes to the input audio length (see the sketch after this list). For example, a 14s input audio needs 14 x 16 = 224 frames; each video extension covers 77 frames, so 224 / 77 = 2.9, rounded up to 3 Video S2V Extend subgraph nodes
+12. Use Ctrl-Enter or click the Run button to execute the workflow
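+
+The arithmetic in steps 8 and 11 can also be scripted. The sketch below simply mirrors the rules stated above (77 frames per sampling pass, 16fps output, batch size = extension count + 1); the helper name is illustrative and the script is not part of ComfyUI or the workflow.
+
+```python
+import math
+
+FPS = 16            # Wan2.2 S2V outputs video at 16 frames per second
+CHUNK_FRAMES = 77   # default frames per sampling pass (Chunk Length)
+
+def plan_s2v_extends(audio_seconds: float) -> dict:
+    """Estimate Video S2V Extend count and batch size for a given audio length."""
+    total_frames = math.ceil(audio_seconds * FPS)          # step 11: 14 s -> 224 frames
+    extend_nodes = math.ceil(total_frames / CHUNK_FRAMES)  # step 11: 224 / 77 = 2.9 -> 3 extend subgraphs
+    batch_size = extend_nodes + 1                          # step 8: batch size = extend subgraphs + 1
+    output_seconds = batch_size * CHUNK_FRAMES / FPS       # assuming every pass contributes 77 frames
+    return {"extend_nodes": extend_nodes, "batch_size": batch_size, "output_seconds": output_seconds}
+
+print(plan_s2v_extends(14))
+# {'extend_nodes': 3, 'batch_size': 4, 'output_seconds': 19.25}
+```
+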
diff --git a/zh-CN/tutorials/video/wan/wan2-2-s2v.mdx b/zh-CN/tutorials/video/wan/wan2-2-s2v.mdx
new file mode 100644
index 000000000..a45daf724
--- /dev/null
+++ b/zh-CN/tutorials/video/wan/wan2-2-s2v.mdx
@@ -0,0 +1,121 @@
+---
+title: Wan2.2-S2V 音频驱动视频生成 ComfyUI 原生工作流示例
+description: 这是一个基于 ComfyUI 的 Wan2.2-S2V 音频驱动视频生成原生工作流示例。
+sidebarTitle: "Wan2.2 S2V"
+---
+
+我们很高兴地宣布,先进的音频驱动视频生成模型 Wan2.2-S2V 现已获得 ComfyUI 原生支持!这个强大的 AI 模型可以将静态图片和音频输入转化为动态视频内容,支持对话、唱歌、表演等多种创意内容需求。
+
+**模型亮点**
+- **音频驱动视频生成**:将静态图片和音频转化为同步视频
+- **电影级画质**:生成具有自然表情和动作的高质量视频
+- **分钟级生成**:支持长时长视频创作
+- **多格式支持**:适用于全身和半身角色
+- **增强动作控制**:可根据文本指令生成动作和环境
+
+Wan2.2 S2V 代码仓库:[GitHub](https://github.com/aigc-apps/VideoX-Fun)
+Wan2.2 S2V 模型仓库:[Hugging Face](https://huggingface.co/Wan-AI/Wan2.2-S2V-14B)
+
+## Wan2.2 S2V ComfyUI 原生工作流
+
+### 1. 工作流文件下载
+
+下载以下工作流文件并拖入 ComfyUI 中加载工作流。
+
+Download JSON Workflow
+
+下载下面的图片及音频作为输入:
+
+![input](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_s2v/input.jpg)
+
+下载输入音频
+
+### 2. 模型链接
+
+你可以在 [我们的仓库](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged) 中找到所有模型。
+
+**diffusion_models**
+- [wan2.2_s2v_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_s2v_14B_fp8_scaled.safetensors)
+- [wan2.2_s2v_14B_bf16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_s2v_14B_bf16.safetensors)
+
+**audio_encoders**
+- [wav2vec2_large_english_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/audio_encoders/wav2vec2_large_english_fp16.safetensors)
+
+**vae**
+- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors)
+
+**text_encoders**
+- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors)
+
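+如果你更习惯用脚本而不是点击上面的链接来下载模型,可以参考下面的示例。它假设已安装 `huggingface_hub` Python 包(并非 ComfyUI 的一部分),会按照下方的目录结构放置文件;请根据你的实际安装路径调整 `ComfyUI/models`。
+
+```python
+import shutil
+from pathlib import Path
+
+from huggingface_hub import hf_hub_download  # pip install huggingface_hub
+
+# ComfyUI/models 下的目标路径 -> (Hugging Face 仓库, 仓库内文件路径)
+MODELS = {
+    "diffusion_models/wan2.2_s2v_14B_fp8_scaled.safetensors":
+        ("Comfy-Org/Wan_2.2_ComfyUI_Repackaged", "split_files/diffusion_models/wan2.2_s2v_14B_fp8_scaled.safetensors"),
+    "audio_encoders/wav2vec2_large_english_fp16.safetensors":
+        ("Comfy-Org/Wan_2.2_ComfyUI_Repackaged", "split_files/audio_encoders/wav2vec2_large_english_fp16.safetensors"),
+    "vae/wan_2.1_vae.safetensors":
+        ("Comfy-Org/Wan_2.2_ComfyUI_Repackaged", "split_files/vae/wan_2.1_vae.safetensors"),
+    "text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors":
+        ("Comfy-Org/Wan_2.1_ComfyUI_repackaged", "split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors"),
+}
+
+models_dir = Path("ComfyUI/models")  # 按你的 ComfyUI 安装路径调整
+for target, (repo_id, filename) in MODELS.items():
+    cached = hf_hub_download(repo_id=repo_id, filename=filename)  # 下载到 Hugging Face 缓存
+    dest = models_dir / target
+    dest.parent.mkdir(parents=True, exist_ok=True)  # 如 audio_encoders/ 不存在则创建
+    shutil.copy(cached, dest)
+    print("placed", dest)
+```
+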
+```
+ComfyUI/
+├───📂 models/
+│   ├───📂 diffusion_models/
+│   │   ├─── wan2.2_s2v_14B_fp8_scaled.safetensors
+│   │   └─── wan2.2_s2v_14B_bf16.safetensors
+│   ├───📂 text_encoders/
+│   │   └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors
+│   ├───📂 audio_encoders/          # 如果这个文件夹不存在请手动创建
+│   │   └─── wav2vec2_large_english_fp16.safetensors
+│   └───📂 vae/
+│       └─── wan_2.1_vae.safetensors
+```
+
+### 3. 工作流说明
+
+![工作流说明](/images/tutorial/video/wan/wan_2.2_14b_s2v.jpg)
+
+#### 3.1 关于 Lightning LoRA
+
+本模板使用的 Lightning LoRA(`wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors`)可以把采样从 20 步降到 4 步,大幅减少生成时间。它并非专门为 Wan2.2 S2V 训练,因此会带来一定的动态和画质损失;如果你更看重画质,可以移除它并使用下文的 20 步采样设置。
+
+#### 3.2 关于 fp8_scaled 和 bf16 模型
+
+你可以在 [这里](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/diffusion_models) 找到两种模型:
+
+- [wan2.2_s2v_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_s2v_14B_fp8_scaled.safetensors)
+- [wan2.2_s2v_14B_bf16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_s2v_14B_bf16.safetensors)
+
+本模板使用 `wan2.2_s2v_14B_fp8_scaled.safetensors`,它需要的显存更少;你也可以尝试 `wan2.2_s2v_14B_bf16.safetensors` 来减少质量损失。
+
+#### 3.3 逐步操作说明
+
+**步骤 1:加载模型**
+1. **Load Diffusion Model**:加载 `wan2.2_s2v_14B_fp8_scaled.safetensors` 或 `wan2.2_s2v_14B_bf16.safetensors`
+   - 提供的工作流使用 `wan2.2_s2v_14B_fp8_scaled.safetensors`,它需要的显存更少
+   - 你也可以尝试 `wan2.2_s2v_14B_bf16.safetensors` 来减少质量损失
+2. **Load CLIP**:加载 `umt5_xxl_fp8_e4m3fn_scaled.safetensors`
+3. **Load VAE**:加载 `wan_2.1_vae.safetensors`
+4. **AudioEncoderLoader**:加载 `wav2vec2_large_english_fp16.safetensors`
+5. **LoraLoaderModelOnly**:加载 `wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors`(Lightning LoRA)
+   - 我们测试了所有 Wan2.2 Lightning LoRA。由于它们并非专门为 Wan2.2 S2V 训练,很多键值不匹配,但因为能大幅减少生成时间,我们仍加入了它,后续会继续优化这个模板
+   - 使用它会造成较大的动态和质量损失
+   - 如果你觉得输出质量太差,可以尝试原始的 20 步工作流
+6. **LoadAudio**:上传我们提供的音频文件,或者你自己的音频
+7. **Load Image**:上传参考图片
+8. **Batch sizes**:根据你添加的 Video S2V Extend 子图节点数量设置
+   - 每个 Video S2V Extend 子图会为最终输出添加 77 帧
+   - 例如:如果添加了 2 个 Video S2V Extend 子图,批处理大小应设为 3,即总采样次数
+   - **Chunk Length**:保持默认值 77
+9. **采样器设置**:根据是否使用 Lightning LoRA 选择不同设置
+   - 使用 4 步 Lightning LoRA:steps: 4,cfg: 1.0
+   - 不使用 4 步 Lightning LoRA:steps: 20,cfg: 6.0
+10. **尺寸设置**:设置输出视频的尺寸
+11. **Video S2V Extend**:视频扩展子图节点。由于默认每次采样的帧数为 77,而这是一个 16fps 的模型,所以每个扩展会生成 77 / 16 = 4.8125 秒的视频
+    - 你需要简单计算,使视频扩展子图节点的数量与输入音频长度匹配(可参考列表后的计算示例)。例如:输入音频为 14s,需要的总帧数为 14x16=224,每个视频扩展为 77 帧,所以 224/77 = 2.9,向上取整为 3 个视频扩展子图节点
+12. 使用 Ctrl-Enter 或点击运行按钮来运行工作流
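+
+第 8 步和第 11 步中的计算也可以用脚本完成。下面的示例只是按照上文的规则(每次采样 77 帧、16fps、批处理大小 = 扩展节点数 + 1)进行估算,函数名仅作示意,并非 ComfyUI 或工作流的一部分。
+
+```python
+import math
+
+FPS = 16            # Wan2.2 S2V 以 16fps 输出视频
+CHUNK_FRAMES = 77   # 默认每次采样生成的帧数(Chunk Length)
+
+def plan_s2v_extends(audio_seconds: float) -> dict:
+    """根据音频长度估算 Video S2V Extend 节点数量和批处理大小。"""
+    total_frames = math.ceil(audio_seconds * FPS)          # 第 11 步:14 s -> 224 帧
+    extend_nodes = math.ceil(total_frames / CHUNK_FRAMES)  # 第 11 步:224 / 77 = 2.9 -> 3 个扩展子图
+    batch_size = extend_nodes + 1                          # 第 8 步:批处理大小 = 扩展子图数 + 1
+    output_seconds = batch_size * CHUNK_FRAMES / FPS       # 假设每次采样都贡献 77 帧
+    return {"extend_nodes": extend_nodes, "batch_size": batch_size, "output_seconds": output_seconds}
+
+print(plan_s2v_extends(14))
+# {'extend_nodes': 3, 'batch_size': 4, 'output_seconds': 19.25}
+```
+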