Commit 2a37c91

Update docs (#355)

1 parent 9a5a7e8
7 files changed: +312 −45 lines changed

docs.json

Lines changed: 2 additions & 0 deletions
@@ -151,6 +151,7 @@
       "group": "Wan Video",
       "pages": [
         "tutorials/video/wan/wan2_2",
+        "tutorials/video/wan/wan2-2-fun-inp",
         {
           "group": "Wan2.1",
           "pages": [
@@ -698,6 +699,7 @@
       "group": "万相视频",
       "pages": [
         "zh-CN/tutorials/video/wan/wan2_2",
+        "zh-CN/tutorials/video/wan/wan2-2-fun-inp",
         {
           "group": "Wan2.1",
           "pages": [
[Two binary image files changed: 89.4 KB and 951 KB (not shown)]

tutorials/image/qwen/qwen-image.mdx

Lines changed: 43 additions & 25 deletions
@@ -22,17 +22,21 @@ import UpdateReminder from '/snippets/tutorials/update-reminder.mdx'
 
 <UpdateReminder />
 
-**VRAM usage reference**
 
-Tested with **RTX 4090D 24GB**
 
-Model Version: Qwen-Image_fp8
-- VRAM: 86%
-- Generation time: 94s for the first time, 71s for the second time
+There are three different models used in the workflow attached to this document:
+1. Qwen-Image original model fp8_e4m3fn
+2. 8-step accelerated version: Qwen-Image original model fp8_e4m3fn with the lightx2v 8-step LoRA
+3. Distilled version: Qwen-Image distilled model fp8_e4m3fn
 
-**Model Version: Qwen-Image_bf16**
-- VRAM: 96%
-- Generation time: 295s for the first time, 131s for the second time
+**VRAM Usage Reference**
+GPU: RTX 4090D 24GB
+
+| Model Used                           | VRAM Usage | First Generation | Second Generation |
+| ------------------------------------ | ---------- | ---------------- | ----------------- |
+| fp8_e4m3fn                           | 86%        | ≈ 94s            | ≈ 71s             |
+| fp8_e4m3fn with lightx2v 8-step LoRA | 86%        | ≈ 55s            | ≈ 34s             |
+| Distilled fp8_e4m3fn                 | 86%        | ≈ 69s            | ≈ 36s             |
 
 
 ### 1. Workflow File
@@ -59,23 +63,27 @@ Distilled version
 
 All models are available at [Huggingface](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main) and [Modelscope](https://modelscope.cn/models/Comfy-Org/Qwen-Image_ComfyUI/files)
 
-**Diffusion Model**
+**Diffusion model**
+
+- [qwen_image_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors)
 
-[qwen_image_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors)
+Qwen_image_distill
 
-The following models are unofficial distilled versions that require only 15 steps.
-[Distilled Versions](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/non_official/diffusion_models)
-- [qwen_image_distill_full_bf16.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/non_official/diffusion_models/qwen_image_distill_full_bf16.safetensors) 40.9 GB
-- [qwen_image_distill_full_fp8.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/non_official/diffusion_models/qwen_image_distill_full_fp8_e4m3fn.safetensors) 20.4 GB
+- [qwen_image_distill_full_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/non_official/diffusion_models/qwen_image_distill_full_fp8_e4m3fn.safetensors)
+- [qwen_image_distill_full_bf16.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/non_official/diffusion_models/qwen_image_distill_full_bf16.safetensors)
 
 <Note>
-- The original author of the distilled version recommends using 15 steps with cfg 1.0.
-- According to tests, this distilled version also performs well at 10 steps with cfg 1.0. You can choose euler or res_multistep according to your desired image type.
+- The original author of the distilled version recommends using 15 steps with cfg 1.0.
+- According to tests, this distilled version also performs well at 10 steps with cfg 1.0. You can choose either euler or res_multistep based on the type of image you want.
 </Note>
 
-**Text Encoder**
+**LoRA**
+
+- [Qwen-Image-Lightning-8steps-V1.0.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.0.safetensors)
+
+**Text encoder**
 
-[qwen_2.5_vl_7b_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors)
+- [qwen_2.5_vl_7b_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors)
 
 **VAE**
 
@@ -87,19 +95,29 @@ The following models are unofficial distilled versions that require only 15 step
 📂 ComfyUI/
 ├── 📂 models/
 │   ├── 📂 diffusion_models/
-│   │   └── qwen_image_fp8_e4m3fn.safetensors
+│   │   ├── qwen_image_fp8_e4m3fn.safetensors
+│   │   └── qwen_image_distill_full_fp8_e4m3fn.safetensors ## distilled version
+│   ├── 📂 loras/
+│   │   └── Qwen-Image-Lightning-8steps-V1.0.safetensors ## 8-step acceleration LoRA
 │   ├── 📂 vae/
 │   │   └── qwen_image_vae.safetensors
 │   └── 📂 text_encoders/
 │       └── qwen_2.5_vl_7b_fp8_scaled.safetensors
 ```
+
 ### 3. Complete the Workflow Step by Step
 
 ![Step Guide](/images/tutorial/image/qwen/image_qwen_image-guide.jpg)
 
-1. Load `qwen_image_fp8_e4m3fn.safetensors` in the `Load Diffusion Model` node
-2. Load `qwen_2.5_vl_7b_fp8_scaled.safetensors` in the `Load CLIP` node
-3. Load `qwen_image_vae.safetensors` in the `Load VAE` node
-4. Set image dimensions in the `EmptySD3LatentImage` node
-5. Enter your prompts in the `CLIP Text Encoder` (supports English, Chinese, Korean, Japanese, Italian, etc.)
-6. Click Queue or press `Ctrl+Enter` to run
+1. Make sure the `Load Diffusion Model` node has loaded `qwen_image_fp8_e4m3fn.safetensors`
+2. Make sure the `Load CLIP` node has loaded `qwen_2.5_vl_7b_fp8_scaled.safetensors`
+3. Make sure the `Load VAE` node has loaded `qwen_image_vae.safetensors`
+4. Make sure the image dimensions are set in the `EmptySD3LatentImage` node
+5. Set your prompt in the `CLIP Text Encoder` node; it currently supports at least English, Chinese, Korean, Japanese, Italian, etc.
+6. To enable the lightx2v 8-step acceleration LoRA, select the node and press `Ctrl + B` to enable it, then adjust the KSampler settings as described in step 8
+7. Click the `Queue` button, or use the shortcut `Ctrl(cmd) + Enter` to run the workflow
+8. Adjust the KSampler parameters to match the model version and workflow in use
+
+<Note>
+The distilled model and the lightx2v 8-step acceleration LoRA do not appear to be compatible for simultaneous use. You can experiment with different combinations to verify whether they can be used together.
+</Note>
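As a cheat sheet for steps 6 and 8 above, the sampler settings implied by this page can be summarized as follows. Only the distilled-model numbers are stated explicitly; the 8-step LoRA values are assumptions inferred from the LoRA's name and from how lightning-style LoRAs are usually run, so treat them as starting points, not the template's authoritative defaults.

```python
# Suggested KSampler starting points per model variant (sketch, not authoritative).
KSAMPLER_PRESETS = {
    "qwen_image_fp8_e4m3fn": None,  # base model: keep the template workflow's defaults
    "qwen_image_fp8_e4m3fn + Lightning 8-step LoRA": {
        "steps": 8,   # inferred from the LoRA's name (assumption)
        "cfg": 1.0,   # assumption: lightning-style LoRAs are typically run at cfg 1.0
    },
    "qwen_image_distill_full_fp8_e4m3fn": {
        "steps": 15,  # author's recommendation; 10 steps also tested well
        "cfg": 1.0,
        "sampler_name": "euler",  # or "res_multistep", per the Note above
    },
}
```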
tutorials/video/wan/wan2-2-fun-inp.mdx

Lines changed: 114 additions & 0 deletions
@@ -0,0 +1,114 @@
+---
+title: "ComfyUI Wan2.2 Fun Inp Start-End Frame Video Generation Example"
+description: "This article introduces how to use ComfyUI to complete the Wan2.2 Fun Inp start-end frame video generation example"
+sidebarTitle: "Wan2.2 Fun Inp"
+---
+
+import UpdateReminder from '/snippets/tutorials/update-reminder.mdx'
+
+**Wan2.2-Fun-Inp** is a start-end frame controlled video generation model launched by the Alibaba PAI team. It supports inputting **start and end frame images** to generate an intermediate transition video, giving creators greater control. The model is released under the **Apache 2.0 license** and supports commercial use.
+
+**Key Features**:
+- **Start-End Frame Control**: Supports inputting start and end frame images to generate intermediate transition videos, enhancing video coherence and creative freedom
+- **High-Quality Video Generation**: Based on the Wan2.2 architecture, outputs film-level quality videos
+- **Multi-Resolution Support**: Supports generating videos at 512×512, 768×768, 1024×1024, and other resolutions to suit different scenarios
+
+**Model Version**:
+- **14B High-Performance Version**: Model size exceeds 32GB, with better results but higher VRAM requirements
+
+Below are the relevant model weights and code repositories:
+
+- [🤗Wan2.2-Fun-Inp-14B](https://huggingface.co/alibaba-pai/Wan2.2-Fun-A14B-InP)
+- Code repository: [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun)
+
+<UpdateReminder/>
+
+## Wan2.2 Fun Inp Start-End Frame Video Generation Workflow Example
+
+This workflow provides two versions:
+1. A version using the [Wan2.2-Lightning](https://huggingface.co/lightx2v/Wan2.2-Lightning) 4-step LoRA from lightx2v for accelerated video generation
+2. An fp8_scaled version without the acceleration LoRA
+
+Below are test results on an RTX 4090D GPU with 24GB of VRAM
+
+| Model Type               | Resolution | VRAM Usage | First Generation Time | Second Generation Time |
+| ------------------------ | ---------- | ---------- | --------------------- | ---------------------- |
+| fp8_scaled               | 640×640    | 83%        | ≈ 524s                | ≈ 520s                 |
+| fp8_scaled + 4-step LoRA | 640×640    | 89%        | ≈ 138s                | ≈ 79s                  |
+
+Since the LoRA acceleration is significant, the provided workflow enables the accelerated LoRA version by default. If you want to enable the other version, select it and use **Ctrl+B** to activate it.
+
+### 1. Download Workflow File
+
+Please update your ComfyUI to the latest version, and find "**Wan2.2 Fun Inp**" under the menu `Workflow` -> `Browse Templates` -> `Video` to load the workflow.
+
+Or, after updating ComfyUI to the latest version, download the workflow below and drag it into ComfyUI to load it.
+
+<video
+  controls
+  className="w-full aspect-video"
+  src="https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_fun_inp/wan2.2_14B_fun_inp.mp4"
+></video>
+
+<a className="prose" target='_blank' href="https://raw.githubusercontent.com/Comfy-Org/workflow_templates/refs/heads/main/templates/video_wan2_2_14B_fun_inpaint.json" style={{ display: 'inline-block', backgroundColor: '#0078D6', color: '#ffffff', padding: '10px 20px', borderRadius: '8px', borderColor: "transparent", textDecoration: 'none', fontWeight: 'bold'}}>
+  <p className="prose" style={{ margin: 0, fontSize: "0.8rem" }}>Download JSON Workflow</p>
+</a>
+
+Use the following materials as the start and end frames
+
+![Wan2.2 Fun Control ComfyUI Workflow Start Frame Material](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_fun_inp/start_image.png)
+![Wan2.2 Fun Control ComfyUI Workflow End Frame Material](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_fun_inp/end_image.png)
+
+### 2. Manually Download Models
+
+**Diffusion Model**
+- [wan2.2_fun_inpaint_high_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_fun_inpaint_high_noise_14B_fp8_scaled.safetensors)
+- [wan2.2_fun_inpaint_low_noise_14B_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_fun_inpaint_low_noise_14B_fp8_scaled.safetensors)
+
+**Lightning LoRA (optional, for acceleration)**
+- [wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors)
+- [wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors)
+
+**VAE**
+- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors)
+
+**Text Encoder**
+- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors)
+
+```
+ComfyUI/
+├───📂 models/
+│   ├───📂 diffusion_models/
+│   │   ├─── wan2.2_fun_inpaint_high_noise_14B_fp8_scaled.safetensors
+│   │   └─── wan2.2_fun_inpaint_low_noise_14B_fp8_scaled.safetensors
+│   ├───📂 loras/
+│   │   ├─── wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors
│   │   └─── wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors
+│   ├───📂 text_encoders/
+│   │   └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors
+│   └───📂 vae/
+│       └── wan_2.1_vae.safetensors
+```
+
+### 3. Step-by-Step Workflow Guide
+
+![Workflow Step Image](/images/tutorial/video/wan/wan2_2/wan_2.2_14b_fun_inp.jpg)
+
+<Note>
+This workflow uses LoRA. Please make sure the corresponding diffusion model and LoRA are matched.
+</Note>
+
+1. **High noise** model and **LoRA** loading
+   - Ensure the `Load Diffusion Model` node loads the `wan2.2_fun_inpaint_high_noise_14B_fp8_scaled.safetensors` model
+   - Ensure the `LoraLoaderModelOnly` node loads `wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors`
+2. **Low noise** model and **LoRA** loading
+   - Ensure the `Load Diffusion Model` node loads the `wan2.2_fun_inpaint_low_noise_14B_fp8_scaled.safetensors` model
+   - Ensure the `LoraLoaderModelOnly` node loads `wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors`
+3. Ensure the `Load CLIP` node loads the `umt5_xxl_fp8_e4m3fn_scaled.safetensors` model
+4. Ensure the `Load VAE` node loads the `wan_2.1_vae.safetensors` model
+5. Upload the start and end frame images as materials
+6. Enter your prompt in the Prompt group
+7. Adjust the size and video length in the `WanFunInpaintToVideo` node
+   - Adjust the `width` and `height` parameters. The default is `640`; we set a smaller size, but you can modify it as needed.
+   - Adjust the `length`, which is the total number of frames. The current workflow fps is 16, so for a 5-second video set it to 5 × 16 = 80 (see the sketch after this section).
+8. Click the `Run` button, or use the shortcut `Ctrl(cmd) + Enter` to execute video generation
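As referenced in step 7 above, the `length` input counts frames, not seconds. A trivial sketch of the conversion (fps 16 is this workflow's setting per the step above; other workflows may use a different fps):

```python
def frames_for(seconds: float, fps: int = 16) -> int:
    """Total frames to enter in WanFunInpaintToVideo's `length` input."""
    return int(seconds * fps)

print(frames_for(5))  # 80, matching the 5-second example in step 7
```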

zh-CN/tutorials/image/qwen/qwen-image.mdx

Lines changed: 39 additions & 20 deletions
@@ -22,16 +22,20 @@ import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx'
 
 <UpdateReminder />
 
-**VRAM usage reference**
-Tested with **RTX 4090D 24GB**
 
-**Model version: Qwen-Image_fp8**
-- VRAM: 86%
-- Generation time: 94s first run, 71s second run
+There are three different models used in the workflow attached to this document:
+1. Qwen-Image original model fp8_e4m3fn
+2. 8-step accelerated version: Qwen-Image original model fp8_e4m3fn with the lightx2v 8-step LoRA
+3. Distilled version: Qwen-Image distilled model fp8_e4m3fn
+
+**VRAM usage reference**
+GPU: RTX 4090D 24GB
 
-**Model version: Qwen-Image_bf16**
-- VRAM: 96%
-- Generation time: 295s first run, 131s second run
+| Model Used                           | VRAM Usage | First Generation | Second Generation |
+| ------------------------------------ | ---------- | ---------------- | ----------------- |
+| fp8_e4m3fn                           | 86%        | ≈ 94s            | ≈ 71s             |
+| fp8_e4m3fn with lightx2v 8-step LoRA | 86%        | ≈ 55s            | ≈ 34s             |
+| Distilled fp8_e4m3fn                 | 86%        | ≈ 69s            | ≈ 36s             |
 
 ### 1. Workflow File
 
@@ -48,47 +52,56 @@ import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx'
 </a>
 ### 2. Model Download
 
-**Versions provided by ComfyUI**
+**Versions you can find in the ComfyOrg repository**
 - Qwen-Image_bf16 (40.9 GB)
 - Qwen-Image_fp8 (20.4 GB)
 - Distilled version (unofficial, requires only 15 steps)
 
 
 All models are available on [Huggingface](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main) or [Modelscope](https://modelscope.cn/models/Comfy-Org/Qwen-Image_ComfyUI/files)
 
-**Diffusion Model**
+**Diffusion model**
 
-[qwen_image_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors)
+- [qwen_image_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors)
 
-The models below are unofficial distilled versions that require only 15 steps
-[Distilled versions](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/non_official/diffusion_models)
-- [qwen_image_distill_full_bf16.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/non_official/diffusion_models/qwen_image_distill_full_bf16.safetensors) 40.9 GB
-- [qwen_image_distill_full_fp8.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/non_official/diffusion_models/qwen_image_distill_full_fp8_e4m3fn.safetensors) 20.4 GB
+Qwen_image_distill
+
+- [qwen_image_distill_full_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/non_official/diffusion_models/qwen_image_distill_full_fp8_e4m3fn.safetensors)
+- [qwen_image_distill_full_bf16.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/non_official/diffusion_models/qwen_image_distill_full_bf16.safetensors)
 
 <Note>
 - The original author of the distilled version recommends 15 steps at cfg 1.0
 - In testing, this distilled version also performs well at 10 steps with cfg 1.0; choose euler or res_multistep depending on the type of image you want
 </Note>
 
-**Text Encoder**
+**LoRA**
+
+- [Qwen-Image-Lightning-8steps-V1.0.safetensors](https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.0.safetensors)
 
-[qwen_2.5_vl_7b_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors)
+**Text encoder**
+
+- [qwen_2.5_vl_7b_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors)
 
 **VAE**
 
-[qwen_image_vae.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors)
+- [qwen_image_vae.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors)
 
+Model storage location
 
 ```
 📂 ComfyUI/
 ├── 📂 models/
 │   ├── 📂 diffusion_models/
-│   │   └── qwen_image_fp8_e4m3fn.safetensors
+│   │   ├── qwen_image_fp8_e4m3fn.safetensors
+│   │   └── qwen_image_distill_full_fp8_e4m3fn.safetensors ## distilled version
+│   ├── 📂 loras/
+│   │   └── Qwen-Image-Lightning-8steps-V1.0.safetensors ## 8-step acceleration LoRA
 │   ├── 📂 vae/
 │   │   └── qwen_image_vae.safetensors
 │   └── 📂 text_encoders/
 │       └── qwen_2.5_vl_7b_fp8_scaled.safetensors
 ```
+
 ### 3. Complete the Workflow Step by Step
 
 ![Step guide](/images/tutorial/image/qwen/image_qwen_image-guide.jpg)
@@ -98,4 +111,10 @@ import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx'
 3. Make sure the `Load VAE` node has loaded `qwen_image_vae.safetensors`
 4. Make sure the image dimensions are set in the `EmptySD3LatentImage` node
 5. Set your prompt in the `CLIP Text Encoder` node; testing shows it currently supports at least English, Chinese, Korean, Japanese, Italian, etc.
-6. Click the `Queue` button, or use the shortcut `Ctrl(cmd) + Enter` to run the workflow
+6. To enable the lightx2v 8-step acceleration LoRA, select the node and press `Ctrl + B` to enable it, then adjust the KSampler settings as described in step 8
+7. Click the `Queue` button, or use the shortcut `Ctrl(cmd) + Enter` to run the workflow
+8. Adjust the KSampler parameters to match the model version and workflow in use
+
+<Note>
+The distilled model and the lightx2v 8-step acceleration LoRA do not appear to be usable at the same time; you can test specific parameter combinations to verify whether combined use works
+</Note>
