diff --git a/tutorials/3d/hunyuan3D-2.mdx b/tutorials/3d/hunyuan3D-2.mdx index d87a366d1..38abbb925 100644 --- a/tutorials/3d/hunyuan3D-2.mdx +++ b/tutorials/3d/hunyuan3D-2.mdx @@ -47,6 +47,10 @@ In the Hunyuan3D-2mv workflow, we'll use multi-view images to generate a 3D mode + +

Run on Comfy Cloud

+
+ ### 1. Workflow Please download the images below and drag into ComfyUI to load the workflow. @@ -90,6 +94,10 @@ If you need to add more views, make sure to load other view images in the `Hunyu In the Hunyuan3D-2mv-turbo workflow, we'll use the Hunyuan3D-2mv-turbo model to generate 3D models. This model is a step distillation version of Hunyuan3D-2mv, allowing for faster 3D model generation. In this version of the workflow, we set `cfg` to 1.0 and add a `flux guidance` node to control the `distilled cfg` generation. + +

Run on Comfy Cloud

+
+ ### 1. Workflow Please download the images below and drag into ComfyUI to load the workflow. @@ -128,6 +136,10 @@ ComfyUI/ In the Hunyuan3D-2 workflow, we'll use the Hunyuan3D-2 model to generate 3D models. This model is not a multi-view model. In this workflow, we use the `Hunyuan3Dv2Conditioning` node instead of the `Hunyuan3Dv2ConditioningMultiView` node. + +

Run on Comfy Cloud

+
+ ### 1. Workflow Please download the image below and drag it into ComfyUI to load the workflow. diff --git a/tutorials/flux/flux-1-controlnet.mdx b/tutorials/flux/flux-1-controlnet.mdx index 33ae609d0..5d234636f 100644 --- a/tutorials/flux/flux-1-controlnet.mdx +++ b/tutorials/flux/flux-1-controlnet.mdx @@ -40,6 +40,10 @@ For image preprocessors, you can use the following custom nodes to complete imag ## FLUX.1-Canny-dev Complete Version Workflow + +

Run on Comfy Cloud

+
+ ### 1. Workflow and Asset Please download the workflow image below and drag it into ComfyUI to load the workflow @@ -102,6 +106,10 @@ Or use the following custom nodes to complete image preprocessing: ## FLUX.1-Depth-dev-lora Workflow + +

Run on Comfy Cloud

+
+ The LoRA version workflow builds on the complete version by adding the LoRA model. Compared to the [complete version of the Flux workflow](/tutorials/flux/flux-1-text-to-image), it adds nodes for loading and using the corresponding LoRA model. ### 1. Workflow and Asset diff --git a/tutorials/flux/flux-1-fill-dev.mdx b/tutorials/flux/flux-1-fill-dev.mdx index 5a72ee13d..48c005526 100644 --- a/tutorials/flux/flux-1-fill-dev.mdx +++ b/tutorials/flux/flux-1-fill-dev.mdx @@ -53,6 +53,14 @@ ComfyUI/ ### 1. Inpainting workflow and asset + +

Download Workflow Image

+
+ + +

Run on Comfy Cloud

+
+ Please download the image below and drag it into ComfyUI to load the corresponding workflow ![ComfyUI Flux.1 inpaint](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/inpaint/flux_fill_inpaint.png) diff --git a/tutorials/flux/flux-1-kontext-dev.mdx b/tutorials/flux/flux-1-kontext-dev.mdx index c423af05e..b121cad9e 100644 --- a/tutorials/flux/flux-1-kontext-dev.mdx +++ b/tutorials/flux/flux-1-kontext-dev.mdx @@ -72,6 +72,10 @@ Model save location ## Flux.1 Kontext Dev Workflow + +

Run on Comfy Cloud

+
+ This workflow uses the `Load Image(from output)` node to load the image to be edited, making it more convenient for you to access the edited image for multiple rounds of editing. ### 1. Workflow and Input Image Download diff --git a/tutorials/flux/flux-1-text-to-image.mdx b/tutorials/flux/flux-1-text-to-image.mdx index 0cd28f2dc..25706e6c0 100644 --- a/tutorials/flux/flux-1-text-to-image.mdx +++ b/tutorials/flux/flux-1-text-to-image.mdx @@ -46,6 +46,10 @@ If you can't download models from [black-forest-labs/FLUX.1-dev](https://hugging Please download the image below and drag it into ComfyUI to load the workflow. ![Flux Dev Original Version Workflow](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/text-to-image/flux_dev_t5fp16.png) + +

Run on Comfy Cloud

+
+ #### 2. Manual Model Installation @@ -96,6 +100,10 @@ Please download the image below and drag it into ComfyUI to load the workflow. ![Flux Schnell Version Workflow](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/text-to-image/flux_schnell_t5fp8.png) + +

Run on Comfy Cloud

+
+ #### 2. Manual Models Installation @@ -146,6 +154,10 @@ Please download the image below and drag it into ComfyUI to load the workflow. ![Flux Dev fp8 Checkpoint Version Workflow](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/text-to-image/flux_dev_fp8.png) + +

Run on Comfy Cloud

+
+ Please download [flux1-dev-fp8.safetensors](https://huggingface.co/Comfy-Org/flux1-dev/resolve/main/flux1-dev-fp8.safetensors?download=true) and save it to the `ComfyUI/models/checkpoints/` directory. Ensure that the corresponding `Load Checkpoint` node loads `flux1-dev-fp8.safetensors`, and you can try to run the workflow. diff --git a/tutorials/flux/flux-1-uso.mdx b/tutorials/flux/flux-1-uso.mdx index 671fef13c..55cb0dde6 100644 --- a/tutorials/flux/flux-1-uso.mdx +++ b/tutorials/flux/flux-1-uso.mdx @@ -33,11 +33,15 @@ Download the image below and drag it into ComfyUI to load the corresponding work className="prose" target='_blank' href="https://raw.githubusercontent.com/Comfy-Org/workflow_templates/refs/heads/main/templates/flux1_dev_uso_reference_image_gen.json" -style={{ display: 'inline-block', backgroundColor: '#0078D6', color: '#ffffff', padding: '10px 20px', borderRadius: '8px', borderColor: "transparent", textDecoration: 'none', fontWeight: 'bold'}} +style={{ display: 'inline-block', backgroundColor: '#0078D6', color: '#ffffff', padding: '10px 20px', borderRadius: '8px', borderColor: "transparent", textDecoration: 'none', fontWeight: 'bold', marginRight: '10px'}} >

Download JSON Workflow

+ +

Run on Comfy Cloud

+
+ Use the image below as an input image. ![input](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/flux/bytedance-uso/input.png) diff --git a/tutorials/flux/flux-2-dev.mdx b/tutorials/flux/flux-2-dev.mdx index 876066365..be66852b6 100644 --- a/tutorials/flux/flux-2-dev.mdx +++ b/tutorials/flux/flux-2-dev.mdx @@ -33,7 +33,7 @@ We are using quantized weights in this workflow. The original FLUX.2 repository -

Run on ComfyUI Cloud

+

Run on Comfy Cloud

## Model links diff --git a/tutorials/flux/flux1-krea-dev.mdx b/tutorials/flux/flux1-krea-dev.mdx index 5e94860da..6a434e27b 100644 --- a/tutorials/flux/flux1-krea-dev.mdx +++ b/tutorials/flux/flux1-krea-dev.mdx @@ -29,10 +29,14 @@ This model is released under the [flux-1-dev-non-commercial-license](https://hug Download the image or JSON below and drag it into ComfyUI to load the corresponding workflow ![Flux Krea Dev Workflow](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/krea/flux1_krea_dev.png) - +

Download JSON Workflow

+ +

Run on Comfy Cloud

+
+ #### 2. Manual Model Installation Please download the following model files: diff --git a/tutorials/image/hidream/hidream-e1.mdx b/tutorials/image/hidream/hidream-e1.mdx index ac159e323..7102c23b1 100644 --- a/tutorials/image/hidream/hidream-e1.mdx +++ b/tutorials/image/hidream/hidream-e1.mdx @@ -109,6 +109,10 @@ Follow these steps to run the workflow: ## HiDream E1 ComfyUI Native Workflow Example + +

Run on Comfy Cloud

+
+ E1 is a model released on April 28, 2025. This model only supports 768*768 resolution. diff --git a/tutorials/image/hidream/hidream-i1.mdx b/tutorials/image/hidream/hidream-i1.mdx index f1fb37416..36f431197 100644 --- a/tutorials/image/hidream/hidream-i1.mdx +++ b/tutorials/image/hidream/hidream-i1.mdx @@ -94,6 +94,10 @@ Model file save location ``` ### HiDream-I1 Full Version Workflow + +

Run on Comfy Cloud

+
+ #### 1. Model File Download Please select the appropriate version based on your hardware. Click the link and download the corresponding model file to save it to the `ComfyUI/models/diffusion_models/` folder. @@ -128,6 +132,10 @@ Complete the workflow execution step by step ### HiDream-I1 Dev Version Workflow + +

Run on Comfy Cloud

+
+ #### 1. Model File Download Please select the appropriate version based on your hardware, click the link and download the corresponding model file to save to the `ComfyUI/models/diffusion_models/` folder. @@ -160,6 +168,10 @@ Complete the workflow execution step by step ### HiDream-I1 Fast Version Workflow + +

Run on Comfy Cloud

+
+ #### 1. Model File Download Please select the appropriate version based on your hardware, click the link and download the corresponding model file to save to the `ComfyUI/models/diffusion_models/` folder. diff --git a/tutorials/image/omnigen/omnigen2.mdx b/tutorials/image/omnigen/omnigen2.mdx index 275918a02..433c0bf33 100644 --- a/tutorials/image/omnigen/omnigen2.mdx +++ b/tutorials/image/omnigen/omnigen2.mdx @@ -57,6 +57,10 @@ File save location: ### 1. Download Workflow File + +

Run on Comfy Cloud

+
+ ![Text-to-Image Workflow](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/image/omnigen2/image_omnigen2_t2i.png) ### 2. Complete Workflow Step by Step @@ -81,6 +85,10 @@ OmniGen2 has rich image editing capabilities and supports adding text to images ### 1. Download Workflow File + +

Run on Comfy Cloud

+
+ ![Text-to-Image Workflow](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/image/omnigen2/image_omnigen2_image_edit.png) Download the image below, which we will use as the input image. diff --git a/tutorials/image/qwen/qwen-image-2512.mdx b/tutorials/image/qwen/qwen-image-2512.mdx index 9d6db30e7..86da13449 100644 --- a/tutorials/image/qwen/qwen-image-2512.mdx +++ b/tutorials/image/qwen/qwen-image-2512.mdx @@ -36,6 +36,10 @@ import UpdateReminder from '/snippets/tutorials/update-reminder.mdx' + + Run on Comfy Cloud + + ### 1. Workflow file After updating ComfyUI, you can find the workflow file from the templates, or drag the workflow below into ComfyUI to load it. diff --git a/tutorials/image/qwen/qwen-image.mdx b/tutorials/image/qwen/qwen-image.mdx index f24640640..636fe2ba4 100644 --- a/tutorials/image/qwen/qwen-image.mdx +++ b/tutorials/image/qwen/qwen-image.mdx @@ -48,6 +48,10 @@ Currently Qwen-Image has multiple ControlNet support options available: + + Run on Comfy Cloud + + There are three different models used in the workflow attached to this document: 1. Qwen-Image original model fp8_e4m3fn 2. 8-step accelerated version: Qwen-Image original model fp8_e4m3fn with lightx2v 8-step LoRA @@ -150,6 +154,9 @@ Qwen_image_distill This is a ControlNet model, so you can use it as normal ControlNet. + + Run on Comfy Cloud + ### 1. Workflow and Input Images @@ -202,6 +209,10 @@ ComfyUI/ ## Qwen Image ControlNet DiffSynth-ControlNets Model Patches Workflow + + Run on Comfy Cloud + + This model is actually not a ControlNet, but a Model patch that supports three different control modes: canny, depth, and inpaint. Original model address: [DiffSynth-Studio/Qwen-Image ControlNet](https://www.modelscope.cn/collections/Qwen-Image-ControlNet-6157b44e89d444) @@ -269,6 +280,10 @@ For the Inpaint model, it requires using the [Mask Editor](/interface/maskeditor ## Qwen Image Union ControlNet LoRA Workflow + + Run on Comfy Cloud + + Original model address: [DiffSynth-Studio/Qwen-Image-In-Context-Control-Union](https://www.modelscope.cn/models/DiffSynth-Studio/Qwen-Image-In-Context-Control-Union/) Comfy Org rehost address: [qwen_image_union_diffsynth_lora.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image-DiffSynth-ControlNets/blob/main/split_files/loras/qwen_image_union_diffsynth_lora.safetensors): Image structure control LoRA supporting canny, depth, pose, lineart, softedge, normal, openpose diff --git a/tutorials/video/ltxv.mdx b/tutorials/video/ltxv.mdx index f7de7a746..479a025b5 100644 --- a/tutorials/video/ltxv.mdx +++ b/tutorials/video/ltxv.mdx @@ -22,6 +22,10 @@ Drag the video directly into ComfyUI to run the workflow. Allows you to control the video with a first [frame image](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/ltxv/i2v/girl1.png). + +

Run on Comfy Cloud

+
+ LTX-Video Image to Video diff --git a/tutorials/video/wan/wan2-2-animate.mdx b/tutorials/video/wan/wan2-2-animate.mdx index 66fc8a575..ac2abcddf 100644 --- a/tutorials/video/wan/wan2-2-animate.mdx +++ b/tutorials/video/wan/wan2-2-animate.mdx @@ -50,6 +50,10 @@ Download the following workflow file and drag it into ComfyUI to load the workfl

Download JSON Workflow

+ +

Run on Comfy Cloud

+
+ Download materials below as input: **Reference Image:** diff --git a/tutorials/video/wan/wan2-2-fun-inp.mdx b/tutorials/video/wan/wan2-2-fun-inp.mdx index 37545ec07..981863515 100644 --- a/tutorials/video/wan/wan2-2-fun-inp.mdx +++ b/tutorials/video/wan/wan2-2-fun-inp.mdx @@ -58,6 +58,10 @@ Or, after updating ComfyUI to the latest version, download the workflow below an

Download JSON Workflow

+ +

Run on Comfy Cloud

+
+ Use the following materials as the start and end frames ![Wan2.2 Fun Control ComfyUI Workflow Start Frame Material](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_fun_inp/start_image.png) diff --git a/tutorials/video/wan/wan2-2-s2v.mdx b/tutorials/video/wan/wan2-2-s2v.mdx index cdbc02c4e..36a587e73 100644 --- a/tutorials/video/wan/wan2-2-s2v.mdx +++ b/tutorials/video/wan/wan2-2-s2v.mdx @@ -37,6 +37,10 @@ Download the following workflow file and drag it into ComfyUI to load the workfl

Download JSON Workflow

+ +

Run on Comfy Cloud

+
+ Download the following image and audio as input: ![input](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_s2v/input.jpg) diff --git a/tutorials/video/wan/wan2_2.mdx b/tutorials/video/wan/wan2_2.mdx index 44ba494ae..62190cd74 100644 --- a/tutorials/video/wan/wan2_2.mdx +++ b/tutorials/video/wan/wan2_2.mdx @@ -93,6 +93,10 @@ Please update your ComfyUI to the latest version, and through the menu `Workflow

Download JSON Workflow File

+ +

Run on Comfy Cloud

+
+ ### 2. Manually Download Models **Diffusion Model** @@ -144,6 +148,10 @@ Or update your ComfyUI to the latest version, then download the following video

Download JSON Workflow File

+ +

Run on Comfy Cloud

+
+ ### 2. Manually Download Models **Diffusion Model** @@ -196,6 +204,11 @@ Or update your ComfyUI to the latest version, then download the following video

Download JSON Workflow File

+ + +

Run on Comfy Cloud

+
+ You can use the following image as input: ![Input Image](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/2.2/input.jpg) @@ -251,6 +264,10 @@ Download the video or the JSON workflow below and open it in ComfyUI.

Download JSON Workflow

+ +

Run on Comfy Cloud

+
+ Download the following images as input materials: ![Input Material](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/2.2/wan22_14B_flf2v_start_image.png) diff --git a/zh-CN/tutorials/3d/hunyuan3D-2.mdx b/zh-CN/tutorials/3d/hunyuan3D-2.mdx index d7ad12f65..e9b1378f4 100644 --- a/zh-CN/tutorials/3d/hunyuan3D-2.mdx +++ b/zh-CN/tutorials/3d/hunyuan3D-2.mdx @@ -48,6 +48,10 @@ Hunyuan3D-2mv 工作流中,我们将使用多视角的图片来生成3D模型 + +

在 Comfy Cloud 上运行

+
+ ### 1. 工作流 请下载下面的图片,并拖入 ComfyUI 以加载工作流, @@ -92,6 +96,10 @@ ComfyUI/ Hunyuan3D-2mv-turbo 工作流中,我们将使用 Hunyuan3D-2mv-turbo 模型来生成3D模型,这个模型是 Hunyuan3D-2mv 的分步蒸馏(Step Distillation)版本,可以更快地生成3D模型,在这个版本的工作流中我们设置 `cfg` 为 1.0 并添加 `flux guidance` 节点来控制 `distilled cfg` 的生成。 + +

在 Comfy Cloud 上运行

+
+ ### 1. 工作流 请下载下面的图片,并拖入 ComfyUI 以加载工作流, @@ -128,6 +136,10 @@ ComfyUI/ Hunyuan3D-2 工作流中,我们将使用 Hunyuan3D-2 模型来生成3D模型,这个模型不是一个多视角的模型,在这个工作流中,我们使用`Hunyuan3Dv2Conditioning` 节点替换掉 `Hunyuan3Dv2ConditioningMultiView` 节点。 + +

在 Comfy Cloud 上运行

+
+ ### 1. 工作流 请下载下面的图片,并拖入 ComfyUI 以加载工作流 diff --git a/zh-CN/tutorials/flux/flux-1-controlnet.mdx b/zh-CN/tutorials/flux/flux-1-controlnet.mdx index ea296ec6e..ce9c3b053 100644 --- a/zh-CN/tutorials/flux/flux-1-controlnet.mdx +++ b/zh-CN/tutorials/flux/flux-1-controlnet.mdx @@ -35,6 +35,10 @@ Metadata 中包含工作流 json 的图片可直接拖入 ComfyUI 或使用菜 ## FLUX.1-Canny-dev 完整版工作流 + +

在 Comfy Cloud 上运行

+
+ ### 1. 工作流及相关素材 请下载下面的工作流图片,并拖入 ComfyUI 以加载工作流 @@ -96,6 +100,10 @@ ComfyUI/ ## FLUX.1-Depth-dev-lora 工作流 + +

在 Comfy Cloud 上运行

+
+ LoRA 版本的工作流是在完整版本的基础上,添加了 LoRA 模型,相对于[完整版本的 Flux 工作流](/zh-CN/tutorials/flux/flux-1-text-to-image),增加了对应 LoRA 模型的加载使用节点。 ### 1. 工作流及相关素材 diff --git a/zh-CN/tutorials/flux/flux-1-fill-dev.mdx b/zh-CN/tutorials/flux/flux-1-fill-dev.mdx index c4a2fde4b..5b55c6cca 100644 --- a/zh-CN/tutorials/flux/flux-1-fill-dev.mdx +++ b/zh-CN/tutorials/flux/flux-1-fill-dev.mdx @@ -52,6 +52,14 @@ ComfyUI/ ### 1. Inpainting 工作流及相关素材 + +

下载工作流图片

+
+ + +

在 Comfy Cloud 上运行

+
+ 请下载下面的图片,并拖入 ComfyUI 以加载对应的工作流 ![ComfyUI Flux.1 inpaint](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/inpaint/flux_fill_inpaint.png) diff --git a/zh-CN/tutorials/flux/flux-1-kontext-dev.mdx b/zh-CN/tutorials/flux/flux-1-kontext-dev.mdx index 18b28d5b7..2677c4dc5 100644 --- a/zh-CN/tutorials/flux/flux-1-kontext-dev.mdx +++ b/zh-CN/tutorials/flux/flux-1-kontext-dev.mdx @@ -73,6 +73,10 @@ FLUX.1 Kontext 是 Black Forest Labs 推出的突破性多模态图像编辑模 ## Flux.1 Kontext Dev 工作流 + +

在 Comfy Cloud 上运行

+
+ 这个工作流使用了 `Load Image(from output)` 节点来加载需要编辑的图像,可以让你更方便地获取到编辑后的图像,从而进行多轮次编辑 ### 1. 工作流及输入图片下载 diff --git a/zh-CN/tutorials/flux/flux-1-text-to-image.mdx b/zh-CN/tutorials/flux/flux-1-text-to-image.mdx index 991d98c7a..bd2516445 100644 --- a/zh-CN/tutorials/flux/flux-1-text-to-image.mdx +++ b/zh-CN/tutorials/flux/flux-1-text-to-image.mdx @@ -46,6 +46,10 @@ Flux 以其卓越的画面质量和灵活性而闻名,能够生成高质量、 请下载下面的图片,并拖入 ComfyUI 中加载工作流。 ![Flux Dev 原始版本工作流](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/text-to-image/flux_dev_t5fp16.png) + +

在 Comfy Cloud 上运行

+
+ #### 2. 手动安装模型 @@ -97,6 +101,10 @@ ComfyUI/ ![Flux Schnell 版本工作流](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/text-to-image/flux_schnell_t5fp8.png) + +

在 Comfy Cloud 上运行

+
+ #### 2. 手动安装模型 @@ -146,6 +154,10 @@ fp8 版本是对 flux1 原版 fp16 版本的量化版本,在一定程度上这 ![Flux Dev fp8 Checkpoint 版本工作流](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/text-to-image/flux_dev_fp8.png) + +

在 Comfy Cloud 上运行

+
+ 请下载 [flux1-dev-fp8.safetensors](https://huggingface.co/Comfy-Org/flux1-dev/resolve/main/flux1-dev-fp8.safetensors?download=true)并保存至 `ComfyUI/models/Checkpoints/` 目录下。 确保对应的 `Load Checkpoint` 节点加载了 `flux1-dev-fp8.safetensors`,即可测试运行。 diff --git a/zh-CN/tutorials/flux/flux-1-uso.mdx b/zh-CN/tutorials/flux/flux-1-uso.mdx index 62500d24c..eef04d14c 100644 --- a/zh-CN/tutorials/flux/flux-1-uso.mdx +++ b/zh-CN/tutorials/flux/flux-1-uso.mdx @@ -34,11 +34,15 @@ USO 支持三种主要方法: className="prose" target='_blank' href="https://raw.githubusercontent.com/Comfy-Org/workflow_templates/refs/heads/main/templates/flux1_dev_uso_reference_image_gen.json" -style={{ display: 'inline-block', backgroundColor: '#0078D6', color: '#ffffff', padding: '10px 20px', borderRadius: '8px', borderColor: "transparent", textDecoration: 'none', fontWeight: 'bold'}} +style={{ display: 'inline-block', backgroundColor: '#0078D6', color: '#ffffff', padding: '10px 20px', borderRadius: '8px', borderColor: "transparent", textDecoration: 'none', fontWeight: 'bold', marginRight: '10px'}} >

下载 JSON 工作流

+ +

在 Comfy Cloud 上运行

+
+ 使用下面的图片作为输入 ![输入图像](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/flux/bytedance-uso/input.png) diff --git a/zh-CN/tutorials/flux/flux-2-dev.mdx b/zh-CN/tutorials/flux/flux-2-dev.mdx index 667c3b524..286cf9f9a 100644 --- a/zh-CN/tutorials/flux/flux-2-dev.mdx +++ b/zh-CN/tutorials/flux/flux-2-dev.mdx @@ -33,7 +33,7 @@ import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx' -

在 ComfyUI Cloud 上运行

+

在 Comfy Cloud 上运行

## 模型链接 diff --git a/zh-CN/tutorials/flux/flux1-krea-dev.mdx b/zh-CN/tutorials/flux/flux1-krea-dev.mdx index 0bd2b7086..41ab10b3d 100644 --- a/zh-CN/tutorials/flux/flux1-krea-dev.mdx +++ b/zh-CN/tutorials/flux/flux1-krea-dev.mdx @@ -29,10 +29,14 @@ import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx' 下载下面的图片或JSON,并拖入 ComfyUI 以加载对应工作流 ![Flux Krea Dev 工作流](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/flux/krea/flux1_krea_dev.png) - +

下载 JSON 格式工作流

+ +

在 Comfy Cloud 上运行

+
+ #### 2. 模型链接 **Diffusion model** 下面两个模型选择其中一个版本即可 diff --git a/zh-CN/tutorials/image/hidream/hidream-e1.mdx b/zh-CN/tutorials/image/hidream/hidream-e1.mdx index 351576b83..e9e04cc41 100644 --- a/zh-CN/tutorials/image/hidream/hidream-e1.mdx +++ b/zh-CN/tutorials/image/hidream/hidream-e1.mdx @@ -107,6 +107,10 @@ E1.1 是于 2025年7月16日更新迭代的版本, 这个版本支持动态一 ## HiDream E1 ComfyUI 原生 工作流示例 + +

在 Comfy Cloud 上运行

+
+ E1 是于 2025 年 4 月 28 日发布的,这个模型只支持 768*768 的分辨率 ### 1. HiDream-e1 工作流及相关素材 diff --git a/zh-CN/tutorials/image/hidream/hidream-i1.mdx b/zh-CN/tutorials/image/hidream/hidream-i1.mdx index e29b7d280..db61c5a15 100644 --- a/zh-CN/tutorials/image/hidream/hidream-i1.mdx +++ b/zh-CN/tutorials/image/hidream/hidream-i1.mdx @@ -88,6 +88,10 @@ HiDream-I1 是智象未来(HiDream-ai)于2025年4月7日正式开源的文生图 ### HiDream-I1 full 版本工作流 + +

在 Comfy Cloud 上运行

+
+ #### 1. 模型文件下载 请根据你的硬件情况选择合适的版本,点击链接并下载对应的模型文件保存到 `ComfyUI/models/diffusion_models/` 文件夹下。 @@ -121,6 +125,10 @@ HiDream-I1 是智象未来(HiDream-ai)于2025年4月7日正式开源的文生图 ### HiDream-I1 dev 版本工作流 + +

在 Comfy Cloud 上运行

+
+ #### 1. 模型文件下载 请根据你的硬件情况选择合适的版本,点击链接并下载对应的模型文件保存到 `ComfyUI/models/diffusion_models/` 文件夹下。 @@ -153,6 +161,10 @@ HiDream-I1 是智象未来(HiDream-ai)于2025年4月7日正式开源的文生图 ### HiDream-I1 fast 版本工作流 + +

在 Comfy Cloud 上运行

+
+ #### 1. 模型文件下载 请根据你的硬件情况选择合适的版本,点击链接并下载对应的模型文件保存到 `ComfyUI/models/diffusion_models/` 文件夹下。 diff --git a/zh-CN/tutorials/image/omnigen/omnigen2.mdx b/zh-CN/tutorials/image/omnigen/omnigen2.mdx index 39a1250b9..911dcfeb6 100644 --- a/zh-CN/tutorials/image/omnigen/omnigen2.mdx +++ b/zh-CN/tutorials/image/omnigen/omnigen2.mdx @@ -58,6 +58,10 @@ OmniGen2 是一个强大且高效的统一多模态生成模型,总参数量 ### 1. 工作流文件下载 + +

在 Comfy Cloud 上运行

+
+ ![文生图工作流](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/image/omnigen2/image_omnigen2_t2i.png) ### 2. 按步骤完成工作流运行 @@ -82,6 +86,10 @@ OmniGen2 有丰富的图像编辑能力,并且支持为图像添加文本 ### 1. 工作流文件下载 + +

在 Comfy Cloud 上运行

+
+ ![输入图片](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/image/omnigen2/image_omnigen2_image_edit.png) 下载下面的图片,我们将使用它作为输入图片。 ![输入图片](https://raw.githubusercontent.com/Comfy-Org/example_workflows/main/image/omnigen2/input_fairy.png) diff --git a/zh-CN/tutorials/image/qwen/qwen-image-2512.mdx b/zh-CN/tutorials/image/qwen/qwen-image-2512.mdx index 60f83b98f..3016d84d6 100644 --- a/zh-CN/tutorials/image/qwen/qwen-image-2512.mdx +++ b/zh-CN/tutorials/image/qwen/qwen-image-2512.mdx @@ -36,6 +36,10 @@ import UpdateReminder from '/snippets/tutorials/update-reminder.mdx' + + 在 Comfy Cloud 上运行 + + ### 1. 工作流文件 更新 ComfyUI 后,您可以在模板中找到工作流文件,或将下面的工作流拖入 ComfyUI 加载。 diff --git a/zh-CN/tutorials/image/qwen/qwen-image.mdx b/zh-CN/tutorials/image/qwen/qwen-image.mdx index 24137f075..c63f683cc 100644 --- a/zh-CN/tutorials/image/qwen/qwen-image.mdx +++ b/zh-CN/tutorials/image/qwen/qwen-image.mdx @@ -51,6 +51,10 @@ import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx' + + 在 Comfy Cloud 上运行 + + 在本篇文档所附工作流中使用的不同模型有三种 1. Qwen-Image 原版模型 fp8_e4m3fn 2. 8步加速版: Qwen-Image 原版模型 fp8_e4m3fn 使用 lightx2v 8步 LoRA, @@ -151,6 +155,10 @@ Qwen_image_distill 这是一个 ControlNet 模型 + + 在 Comfy Cloud 上运行 + + ### 1. 工作流及输入图片 下载下面的图片并拖入 ComfyUI 以加载工作流 @@ -207,6 +215,10 @@ ComfyUI/ ## Qwen Image ControlNet DiffSynth-ControlNets Model Patches 工作流 + + 在 Comfy Cloud 上运行 + + 这个模型实际上并不是一个 controlnet,而是一个 Model patch, 支持 canny、depth、inpaint 三种不同的控制模式 原始模型地址:[DiffSynth-Studio/Qwen-Image ControlNet](https://www.modelscope.cn/collections/Qwen-Image-ControlNet-6157b44e89d444) @@ -274,6 +286,10 @@ Comfy Org rehost 地址: [Qwen-Image-DiffSynth-ControlNets/model_patches](http ## Qwen Image union ControlNet LoRA 工作流 + + 在 Comfy Cloud 上运行 + + 原始模型地址:[DiffSynth-Studio/Qwen-Image-In-Context-Control-Union](https://www.modelscope.cn/models/DiffSynth-Studio/Qwen-Image-In-Context-Control-Union/) Comfy Org reshot 地址: [qwen_image_union_diffsynth_lora.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image-DiffSynth-ControlNets/blob/main/split_files/loras/qwen_image_union_diffsynth_lora.safetensors): 图像结构控制lora 支持 canny、depth、post、lineart、softedge、normal、openpose diff --git a/zh-CN/tutorials/video/ltxv.mdx b/zh-CN/tutorials/video/ltxv.mdx index 27c972dc2..00497a307 100644 --- a/zh-CN/tutorials/video/ltxv.mdx +++ b/zh-CN/tutorials/video/ltxv.mdx @@ -31,6 +31,10 @@ import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx' 通过首帧图像控制视频生成:[示例首帧](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/ltxv/i2v/girl1.png)。 + +

在 Comfy Cloud 上运行

+
+ LTX-Video 图生视频工作流 ## 文生视频 diff --git a/zh-CN/tutorials/video/wan/wan2-2-animate.mdx b/zh-CN/tutorials/video/wan/wan2-2-animate.mdx index 7ad6a8f2f..2112a836b 100644 --- a/zh-CN/tutorials/video/wan/wan2-2-animate.mdx +++ b/zh-CN/tutorials/video/wan/wan2-2-animate.mdx @@ -42,6 +42,10 @@ Wan-Animate 是由 WAN 团队开发的一个统一的人物动画和替换框架

下载工作流

+ +

在 Comfy Cloud 上运行

+
+ 下载以下素材作为输入: **参考图像:** diff --git a/zh-CN/tutorials/video/wan/wan2-2-fun-inp.mdx b/zh-CN/tutorials/video/wan/wan2-2-fun-inp.mdx index f284a88fa..8b7967ec8 100644 --- a/zh-CN/tutorials/video/wan/wan2-2-fun-inp.mdx +++ b/zh-CN/tutorials/video/wan/wan2-2-fun-inp.mdx @@ -62,6 +62,10 @@ import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx'

下载 JSON 格式工作流

+ +

在 Comfy Cloud 上运行

+
+ 使用下面的素材作为首尾帧 ![Wan2.2 Fun Control ComfyUI 工作流起始帧素材](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_fun_inp/start_image.png) diff --git a/zh-CN/tutorials/video/wan/wan2-2-s2v.mdx b/zh-CN/tutorials/video/wan/wan2-2-s2v.mdx index a45daf724..f71d1d5ae 100644 --- a/zh-CN/tutorials/video/wan/wan2-2-s2v.mdx +++ b/zh-CN/tutorials/video/wan/wan2-2-s2v.mdx @@ -35,6 +35,10 @@ Wan2.2 S2V 模型仓库:[Hugging Face](https://huggingface.co/Wan-AI/Wan2.2-S2

Download JSON Workflow

+ +

在 Comfy Cloud 上运行

+
+ 下载下面的图片及音频作为输入: ![input](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/wan2.2_s2v/input.jpg) diff --git a/zh-CN/tutorials/video/wan/wan2_2.mdx b/zh-CN/tutorials/video/wan/wan2_2.mdx index 2f7ab5e2e..5987ec064 100644 --- a/zh-CN/tutorials/video/wan/wan2_2.mdx +++ b/zh-CN/tutorials/video/wan/wan2_2.mdx @@ -91,6 +91,11 @@ Wan2.2 5B 版本配合 ComfyUI 原生 offloading功能,能很好地适配 8GB

下载 JSON 格式工作流

+ + +

在 Comfy Cloud 上运行

+
+ ### 2. 手动下载模型 **Diffusion Model** @@ -142,6 +147,10 @@ ComfyUI/

下载 JSON 格式工作流

+ +

在 Comfy Cloud 上运行

+
+ ### 2. 手动下载模型 **Diffusion Model** @@ -197,6 +206,10 @@ ComfyUI/

下载 JSON 格式工作流

+ +

在 Comfy Cloud 上运行

+
+ 你可以使用下面的图片作为输入 ![输入图片](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/2.2/input.jpg) @@ -252,6 +265,10 @@ ComfyUI/

下载 JSON 格式工作流

+ +

在 Comfy Cloud 上运行

+
+ 下载下面的素材作为输入 ![Input Material](https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/2.2/wan22_14B_flf2v_start_image.png)