diff --git a/docs.json b/docs.json
index 16761296..7e0651e5 100644
--- a/docs.json
+++ b/docs.json
@@ -154,6 +154,7 @@
"pages": [
"tutorials/video/wan/wan-video",
"tutorials/video/wan/vace",
+ "tutorials/video/wan/wan-ati",
"tutorials/video/wan/fun-control",
"tutorials/video/wan/fun-camera",
"tutorials/video/wan/fun-inp",
@@ -693,11 +694,12 @@
"group": "万相视频",
"pages": [
"zh-CN/tutorials/video/wan/wan-video",
+ "zh-CN/tutorials/video/wan/vace",
+ "zh-CN/tutorials/video/wan/wan-ati",
"zh-CN/tutorials/video/wan/fun-control",
"zh-CN/tutorials/video/wan/fun-camera",
"zh-CN/tutorials/video/wan/fun-inp",
- "zh-CN/tutorials/video/wan/wan-flf",
- "zh-CN/tutorials/video/wan/vace"
+ "zh-CN/tutorials/video/wan/wan-flf"
]
},
{
diff --git a/images/tutorial/video/wan/wan_ati_guide.jpg b/images/tutorial/video/wan/wan_ati_guide.jpg
new file mode 100644
index 00000000..f405f958
Binary files /dev/null and b/images/tutorial/video/wan/wan_ati_guide.jpg differ
diff --git a/tutorials/video/wan/wan-ati.mdx b/tutorials/video/wan/wan-ati.mdx
new file mode 100644
index 00000000..e9613469
--- /dev/null
+++ b/tutorials/video/wan/wan-ati.mdx
@@ -0,0 +1,83 @@
+---
+title: "Wan ATI ComfyUI Native Workflow Tutorial"
+description: "Using trajectory control for video generation."
+sidebarTitle: "WAN ATI"
+---
+
+import UpdateReminder from '/snippets/en/tutorials/update-reminder.mdx'
+
+
+**ATI (Any Trajectory Instruction)** is a controllable video generation framework proposed by the ByteDance team. ATI is built on top of Wan2.1 and supports unified control of objects, local regions, and camera motion in videos through arbitrary trajectory instructions.
+
+Project URL: [https://github.com/bytedance/ATI](https://github.com/bytedance/ATI)
+
+## Key Features
+
+- **Unified Motion Control**: Supports trajectory control for multiple motion types including objects, local regions, and camera movements.
+- **Interactive Trajectory Editor**: Visual tool that allows users to freely draw and edit motion trajectories on images.
+- **Wan2.1 Compatible**: Built on the official Wan2.1 implementation, so environments and model structures remain compatible.
+- **Rich Visualization Tools**: Supports visualization of input trajectories, output videos, and trajectory overlays.
+
+
+## WAN ATI Trajectory Control Workflow Example
+
+<UpdateReminder/>
+
+### 1. Workflow Download
+
+Download the video below and drag it into ComfyUI to load the corresponding workflow.
+
+
+We will use the following image as input:
+
+
+### 2. Model Download
+
+If the model files weren't downloaded automatically when you loaded the workflow, you can download them manually from the links below.
+
+**Diffusion Model**
+- [Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors)
+
+**VAE**
+- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)
+
+**Text encoders** (choose one of the following models)
+- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)
+- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)
+
+**clip_vision**
+- [clip_vision_h.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors)
+
+File save location:
+```
+ComfyUI/
+├───📂 models/
+│   ├───📂 diffusion_models/
+│   │   └───Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors
+│   ├───📂 text_encoders/
+│   │   └───umt5_xxl_fp8_e4m3fn_scaled.safetensors # or other version
+│   ├───📂 clip_vision/
+│   │   └───clip_vision_h.safetensors
+│   └───📂 vae/
+│       └───wan_2.1_vae.safetensors
+```
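+
+If you'd rather script the downloads, here is a minimal sketch that fetches each file with Python's standard library and saves it to the matching folder. It assumes ComfyUI is installed at `~/ComfyUI` (adjust `COMFYUI_DIR` to your setup) and grabs the fp8 text encoder; swap in the fp16 URL from the list above if you chose that version.
+
+```python
+# Minimal download sketch. Assumes ComfyUI lives at ~/ComfyUI;
+# adjust COMFYUI_DIR to match your installation.
+from pathlib import Path
+from urllib.request import urlretrieve
+
+COMFYUI_DIR = Path.home() / "ComfyUI"
+
+FILES = {
+    "models/diffusion_models/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors":
+        "https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors",
+    "models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors":
+        "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors",
+    "models/clip_vision/clip_vision_h.safetensors":
+        "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors",
+    "models/vae/wan_2.1_vae.safetensors":
+        "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors",
+}
+
+for rel_path, url in FILES.items():
+    target = COMFYUI_DIR / rel_path
+    target.parent.mkdir(parents=True, exist_ok=True)  # create the folder if missing
+    if target.exists():
+        print(f"skip (already exists): {target}")
+        continue
+    print(f"downloading {rel_path} ...")
+    urlretrieve(url, str(target))  # follows HF redirects and writes to disk
+```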
+
+### 3. Complete the workflow execution step by step
+
+![Wan ATI workflow step guide](/images/tutorial/video/wan/wan_ati_guide.jpg)
+
+Please follow the numbered steps in the image to ensure the workflow runs smoothly.
+
+1. Ensure the `Load Diffusion Model` node has loaded the `Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors` model
+2. Ensure the `Load CLIP` node has loaded the `umt5_xxl_fp8_e4m3fn_scaled.safetensors` model
+3. Ensure the `Load VAE` node has loaded the `wan_2.1_vae.safetensors` model
+4. Ensure the `Load CLIP Vision` node has loaded the `clip_vision_h.safetensors` model
+5. Upload the provided input image in the `Load Image` node
+6. Trajectory editing: ComfyUI does not yet include a built-in trajectory editor, so you can use the link below to create and edit trajectories (a programmatic alternative is sketched after this list)
+ - [Online Trajectory Editing Tool](https://comfyui-wiki.github.io/Trajectory-Annotation-Tool/)
+7. If you need to modify the prompts (positive and negative), edit the `CLIP Text Encoder` node marked `5` in the image
+8. Click the `Run` button, or use the shortcut `Ctrl(cmd) + Enter` to execute video generation
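+
+For step 6, the linked editor lets you draw trajectories by hand. If you'd rather generate one programmatically (for example, for repeatable tests), the sketch below writes a simple left-to-right track as a JSON list of per-frame `{x, y}` points. Note that this output format is an assumption for illustration; export a trajectory from the online tool first and match whatever structure it actually produces.
+
+```python
+# Hypothetical trajectory generator. The per-frame {"x": ..., "y": ...}
+# format is an assumption -- match the export format of the editor you use.
+import json
+
+def linear_trajectory(start, end, num_frames):
+    """Interpolate a straight-line track from start to end, one point per frame."""
+    (x0, y0), (x1, y1) = start, end
+    return [
+        {
+            "x": round(x0 + (x1 - x0) * i / (num_frames - 1)),
+            "y": round(y0 + (y1 - y0) * i / (num_frames - 1)),
+        }
+        for i in range(num_frames)
+    ]
+
+# 81 frames is a typical Wan2.1 clip length; match your workflow's frame count.
+track = linear_trajectory(start=(120, 240), end=(520, 240), num_frames=81)
+print(json.dumps([track]))  # one list per controlled object or region
+```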
\ No newline at end of file
diff --git a/zh-CN/tutorials/video/wan/wan-ati.mdx b/zh-CN/tutorials/video/wan/wan-ati.mdx
new file mode 100644
index 00000000..8518a10f
--- /dev/null
+++ b/zh-CN/tutorials/video/wan/wan-ati.mdx
@@ -0,0 +1,85 @@
+---
+title: "Wan ATI ComfyUI 原生工作流教程"
+description: "使用轨迹控制视频生成。"
+sidebarTitle: "WAN ATI"
+---
+
+import UpdateReminder from '/snippets/zh/tutorials/update-reminder.mdx'
+
+
+**ATI(Any Trajectory Instruction)** 是由字节跳动团队提出的可控视频生成框架。ATI 基于 Wan2.1 实现,支持通过任意轨迹指令对视频中的物体、局部区域及摄像机运动进行统一控制。
+
+项目地址:[https://github.com/bytedance/ATI](https://github.com/bytedance/ATI)
+
+## 主要特性
+
+- **统一运动控制**:支持物体、局部、摄像机等多种运动类型的轨迹控制。
+- **交互式轨迹编辑器**:可视化工具,用户可在图片上自由绘制、编辑运动轨迹。
+- **兼容 Wan2.1**:基于 Wan2.1 官方实现,环境和模型结构兼容。
+- **丰富的可视化工具**:支持输入轨迹、输出视频及轨迹可视化。
+
+
+## WAN ATI 轨迹控制工作流示例
+
+<UpdateReminder/>
+
+### 1. 工作流下载
+
+下载下面的视频并拖入 ComfyUI 中,以加载对应的工作流。
+
+
+我们将使用下面的素材作为输入:
+
+
+### 2. 模型下载
+
+如果加载工作流时没有成功自动下载模型文件,可以尝试使用下面的链接手动下载。
+
+**Diffusion Model**
+- [Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors)
+
+**VAE**
+- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)
+
+**Text encoders** 从以下模型中任选其一
+- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)
+- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)
+
+**clip_vision**
+- [clip_vision_h.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors)
+
+文件保存位置:
+
+```
+ComfyUI/
+├───📂 models/
+│   ├───📂 diffusion_models/
+│   │   └───Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors
+│   ├───📂 text_encoders/
+│   │   └───umt5_xxl_fp8_e4m3fn_scaled.safetensors # or other version
+│   ├───📂 clip_vision/
+│   │   └───clip_vision_h.safetensors
+│   └───📂 vae/
+│       └───wan_2.1_vae.safetensors
+```
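+
+如果你更习惯用脚本下载模型,下面是一个使用 Python 标准库的最小示例,它会把每个文件保存到对应的文件夹中。该示例假设 ComfyUI 安装在 `~/ComfyUI`(请按你的实际安装位置调整 `COMFYUI_DIR`),并下载 fp8 版本的 text encoder;如需 fp16 版本,请替换为上方列表中的对应链接。
+
+```python
+# 最小下载示例:假设 ComfyUI 位于 ~/ComfyUI,
+# 请根据实际安装位置调整 COMFYUI_DIR。
+from pathlib import Path
+from urllib.request import urlretrieve
+
+COMFYUI_DIR = Path.home() / "ComfyUI"
+
+FILES = {
+    "models/diffusion_models/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors":
+        "https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors",
+    "models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors":
+        "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors",
+    "models/clip_vision/clip_vision_h.safetensors":
+        "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors",
+    "models/vae/wan_2.1_vae.safetensors":
+        "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors",
+}
+
+for rel_path, url in FILES.items():
+    target = COMFYUI_DIR / rel_path
+    target.parent.mkdir(parents=True, exist_ok=True)  # 目录不存在时自动创建
+    if target.exists():
+        print(f"跳过(已存在): {target}")
+        continue
+    print(f"正在下载 {rel_path} ...")
+    urlretrieve(url, str(target))  # 跟随 HF 重定向并写入磁盘
+```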
+
+
+### 3. 按步骤完成工作流的运行
+
+![Wan ATI 工作流步骤图示](/images/tutorial/video/wan/wan_ati_guide.jpg)
+
+请参照图片中的序号逐步确认,以保证对应工作流的顺利运行。
+
+1. 确保 `Load Diffusion Model` 节点加载了 `Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors` 模型
+2. 确保 `Load CLIP` 节点加载了 `umt5_xxl_fp8_e4m3fn_scaled.safetensors` 模型
+3. 确保 `Load VAE` 节点加载了 `wan_2.1_vae.safetensors` 模型
+4. 确保 `Load CLIP Vision` 节点加载了 `clip_vision_h.safetensors` 模型
+5. 在 `Load Image` 节点上传提供的输入图片
+6. 轨迹编辑:目前 ComfyUI 中尚未内置对应的轨迹编辑器,你可以使用下面的链接完成轨迹编辑(列表后附有一个以编程方式生成轨迹的示例)
+ - [在线轨迹编辑工具](https://comfyui-wiki.github.io/Trajectory-Annotation-Tool/)
+7. 如果你需要修改提示词(正向及负向),请在图片中序号 `5` 的 `CLIP Text Encoder` 节点中进行修改
+8. 点击 `Run` 按钮,或者使用快捷键 `Ctrl(cmd) + Enter(回车)` 来执行视频生成
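+
+对于第 6 步,链接中的编辑器可以手动绘制轨迹。如果你希望以编程方式生成轨迹(例如用于可复现的测试),下面的示例会生成一条从左到右的直线轨迹,并输出为逐帧 `{x, y}` 坐标点的 JSON 列表。注意:这里的输出格式仅为示意性假设,请先从在线工具导出一条轨迹,并以其实际格式为准。
+
+```python
+# 示意性的轨迹生成器:逐帧 {"x": ..., "y": ...} 的格式仅为假设,
+# 请以你所用编辑器的实际导出格式为准。
+import json
+
+def linear_trajectory(start, end, num_frames):
+    """在 start 与 end 之间做线性插值,每帧一个坐标点。"""
+    (x0, y0), (x1, y1) = start, end
+    return [
+        {
+            "x": round(x0 + (x1 - x0) * i / (num_frames - 1)),
+            "y": round(y0 + (y1 - y0) * i / (num_frames - 1)),
+        }
+        for i in range(num_frames)
+    ]
+
+# 81 帧是 Wan2.1 常见的视频长度,请与工作流中的帧数设置保持一致
+track = linear_trajectory(start=(120, 240), end=(520, 240), num_frames=81)
+print(json.dumps([track]))  # 外层列表:每个被控制的对象/区域对应一条轨迹
+```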
\ No newline at end of file