---
title: "Wan ATI ComfyUI Native Workflow Tutorial"
description: "Using trajectory control for video generation."
sidebarTitle: "WAN ATI"
---

import UpdateReminder from '/snippets/en/tutorials/update-reminder.mdx'

**ATI (Any Trajectory Instruction)** is a controllable video generation framework from the ByteDance team. Built on Wan2.1, ATI supports unified control of objects, local regions, and camera motion in videos through arbitrary trajectory instructions.

Project URL: [https://github.com/bytedance/ATI](https://github.com/bytedance/ATI)
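
To make the "unified control" idea concrete, here is a small conceptual sketch (not ATI's actual API): object motion, local-region motion, and camera motion all reduce to per-frame 2D point trajectories, and the only real difference is which points move. The frame count and coordinates below are illustrative assumptions.

```python
# Conceptual sketch only -- not ATI's real interface.
def linear_track(start, end, num_frames=81):
    """Interpolate an (x, y) control point linearly across the clip."""
    return [
        (start[0] + i / (num_frames - 1) * (end[0] - start[0]),
         start[1] + i / (num_frames - 1) * (end[1] - start[1]))
        for i in range(num_frames)
    ]

# Object motion: a single point on the subject drifts to the right.
object_track = linear_track((120, 200), (400, 200))

# Camera pan: anchor points spread across the frame all shift left in unison.
camera_tracks = [linear_track((x, y), (x - 150, y))
                 for x in (80, 320, 560) for y in (90, 270)]
```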

## Key Features

- **Unified Motion Control**: Supports trajectory control for multiple motion types, including objects, local regions, and camera movements.
- **Interactive Trajectory Editor**: A visual tool that lets users freely draw and edit motion trajectories on images.
- **Wan2.1 Compatible**: Based on the official Wan2.1 implementation, compatible with its environment and model structure.
- **Rich Visualization Tools**: Supports visualization of input trajectories, output videos, and trajectory overlays.

## WAN ATI Trajectory Control Workflow Example

<UpdateReminder />

### 1. Workflow Download

Download the video below and drag it into ComfyUI to load the corresponding workflow:

<video
  controls
  className="w-full aspect-video"
  src="https://raw.githubusercontent.com/Comfy-Org/example_workflows/refs/heads/main/video/wan/ati/wan_ati.mp4"
></video>

We will use the following image as input:

### 2. Model Download

If the model files fail to download automatically with the workflow, you can download them manually using the links below.

**Diffusion Model**
- [Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors)

**VAE**
- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)

**Text encoders** (choose one of the following models)
- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)
- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)

**clip_vision**
- [clip_vision_h.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors)

File save location:
```
ComfyUI/
├───📂 models/
│   ├───📂 diffusion_models/
│   │   └─── Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors
│   ├───📂 text_encoders/
│   │   └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors # or other version
│   ├───📂 clip_vision/
│   │   └─── clip_vision_h.safetensors
│   └───📂 vae/
│       └─── wan_2.1_vae.safetensors
```
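
If you'd rather script the downloads, here is a minimal sketch using the `huggingface_hub` Python package (assumptions: it is installed via `pip install huggingface-hub`, and `ComfyUI/` sits in the directory where you run the script):

```python
# Minimal download sketch -- adjust COMFYUI_DIR to your installation.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI_DIR = Path("ComfyUI")  # assumption: ComfyUI lives in the current directory

FILES = [
    # (repo_id, file path inside the repo, target models/ subfolder)
    ("Kijai/WanVideo_comfy",
     "Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors", "diffusion_models"),
    ("Comfy-Org/Wan_2.1_ComfyUI_repackaged",
     "split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors", "text_encoders"),
    ("Comfy-Org/Wan_2.1_ComfyUI_repackaged",
     "split_files/clip_vision/clip_vision_h.safetensors", "clip_vision"),
    ("Comfy-Org/Wan_2.1_ComfyUI_repackaged",
     "split_files/vae/wan_2.1_vae.safetensors", "vae"),
]

for repo_id, filename, subfolder in FILES:
    cached = hf_hub_download(repo_id=repo_id, filename=filename)  # into the HF cache
    target = COMFYUI_DIR / "models" / subfolder / Path(filename).name
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(cached, target)  # copy from the cache into the ComfyUI tree
    print(f"Saved {target}")
```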

### 3. Complete the Workflow Step by Step

Please follow the numbered steps in the image to ensure smooth execution of the workflow.

1. Ensure the `Load Diffusion Model` node has loaded the `Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors` model
2. Ensure the `Load CLIP` node has loaded the `umt5_xxl_fp8_e4m3fn_scaled.safetensors` model
3. Ensure the `Load VAE` node has loaded the `wan_2.1_vae.safetensors` model
4. Ensure the `Load CLIP Vision` node has loaded the `clip_vision_h.safetensors` model
5. Upload the provided input image in the `Load Image` node
6. Trajectory editing: ComfyUI does not yet include a trajectory editor, so use the tool below to edit trajectories (or generate them programmatically, as in the sketch after this list)
   - [Online Trajectory Editing Tool](https://comfyui-wiki.github.io/Trajectory-Annotation-Tool/)
7. To modify the prompts (positive and negative), make your changes in the `CLIP Text Encoder` node numbered `5`
8. Click the `Run` button, or use the shortcut `Ctrl(cmd) + Enter`, to execute the video generation
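
Until a native editor lands, a trajectory can also be generated programmatically. The sketch below builds a simple left-to-right drag as one track of per-frame `{x, y}` pixel coordinates. The exact JSON schema the workflow expects is an assumption here (compare it against what the online tool exports), as are the 81-frame clip length and the coordinates:

```python
# Hypothetical trajectory generator: a straight-line drag across the image.
# The JSON layout (a list of tracks, each a per-frame list of {x, y} points)
# is an assumption -- verify it against the online editor's export.
import json

NUM_FRAMES = 81                       # assumption: typical Wan2.1 clip length
START, END = (100, 240), (540, 240)   # pixel coordinates in the input image

track = []
for i in range(NUM_FRAMES):
    t = i / (NUM_FRAMES - 1)          # 0.0 -> 1.0 across the clip
    track.append({
        "x": round(START[0] + t * (END[0] - START[0])),
        "y": round(START[1] + t * (END[1] - START[1])),
    })

print(json.dumps([track]))            # one track; add more lists for more objects
```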