# News

## May, 14th, 2025: New distilled model 13B v0.9.7:
- Release a new 13B distilled model [ltxv-13b-0.9.7-distilled](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled.safetensors)
  * Amazing for iterative work - generates HD videos in 10 seconds, with a low-res preview after just 3 seconds (on H100)!
  * Does not require classifier-free guidance or spatio-temporal guidance.
  * Supports sampling with 8 (recommended) or fewer diffusion steps.
  * Also released a LoRA version of the distilled model, [ltxv-13b-0.9.7-distilled-lora128](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-lora128.safetensors)
    * Requires only 1GB of VRAM
    * Can be used with the full 13B model for fast inference
- Release a new quantized distilled model [ltxv-13b-0.9.7-distilled-fp8](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-fp8.safetensors) for *real-time* generation (on H100) with even less VRAM (supported in the [official ComfyUI workflow](https://github.com/Lightricks/ComfyUI-LTXVideo/))
## May, 5th, 2025: New model 13B v0.9.7:
- Release a new 13B model [ltxv-13b-0.9.7-dev](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.safetensors)
- Release a new quantized model [ltxv-13b-0.9.7-dev-fp8](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev-fp8.safetensors) for faster inference with less VRAM (supported in the [official ComfyUI workflow](https://github.com/Lightricks/ComfyUI-LTXVideo/))
- Release a new distilled model [ltxv-2b-0.9.6-distilled-04-25](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-distilled-04-25.safetensors)
  * 15x faster inference than the non-distilled model.
  * Does not require classifier-free guidance or spatio-temporal guidance.
  * Supports sampling with 8 (recommended) or fewer diffusion steps.
- Improved prompt adherence, motion quality and fine details.
- New default resolution and FPS: 1216 × 704 pixels at 30 FPS
  * Still real time on H100 with the distilled model.
- Support text-to-video and image-to-video generation

# Models & Workflows
| Model | Notes | inference.py config | ComfyUI workflow (Recommended) |
|---|---|---|---|
| ltxv-13b-0.9.7-dev | Highest quality, requires more VRAM | [ltxv-13b-0.9.7-dev.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.7-dev.yaml) | [ltxv-13b-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-base.json) |
| [ltxv-13b-0.9.7-mix](https://app.ltx.studio/motion-workspace?videoModel=ltxv-13b) | Mix ltxv-13b-dev and ltxv-13b-distilled in the same multi-scale rendering workflow for balanced speed-quality | N/A | [ltxv-13b-i2v-mix.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv13b-i2v-mixed-multiscale.json) |
| [ltxv-13b-0.9.7-distilled](https://app.ltx.studio/motion-workspace?videoModel=ltxv) | Faster, less VRAM usage, slight quality reduction compared to 13b. Ideal for rapid iterations | [ltxv-13b-0.9.7-distilled.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-13b-0.9.7-dev.yaml) | [ltxv-13b-dist-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-i2v-base.json) |
| [ltxv-13b-0.9.7-distilled-lora128](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-lora128.safetensors) | LoRA to make ltxv-13b-dev behave like the distilled model | N/A | N/A |
| ltxv-13b-0.9.7-fp8 | Quantized version of ltxv-13b | Coming soon | [ltxv-13b-i2v-base-fp8.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/ltxv-13b-i2v-base-fp8.json) |
| ltxv-13b-0.9.7-distilled-fp8 | Quantized version of ltxv-13b-distilled | Coming soon | [ltxv-13b-dist-fp8-i2v-base.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/13b-distilled/ltxv-13b-dist-fp8-i2v-base.json) |
| ltxv-2b-0.9.6 | Good quality, lower VRAM requirement than ltxv-13b | [ltxv-2b-0.9.6-dev.yaml](https://github.com/Lightricks/LTX-Video/blob/main/configs/ltxv-2b-0.9.6-dev.yaml) | [ltxvideo-i2v.json](https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/example_workflows/low_level/ltxvideo-i2v.json) |
#### For video generation with multiple conditions:
Simply provide a list of paths to the images or video segments you want to condition on, along with their target frame numbers in the generated video. You can also specify the conditioning strength for each item (default: 1.0).
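As an illustration, a run conditioned on a first-frame image and a video segment starting at frame 64 could look like the sketch below. The flag names, file names, and values here are assumptions for illustration; check `python inference.py --help` in your checkout for the exact argument names.

```shell
# Hypothetical invocation: condition on an image at frame 0 (full strength)
# and a short clip at frame 64 (strength 0.8).
python inference.py \
  --prompt "A bird takes off from a mossy branch at dawn" \
  --conditioning_media_paths first_frame.png mid_clip.mp4 \
  --conditioning_start_frames 0 64 \
  --conditioning_strengths 1.0 0.8 \
  --pipeline_config configs/ltxv-13b-0.9.7-dev.yaml
```

The three conditioning lists are parallel: the i-th path is placed at the i-th start frame with the i-th strength.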
# ⚡️ Training

We provide an open-source repository for fine-tuning the LTX-Video model: [LTX-Video-Trainer](https://github.com/Lightricks/LTX-Video-Trainer).
This repository supports both the 2B and 13B model variants, enabling full fine-tuning as well as LoRA (Low-Rank Adaptation) fine-tuning for more efficient training.
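The efficiency of LoRA comes from replacing a full weight update with a low-rank one, ΔW = (α/r)·B·A, where only the small A and B matrices are trained. A minimal NumPy sketch of the parameter savings (the dimensions are illustrative, not the model's actual layer sizes; rank 128 echoes the released lora128 checkpoint's name):

```python
import numpy as np

def lora_delta(rank, d_out, d_in, alpha=16, seed=0):
    """Build the low-rank update delta_W = (alpha / rank) * B @ A."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.02, size=(rank, d_in))  # A gets a small random init
    B = np.zeros((d_out, rank))  # B starts at zero, so training begins at the base model
    return (alpha / rank) * (B @ A), A, B

d_out, d_in, rank = 4096, 4096, 128  # illustrative layer size
delta, A, B = lora_delta(rank, d_out, d_in)

full_params = d_out * d_in        # trainable params in full fine-tuning of this layer
lora_params = A.size + B.size     # trainable params with LoRA
print(full_params, lora_params)   # 16777216 vs 1048576: a 16x reduction
```

Only A and B are stored in a LoRA checkpoint, which is why the adapter file is far smaller than the base model's weights.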
Explore the repository to customize the model for your specific use cases!
More information and training instructions can be found in the [README](https://github.com/Lightricks/LTX-Video-Trainer/blob/main/README.md).