Commit b5af8bb ("update")
1 parent fe87be6
1 file changed: +18 additions, -2 deletions

docs/source/en/using-diffusers/text-img2vid.md
````diff
@@ -50,7 +50,6 @@ video = pipe(
     guidance_scale=6,
     generator=torch.Generator(device="cuda").manual_seed(42),
 ).frames[0]
-
 export_to_video(video, "output.mp4", fps=8)
 ```
 
@@ -94,6 +93,10 @@ video = pipe(
 export_to_video(video, "output.mp4", fps=15)
 ```
 
+<div class="flex justify-center">
+<img src="https://huggingface.co/Lightricks/LTX-Video/resolve/main/media/ltx-video_example_00014.gif"/>
+</div>
+
 </hfoption>
 <hfoption id="LTX-Video">
 
@@ -117,6 +120,10 @@ video = pipe(
 export_to_video(video, "output.mp4", fps=24)
 ```
 
+<div class="flex justify-center">
+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hunyuan-video-output.gif"/>
+</div>
+
 </hfoption>
 <hfoption id="Mochi-1">
 
@@ -135,10 +142,13 @@ pipe.enable_vae_tiling()
 
 prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
 video = pipe(prompt, num_frames=84).frames[0]
-
 export_to_video(video, "output.mp4", fps=30)
 ```
 
+<div class="flex justify-center">
+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/mochi-video-output.gif"/>
+</div>
+
 </hfoption>
 </hfoptions>
 
@@ -456,3 +466,9 @@ If memory is not an issue and you want to optimize for speed, try wrapping the U
 + pipeline.to("cuda")
 + pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
 ```
+
+## Quantization
+
+Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on video quality depending on the video model.
+
+Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends (bitsandbytes, torchao, gguf) and selecting a quantization backend that supports your use case.
````
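The added Quantization section states that weights are stored in a lower precision data type at some cost to quality. As a library-independent sketch of that trade-off (plain Python, not the bitsandbytes/torchao/gguf backends the doc links to; the function names here are made up for illustration), absmax int8 quantization maps each float weight to an 8-bit code plus one shared scale:

```python
def quantize_absmax_int8(weights):
    """Map floats to int8 codes in [-127, 127] using a single absmax scale.

    Storing int8 codes plus one float scale takes roughly 4x less memory
    than float32 weights, at the cost of rounding error.
    """
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale


def dequantize_absmax_int8(codes, scale):
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]


weights = [0.12, -0.5, 0.33, 1.27, -1.0]
codes, scale = quantize_absmax_int8(weights)
restored = dequantize_absmax_int8(codes, scale)

# Every code fits in int8, and the round-trip error is bounded by
# half a quantization step (scale / 2).
assert all(-127 <= c <= 127 for c in codes)
assert all(abs(w - r) <= scale / 2 for w, r in zip(weights, restored))
```

Real backends refine this idea (per-channel scales, 4-bit formats such as NF4, outlier handling), which is why the quality impact varies between models, as the section notes.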
