Commit ada8109

Fix SVD doc (#5983)
fix url
1 parent: b34acbd

File tree

  • docs/source/en/using-diffusers/svd.md

1 file changed: 7 additions, 5 deletions

docs/source/en/using-diffusers/svd.md

Lines changed: 7 additions & 5 deletions
````diff
@@ -54,7 +54,7 @@ export_to_video(frames, "generated.mp4", fps=7)
 ```

 <video width="1024" height="576" controls>
-    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket_generated.mp4?download=true" type="video/mp4">
+    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket_generated.mp4" type="video/mp4">
 </video>

 <Tip>
@@ -82,8 +82,9 @@ Video generation is very memory intensive as we have to essentially generate `nu
 - enable feed-forward chunking: The feed-forward layer runs in a loop instead of running with a single huge feed-forward batch size
 - reduce `decode_chunk_size`: This means that the VAE decodes frames in chunks instead of decoding them all together. **Note**: In addition to leading to a small slowdown, this method also slightly leads to video quality deterioration

-You can enable them as follows:
-```diff
+You can enable them as follows:
+
+```diff
 -pipe.enable_model_cpu_offload()
 -frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
 +pipe.enable_model_cpu_offload()
@@ -105,14 +106,15 @@ It accepts the following arguments:

 Here is an example of using micro-conditioning to generate a video with more motion.

+
 ```python
 import torch

 from diffusers import StableVideoDiffusionPipeline
 from diffusers.utils import load_image, export_to_video

 pipe = StableVideoDiffusionPipeline.from_pretrained(
-  "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
+    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
 )
 pipe.enable_model_cpu_offload()

@@ -126,6 +128,6 @@ export_to_video(frames, "generated.mp4", fps=7)
 ```

 <video width="1024" height="576" controls>
-    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket_generated_motion.mp4?download=true" type="video/mp4">
+    <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket_generated_motion.mp4" type="video/mp4">
 </video>
````
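The substance of the fix is dropping the `?download=true` query string from the embedded video URLs, so the `<video>` tag streams the file rather than pointing at a forced-download link. A minimal sketch of the same cleanup using Python's standard `urllib.parse` (the `strip_query` helper name is my own, not part of the commit):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url: str) -> str:
    """Return the URL with its query string and fragment removed."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

old = ("https://huggingface.co/datasets/huggingface/documentation-images"
       "/resolve/main/diffusers/svd/rocket_generated.mp4?download=true")
print(strip_query(old))  # same URL without ?download=true
```

This is equivalent to the hand edit made in the diff above; a script like this would only be worthwhile if many embedded URLs carried the same query parameter.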