Commit 4851224

use previous checkpoint
1 parent 3854917 commit 4851224

File tree

1 file changed: +4 −4 lines changed

docs/source/en/using-diffusers/text-img2vid.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -1,4 +1,4 @@
-<!--Copyright 2024 The HuggingFace Team. All rights reserved.
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
 
 Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
 the License. You may obtain a copy of the License at
@@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License.
 
 # Video generation
 
-Video generation models add a temporal dimension to bring images, or frames, together to create a video. These models are trained on large-scale datasets of high-quality text-video pairs to learn how to combine the modalities to ensure the generated video is coherent and realistic.
+Video generation models include a temporal dimension to bring images, or frames, together to create a video. These models are trained on large-scale datasets of high-quality text-video pairs to learn how to combine the modalities to ensure the generated video is coherent and realistic.
 
 [Explore](https://huggingface.co/models?other=video-generation) some of the more popular open-source video generation models available from Diffusers below.
 
@@ -23,7 +23,7 @@ Video generation models add a temporal dimension to bring images, or frames, tog
 
 The CogVideoX family also includes models capable of generating videos from images and videos in addition to text. The image-to-video models are indicated by **I2V** in the checkpoint name, and they should be used with the [`CogVideoXImageToVideoPipeline`]. The regular checkpoints support video-to-video through the [`CogVideoXVideoToVideoPipeline`].
 
-The example below demonstrates how to generate a video from an image and text prompt with [THUDM/CogVideoX1.5-5B-I2V](https://huggingface.co/THUDM/CogVideoX1.5-5B-I2V).
+The example below demonstrates how to generate a video from an image and text prompt with [THUDM/CogVideoX-5b-I2V](https://huggingface.co/THUDM/CogVideoX-5b-I2V).
 
 ```py
 import torch
@@ -33,7 +33,7 @@ from diffusers.utils import export_to_video, load_image
 prompt = "A vast, shimmering ocean flows gracefully under a twilight sky, its waves undulating in a mesmerizing dance of blues and greens. The surface glints with the last rays of the setting sun, casting golden highlights that ripple across the water. Seagulls soar above, their cries blending with the gentle roar of the waves. The horizon stretches infinitely, where the ocean meets the sky in a seamless blend of hues. Close-ups reveal the intricate patterns of the waves, capturing the fluidity and dynamic beauty of the sea in motion."
 image = load_image(image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cogvideox/cogvideox_rocket.png")
 pipe = CogVideoXImageToVideoPipeline.from_pretrained(
-    "THUDM/CogVideoX1.5-5B-I2V",
+    "THUDM/CogVideoX-5b-I2V",
     torch_dtype=torch.bfloat16
 )
 
````

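For context, here is a minimal end-to-end sketch of the image-to-video example this diff points back at, using the restored `THUDM/CogVideoX-5b-I2V` checkpoint. The generation parameters (`guidance_scale`, `num_inference_steps`, `fps`) and the `pipe.to("cuda")` step are illustrative assumptions, not part of this commit, and the prompt is abbreviated from the one in the doc.

```py
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Conditioning inputs from the doc page (full prompt shown in the diff above).
prompt = "A vast, shimmering ocean flows gracefully under a twilight sky..."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cogvideox/cogvideox_rocket.png"
)

# Load the checkpoint this commit switches back to.
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Generate frames conditioned on the image and prompt;
# parameter values here are illustrative, not from the commit.
video = pipe(
    image=image,
    prompt=prompt,
    guidance_scale=6,
    num_inference_steps=50,
).frames[0]

# Write the generated frames out as an MP4.
export_to_video(video, "output.mp4", fps=8)
```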