Commit 462d2f4

Merge branch 'main' into refactor-rope-and-sincos

2 parents 4d60d14 + d9029f2

26 files changed: +2017 −105 lines

docs/source/en/_toctree.yml

Lines changed: 4 additions & 0 deletions

@@ -75,6 +75,8 @@
     title: Outpainting
   title: Advanced inference
 - sections:
+  - local: using-diffusers/cogvideox
+    title: CogVideoX
   - local: using-diffusers/sdxl
     title: Stable Diffusion XL
   - local: using-diffusers/sdxl_turbo
@@ -129,6 +131,8 @@
     title: T2I-Adapters
   - local: training/instructpix2pix
     title: InstructPix2Pix
+  - local: training/cogvideox
+    title: CogVideoX
   title: Models
 - isExpanded: false
   sections:

docs/source/en/api/pipelines/cogvideox.md

Lines changed: 10 additions & 0 deletions

@@ -36,6 +36,10 @@ There are two models available that can be used with the text-to-video and video
 There is one model available that can be used with the image-to-video CogVideoX pipeline:
 - [`THUDM/CogVideoX-5b-I2V`](https://huggingface.co/THUDM/CogVideoX-5b-I2V): The recommended dtype for running this model is `bf16`.
 
+There are two models that support pose controllable generation (by the [Alibaba-PAI](https://huggingface.co/alibaba-pai) team):
+- [`alibaba-pai/CogVideoX-Fun-V1.1-2b-Pose`](https://huggingface.co/alibaba-pai/CogVideoX-Fun-V1.1-2b-Pose): The recommended dtype for running this model is `bf16`.
+- [`alibaba-pai/CogVideoX-Fun-V1.1-5b-Pose`](https://huggingface.co/alibaba-pai/CogVideoX-Fun-V1.1-5b-Pose): The recommended dtype for running this model is `bf16`.
+
 ## Inference
 
 Use [`torch.compile`](https://huggingface.co/docs/diffusers/main/en/tutorials/fast_diffusion#torchcompile) to reduce the inference latency.
@@ -118,6 +122,12 @@ It is also worth noting that torchao quantization is fully compatible with [torc
 - all
 - __call__
 
+## CogVideoXFunControlPipeline
+
+[[autodoc]] CogVideoXFunControlPipeline
+- all
+- __call__
+
 ## CogVideoXPipelineOutput
 
 [[autodoc]] pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput
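
The new `CogVideoXFunControlPipeline` entry above, together with the Alibaba-PAI pose checkpoints, enables pose-controlled generation. A minimal sketch of driving it, assuming the pipeline accepts a `control_video` argument of per-frame pose images and using a hypothetical `pose_sequence.mp4` input:

```py
import torch
from diffusers import CogVideoXFunControlPipeline
from diffusers.utils import export_to_video, load_video

# Hypothetical pose video used as the control signal; replace with your own file or URL.
control_video = load_video("pose_sequence.mp4")

pipe = CogVideoXFunControlPipeline.from_pretrained(
    "alibaba-pai/CogVideoX-Fun-V1.1-5b-Pose",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

video = pipe(
    prompt="A dancer performing on a neon-lit stage, cinematic lighting",
    control_video=control_video,  # assumed parameter: per-frame pose conditioning
    num_inference_steps=50,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```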

docs/source/en/training/cogvideox.md

Lines changed: 291 additions & 0 deletions
Large diffs are not rendered by default.
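
Since the 291-line training guide is collapsed here, the following is only a rough sketch of how LoRA weights produced by such a fine-tune might be loaded back into the pipeline; the `cogvideox-lora` output directory is a placeholder, and `load_lora_weights` is the standard diffusers LoRA loader:

```py
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
# "cogvideox-lora" is a placeholder path to a LoRA fine-tuning output directory.
pipe.load_lora_weights("cogvideox-lora", weight_name="pytorch_lora_weights.safetensors")
pipe.enable_model_cpu_offload()
```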
docs/source/en/using-diffusers/cogvideox.md

Lines changed: 120 additions & 0 deletions

@@ -0,0 +1,120 @@
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License.
+-->
+# CogVideoX
+
+CogVideoX is a text-to-video generation model focused on creating more coherent videos aligned with a prompt. It achieves this using several methods:
+
+- a 3D variational autoencoder that compresses videos spatially and temporally, improving compression rate and video accuracy.
+
+- an expert transformer block to help align text and video, and a 3D full attention module for capturing and creating spatially and temporally accurate videos.
+
+## Load model checkpoints
+
+Model weights may be stored in separate subfolders on the Hub or locally, in which case you should use the [`~DiffusionPipeline.from_pretrained`] method.
+
+```py
+import torch
+from diffusers import CogVideoXPipeline, CogVideoXImageToVideoPipeline
+
+pipe = CogVideoXPipeline.from_pretrained(
+    "THUDM/CogVideoX-2b",
+    torch_dtype=torch.float16
+)
+
+pipe = CogVideoXImageToVideoPipeline.from_pretrained(
+    "THUDM/CogVideoX-5b-I2V",
+    torch_dtype=torch.bfloat16
+)
+```
+
+## Text-to-Video
+
+For text-to-video, pass a text prompt. By default, CogVideoX generates a 720x480 video for the best results.
+
+```py
+import torch
+from diffusers import CogVideoXPipeline
+from diffusers.utils import export_to_video
+
+prompt = "An elderly gentleman, with a serene expression, sits at the water's edge, a steaming cup of tea by his side. He is engrossed in his artwork, brush in hand, as he renders an oil painting on a canvas that's propped up against a small, weathered table. The sea breeze whispers through his silver hair, gently billowing his loose-fitting white shirt, while the salty air adds an intangible element to his masterpiece in progress. The scene is one of tranquility and inspiration, with the artist's canvas capturing the vibrant hues of the setting sun reflecting off the tranquil sea."
+
+pipe = CogVideoXPipeline.from_pretrained(
+    "THUDM/CogVideoX-5b",
+    torch_dtype=torch.bfloat16
+)
+
+pipe.enable_model_cpu_offload()
+pipe.vae.enable_tiling()
+
+video = pipe(
+    prompt=prompt,
+    num_videos_per_prompt=1,
+    num_inference_steps=50,
+    num_frames=49,
+    guidance_scale=6,
+    generator=torch.Generator(device="cuda").manual_seed(42),
+).frames[0]
+
+export_to_video(video, "output.mp4", fps=8)
+```
+
+<div class="flex justify-center">
+  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cogvideox/cogvideox_out.gif" alt="generated video of an elderly gentleman painting by the sea"/>
+</div>
+
+## Image-to-Video
+
+You'll use the [THUDM/CogVideoX-5b-I2V](https://huggingface.co/THUDM/CogVideoX-5b-I2V) checkpoint for this guide.
+
+```py
+import torch
+from diffusers import CogVideoXImageToVideoPipeline
+from diffusers.utils import export_to_video, load_image
+
+prompt = "A vast, shimmering ocean flows gracefully under a twilight sky, its waves undulating in a mesmerizing dance of blues and greens. The surface glints with the last rays of the setting sun, casting golden highlights that ripple across the water. Seagulls soar above, their cries blending with the gentle roar of the waves. The horizon stretches infinitely, where the ocean meets the sky in a seamless blend of hues. Close-ups reveal the intricate patterns of the waves, capturing the fluidity and dynamic beauty of the sea in motion."
+image = load_image(image="cogvideox_rocket.png")
+
+pipe = CogVideoXImageToVideoPipeline.from_pretrained(
+    "THUDM/CogVideoX-5b-I2V",
+    torch_dtype=torch.bfloat16
+)
+
+pipe.enable_model_cpu_offload()
+pipe.vae.enable_tiling()
+pipe.vae.enable_slicing()
+
+video = pipe(
+    prompt=prompt,
+    image=image,
+    num_videos_per_prompt=1,
+    num_inference_steps=50,
+    num_frames=49,
+    guidance_scale=6,
+    generator=torch.Generator(device="cuda").manual_seed(42),
+).frames[0]
+
+export_to_video(video, "output.mp4", fps=8)
+```
+
+<div class="flex gap-4">
+  <div>
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cogvideox/cogvideox_rocket.png"/>
+    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
+  </div>
+  <div>
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cogvideox/cogvideox_outrocket.gif"/>
+    <figcaption class="mt-2 text-center text-sm text-gray-500">generated video</figcaption>
+  </div>
+</div>
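
The API reference above recommends `torch.compile` to reduce inference latency. A minimal sketch of compiling the CogVideoX transformer before generation; the `max-autotune` mode and `fullgraph=True` settings are common choices rather than something this commit prescribes:

```py
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

# Compile the denoising transformer; the first call pays a one-time compilation cost,
# later calls reuse the compiled graph.
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)

video = pipe(prompt="A rocket lifting off at dawn", num_inference_steps=50).frames[0]
```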

docs/source/en/using-diffusers/text-img2vid.md

Lines changed: 53 additions & 0 deletions
@@ -23,6 +23,59 @@ This guide will show you how to generate videos, how to configure video model pa
 
 [Stable Video Diffusions (SVD)](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid), [I2VGen-XL](https://huggingface.co/ali-vilab/i2vgen-xl/), [AnimateDiff](https://huggingface.co/guoyww/animatediff), and [ModelScopeT2V](https://huggingface.co/ali-vilab/text-to-video-ms-1.7b) are popular models used for video diffusion. Each model is distinct. For example, AnimateDiff inserts a motion modeling module into a frozen text-to-image model to generate personalized animated images, whereas SVD is entirely pretrained from scratch with a three-stage training process to generate short high-quality videos.
 
+[CogVideoX](https://huggingface.co/collections/THUDM/cogvideo-66c08e62f1685a3ade464cce) is another popular video generation model. The model is a multidimensional transformer that integrates text, time, and space. It employs full attention in the attention module and includes an expert block at the layer level to spatially align text and video.
+
+### CogVideoX
+
+[CogVideoX](../api/pipelines/cogvideox) uses a 3D Variational Autoencoder (VAE) to compress videos along the spatial and temporal dimensions.
+
+Begin by loading the [`CogVideoXPipeline`] and passing an initial text or image to generate a video.
+<Tip>
+
+CogVideoX is available for image-to-video and text-to-video. [THUDM/CogVideoX-5b-I2V](https://huggingface.co/THUDM/CogVideoX-5b-I2V) uses the [`CogVideoXImageToVideoPipeline`] for image-to-video. [THUDM/CogVideoX-5b](https://huggingface.co/THUDM/CogVideoX-5b) and [THUDM/CogVideoX-2b](https://huggingface.co/THUDM/CogVideoX-2b) are available for text-to-video with the [`CogVideoXPipeline`].
+
+</Tip>
+
+```py
+import torch
+from diffusers import CogVideoXImageToVideoPipeline
+from diffusers.utils import export_to_video, load_image
+
+prompt = "A vast, shimmering ocean flows gracefully under a twilight sky, its waves undulating in a mesmerizing dance of blues and greens. The surface glints with the last rays of the setting sun, casting golden highlights that ripple across the water. Seagulls soar above, their cries blending with the gentle roar of the waves. The horizon stretches infinitely, where the ocean meets the sky in a seamless blend of hues. Close-ups reveal the intricate patterns of the waves, capturing the fluidity and dynamic beauty of the sea in motion."
+image = load_image(image="cogvideox_rocket.png")
+
+pipe = CogVideoXImageToVideoPipeline.from_pretrained(
+    "THUDM/CogVideoX-5b-I2V",
+    torch_dtype=torch.bfloat16
+)
+
+pipe.enable_model_cpu_offload()
+pipe.vae.enable_tiling()
+pipe.vae.enable_slicing()
+
+video = pipe(
+    prompt=prompt,
+    image=image,
+    num_videos_per_prompt=1,
+    num_inference_steps=50,
+    num_frames=49,
+    guidance_scale=6,
+    generator=torch.Generator(device="cuda").manual_seed(42),
+).frames[0]
+
+export_to_video(video, "output.mp4", fps=8)
+```
+
+<div class="flex gap-4">
+  <div>
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cogvideox/cogvideox_rocket.png"/>
+    <figcaption class="mt-2 text-center text-sm text-gray-500">initial image</figcaption>
+  </div>
+  <div>
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cogvideox/cogvideox_outrocket.gif"/>
+    <figcaption class="mt-2 text-center text-sm text-gray-500">generated video</figcaption>
+  </div>
+</div>
+
+
 ### Stable Video Diffusion
 
 [SVD](../api/pipelines/svd) is based on the Stable Diffusion 2.1 model and it is trained on images, then low-resolution videos, and finally a smaller dataset of high-resolution videos. This model generates a short 2-4 second video from an initial image. You can learn more details about model, like micro-conditioning, in the [Stable Video Diffusion](../using-diffusers/svd) guide.

examples/dreambooth/README_sd3.md

Lines changed: 1 addition & 1 deletion
@@ -136,7 +136,7 @@ accelerate launch train_dreambooth_lora_sd3.py \
   --resolution=512 \
   --train_batch_size=1 \
   --gradient_accumulation_steps=4 \
-  --learning_rate=1e-5 \
+  --learning_rate=4e-4 \
   --report_to="wandb" \
   --lr_scheduler="constant" \
   --lr_warmup_steps=0 \

examples/dreambooth/test_dreambooth_lora_sd3.py

Lines changed: 33 additions & 0 deletions
@@ -103,6 +103,39 @@ def test_dreambooth_lora_text_encoder_sd3(self):
         )
         self.assertTrue(starts_with_expected_prefix)
 
+    def test_dreambooth_lora_latent_caching(self):
+        with tempfile.TemporaryDirectory() as tmpdir:
+            test_args = f"""
+                {self.script_path}
+                --pretrained_model_name_or_path {self.pretrained_model_name_or_path}
+                --instance_data_dir {self.instance_data_dir}
+                --instance_prompt {self.instance_prompt}
+                --resolution 64
+                --train_batch_size 1
+                --gradient_accumulation_steps 1
+                --max_train_steps 2
+                --cache_latents
+                --learning_rate 5.0e-04
+                --scale_lr
+                --lr_scheduler constant
+                --lr_warmup_steps 0
+                --output_dir {tmpdir}
+                """.split()
+
+            run_command(self._launch_args + test_args)
+            # save_pretrained smoke test
+            self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))
+
+            # make sure the state_dict has the correct naming in the parameters.
+            lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
+            is_lora = all("lora" in k for k in lora_state_dict.keys())
+            self.assertTrue(is_lora)
+
+            # when not training the text encoder, all the parameters in the state dict should start
+            # with `"transformer"` in their names.
+            starts_with_transformer = all(key.startswith("transformer") for key in lora_state_dict.keys())
+            self.assertTrue(starts_with_transformer)
+
     def test_dreambooth_lora_sd3_checkpointing_checkpoints_total_limit(self):
         with tempfile.TemporaryDirectory() as tmpdir:
             test_args = f"""
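
The new test exercises the `--cache_latents` flag. As a rough sketch of the idea under test, not the training script's actual implementation, latent caching encodes each training image through the VAE once and reuses the cached latents at every training step:

```py
import torch

@torch.no_grad()
def cache_latents(vae, train_images):
    """Hypothetical helper: `vae` is an AutoencoderKL, `train_images` a list of preprocessed image tensors."""
    cached = []
    for pixel_values in train_images:
        # Encode once up front; training steps reuse these latents instead of re-running the VAE.
        latents = vae.encode(pixel_values.unsqueeze(0)).latent_dist.sample()
        cached.append(latents * vae.config.scaling_factor)
    return cached
```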
