Commit 833fe67

Merge branch 'main' into sana-lora
2 parents: 906f2f0 + 8957324

File tree: 2 files changed (+20/−7 lines)


`.github/workflows/push_tests.yml` (2 additions, 1 deletion)

```diff
@@ -165,7 +165,8 @@ jobs:
       group: gcp-ct5lp-hightpu-8t
     container:
       image: diffusers/diffusers-flax-tpu
-      options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache defaults:
+      options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache
+    defaults:
       run:
         shell: bash
     steps:
```
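The workflow fix above matters because the previous revision had `defaults:` fused onto the end of the `options:` string, so YAML parsed it as part of the container's Docker flags instead of as a job-level key. A sketch of the corrected job block, with assumed surrounding structure (the job name and full nesting are not visible in the hunk):

```yaml
jobs:
  flax_tpu_tests:            # hypothetical job name; not shown in the diff hunk
    runs-on:
      group: gcp-ct5lp-hightpu-8t
    container:
      image: diffusers/diffusers-flax-tpu
      # 'options' is one string of Docker flags; 'defaults:' no longer trails it
      options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache
    defaults:                # job-level key, a sibling of 'container', not a Docker flag
      run:
        shell: bash
```

With the old fused line, `defaults:` was passed to Docker as a literal argument and the job-level shell default never took effect.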

`docs/source/en/api/pipelines/ltx_video.md` (18 additions, 6 deletions)

````diff
@@ -31,26 +31,38 @@ import torch
 from diffusers import AutoencoderKLLTXVideo, LTXImageToVideoPipeline, LTXVideoTransformer3DModel
 
 single_file_url = "https://huggingface.co/Lightricks/LTX-Video/ltx-video-2b-v0.9.safetensors"
-transformer = LTXVideoTransformer3DModel.from_single_file(single_file_url, torch_dtype=torch.bfloat16)
+transformer = LTXVideoTransformer3DModel.from_single_file(
+    single_file_url, torch_dtype=torch.bfloat16
+)
 vae = AutoencoderKLLTXVideo.from_single_file(single_file_url, torch_dtype=torch.bfloat16)
-pipe = LTXImageToVideoPipeline.from_pretrained("Lightricks/LTX-Video", transformer=transformer, vae=vae, torch_dtype=torch.bfloat16)
+pipe = LTXImageToVideoPipeline.from_pretrained(
+    "Lightricks/LTX-Video", transformer=transformer, vae=vae, torch_dtype=torch.bfloat16
+)
 
 # ... inference code ...
 ```
 
-Alternatively, the pipeline can be used to load the weights with [~FromSingleFileMixin.from_single_file`].
+Alternatively, the pipeline can be used to load the weights with [`~FromSingleFileMixin.from_single_file`].
 
 ```python
 import torch
 from diffusers import LTXImageToVideoPipeline
 from transformers import T5EncoderModel, T5Tokenizer
 
 single_file_url = "https://huggingface.co/Lightricks/LTX-Video/ltx-video-2b-v0.9.safetensors"
-text_encoder = T5EncoderModel.from_pretrained("Lightricks/LTX-Video", subfolder="text_encoder", torch_dtype=torch.bfloat16)
-tokenizer = T5Tokenizer.from_pretrained("Lightricks/LTX-Video", subfolder="tokenizer", torch_dtype=torch.bfloat16)
-pipe = LTXImageToVideoPipeline.from_single_file(single_file_url, text_encoder=text_encoder, tokenizer=tokenizer, torch_dtype=torch.bfloat16)
+text_encoder = T5EncoderModel.from_pretrained(
+    "Lightricks/LTX-Video", subfolder="text_encoder", torch_dtype=torch.bfloat16
+)
+tokenizer = T5Tokenizer.from_pretrained(
+    "Lightricks/LTX-Video", subfolder="tokenizer", torch_dtype=torch.bfloat16
+)
+pipe = LTXImageToVideoPipeline.from_single_file(
+    single_file_url, text_encoder=text_encoder, tokenizer=tokenizer, torch_dtype=torch.bfloat16
+)
 ```
 
+Refer to [this section](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogvideox#memory-optimization) to learn more about optimizing memory consumption.
+
 ## LTXPipeline
 
 [[autodoc]] LTXPipeline
````

0 commit comments
