Commit 85fc267

Merge branch 'main' into integrations/hunyuan-video-i2v-new

2 parents: af24bea + fc28791

80 files changed: +5402 −545 lines

.github/workflows/benchmark.yml

Lines changed: 1 addition & 0 deletions
@@ -38,6 +38,7 @@ jobs:
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
           python -m uv pip install pandas peft
+          python -m uv pip uninstall transformers && python -m uv pip install transformers==4.48.0
       - name: Environment
         run: |
           python utils/print_env.py

.github/workflows/nightly_tests.yml

Lines changed: 7 additions & 0 deletions
@@ -414,12 +414,16 @@ jobs:
         config:
           - backend: "bitsandbytes"
             test_location: "bnb"
+            additional_deps: ["peft"]
           - backend: "gguf"
             test_location: "gguf"
+            additional_deps: []
           - backend: "torchao"
             test_location: "torchao"
+            additional_deps: []
           - backend: "optimum_quanto"
             test_location: "quanto"
+            additional_deps: []
     runs-on:
       group: aws-g6e-xlarge-plus
     container:
@@ -437,6 +441,9 @@ jobs:
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
           python -m uv pip install -U ${{ matrix.config.backend }}
+          if [ "${{ join(matrix.config.additional_deps, ' ') }}" != "" ]; then
+            python -m uv pip install ${{ join(matrix.config.additional_deps, ' ') }}
+          fi
           python -m uv pip install pytest-reportlog
       - name: Environment
         run: |

.github/workflows/pr_tests_gpu.yml

Lines changed: 47 additions & 1 deletion
@@ -28,7 +28,51 @@ env:
   PIPELINE_USAGE_CUTOFF: 1000000000 # set high cutoff so that only always-test pipelines run

 jobs:
+  check_code_quality:
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3.8"
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install .[quality]
+      - name: Check quality
+        run: make quality
+      - name: Check if failure
+        if: ${{ failure() }}
+        run: |
+          echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make style && make quality'" >> $GITHUB_STEP_SUMMARY
+
+  check_repository_consistency:
+    needs: check_code_quality
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3.8"
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install .[quality]
+      - name: Check repo consistency
+        run: |
+          python utils/check_copies.py
+          python utils/check_dummies.py
+          python utils/check_support_list.py
+          make deps_table_check_updated
+      - name: Check if failure
+        if: ${{ failure() }}
+        run: |
+          echo "Repo consistency check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make fix-copies'" >> $GITHUB_STEP_SUMMARY
+
   setup_torch_cuda_pipeline_matrix:
+    needs: [check_code_quality, check_repository_consistency]
     name: Setup Torch Pipelines CUDA Slow Tests Matrix
     runs-on:
       group: aws-general-8-plus
@@ -133,6 +177,7 @@ jobs:

   torch_cuda_tests:
     name: Torch CUDA Tests
+    needs: [check_code_quality, check_repository_consistency]
     runs-on:
       group: aws-g4dn-2xlarge
     container:
@@ -201,7 +246,7 @@ jobs:

   run_examples_tests:
     name: Examples PyTorch CUDA tests on Ubuntu
-    pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
+    needs: [check_code_quality, check_repository_consistency]
     runs-on:
       group: aws-g4dn-2xlarge

@@ -220,6 +265,7 @@ jobs:
       - name: Install dependencies
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
+          pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
           python -m uv pip install -e [quality,test,training]

       - name: Environment

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -81,6 +81,8 @@
       title: Overview
     - local: hybrid_inference/vae_decode
       title: VAE Decode
+    - local: hybrid_inference/vae_encode
+      title: VAE Encode
     - local: hybrid_inference/api_reference
       title: API Reference
     title: Hybrid Inference

docs/source/en/api/pipelines/ltx_video.md

Lines changed: 6 additions & 0 deletions
@@ -196,6 +196,12 @@ export_to_video(video, "ship.mp4", fps=24)
 - all
 - __call__

+## LTXConditionPipeline
+
+[[autodoc]] LTXConditionPipeline
+- all
+- __call__
+
 ## LTXPipelineOutput

 [[autodoc]] pipelines.ltx.pipeline_output.LTXPipelineOutput
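For readers landing here from the diff, a minimal loading sketch for the newly documented `LTXConditionPipeline`. The checkpoint id and the assumption that the pipeline runs as plain text-to-video when no conditioning inputs are supplied are guesses, not confirmed by this commit:

```python
# Hedged sketch: the checkpoint id below is an assumption, and so is the
# text-to-video fallback when no conditioning inputs are passed.
import torch

from diffusers import LTXConditionPipeline
from diffusers.utils import export_to_video

pipe = LTXConditionPipeline.from_pretrained(
    "Lightricks/LTX-Video-0.9.5",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

video = pipe(
    prompt="A ship sailing through stormy seas at dusk",
    width=768,
    height=512,
    num_frames=97,
    num_inference_steps=40,
).frames[0]
export_to_video(video, "ship.mp4", fps=24)
```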

docs/source/en/api/pipelines/lumina.md

Lines changed: 7 additions & 7 deletions
@@ -58,10 +58,10 @@ Use [`torch.compile`](https://huggingface.co/docs/diffusers/main/en/tutorials/fa
 First, load the pipeline:

 ```python
-from diffusers import LuminaText2ImgPipeline
+from diffusers import LuminaPipeline
 import torch

-pipeline = LuminaText2ImgPipeline.from_pretrained(
+pipeline = LuminaPipeline.from_pretrained(
     "Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16
 ).to("cuda")
 ```
@@ -86,11 +86,11 @@ image = pipeline(prompt="Upper body of a young woman in a Victorian-era outfit w

 Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on video quality depending on the video model.

-Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [`LuminaText2ImgPipeline`] for inference with bitsandbytes.
+Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [`LuminaPipeline`] for inference with bitsandbytes.

 ```py
 import torch
-from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, Transformer2DModel, LuminaText2ImgPipeline
+from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, Transformer2DModel, LuminaPipeline
 from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

 quant_config = BitsAndBytesConfig(load_in_8bit=True)
@@ -109,7 +109,7 @@ transformer_8bit = Transformer2DModel.from_pretrained(
     torch_dtype=torch.float16,
 )

-pipeline = LuminaText2ImgPipeline.from_pretrained(
+pipeline = LuminaPipeline.from_pretrained(
     "Alpha-VLLM/Lumina-Next-SFT-diffusers",
     text_encoder=text_encoder_8bit,
     transformer=transformer_8bit,
@@ -122,9 +122,9 @@ image = pipeline(prompt).images[0]
 image.save("lumina.png")
 ```

-## LuminaText2ImgPipeline
+## LuminaPipeline

-[[autodoc]] LuminaText2ImgPipeline
+[[autodoc]] LuminaPipeline
 - all
 - __call__
docs/source/en/api/pipelines/lumina2.md

Lines changed: 6 additions & 6 deletions
@@ -36,14 +36,14 @@ Single file loading for Lumina Image 2.0 is available for the `Lumina2Transforme

 ```python
 import torch
-from diffusers import Lumina2Transformer2DModel, Lumina2Text2ImgPipeline
+from diffusers import Lumina2Transformer2DModel, Lumina2Pipeline

 ckpt_path = "https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0/blob/main/consolidated.00-of-01.pth"
 transformer = Lumina2Transformer2DModel.from_single_file(
     ckpt_path, torch_dtype=torch.bfloat16
 )

-pipe = Lumina2Text2ImgPipeline.from_pretrained(
+pipe = Lumina2Pipeline.from_pretrained(
     "Alpha-VLLM/Lumina-Image-2.0", transformer=transformer, torch_dtype=torch.bfloat16
 )
 pipe.enable_model_cpu_offload()
@@ -60,7 +60,7 @@ image.save("lumina-single-file.png")
 GGUF Quantized checkpoints for the `Lumina2Transformer2DModel` can be loaded via `from_single_file` with the `GGUFQuantizationConfig`

 ```python
-from diffusers import Lumina2Transformer2DModel, Lumina2Text2ImgPipeline, GGUFQuantizationConfig
+from diffusers import Lumina2Transformer2DModel, Lumina2Pipeline, GGUFQuantizationConfig

 ckpt_path = "https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2-q4_0.gguf"
 transformer = Lumina2Transformer2DModel.from_single_file(
@@ -69,7 +69,7 @@ transformer = Lumina2Transformer2DModel.from_single_file(
     torch_dtype=torch.bfloat16,
 )

-pipe = Lumina2Text2ImgPipeline.from_pretrained(
+pipe = Lumina2Pipeline.from_pretrained(
     "Alpha-VLLM/Lumina-Image-2.0", transformer=transformer, torch_dtype=torch.bfloat16
 )
 pipe.enable_model_cpu_offload()
@@ -80,8 +80,8 @@ image = pipe(
 image.save("lumina-gguf.png")
 ```

-## Lumina2Text2ImgPipeline
+## Lumina2Pipeline

-[[autodoc]] Lumina2Text2ImgPipeline
+[[autodoc]] Lumina2Pipeline
 - all
 - __call__

docs/source/en/hybrid_inference/api_reference.md

Lines changed: 4 additions & 0 deletions
@@ -3,3 +3,7 @@
 ## Remote Decode

 [[autodoc]] utils.remote_utils.remote_decode
+
+## Remote Encode
+
+[[autodoc]] utils.remote_utils.remote_encode
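To make the new entry concrete, a minimal usage sketch for `remote_encode`. It assumes the parameters mirror `remote_decode` (an `endpoint` URL plus the VAE's `scaling_factor`); the endpoint value and the scaling factor below are placeholders, not taken from this commit:

```python
# Hedged sketch: parameter names are assumed to mirror remote_decode,
# and ENDPOINT is a placeholder rather than a real Hybrid Inference URL.
from PIL import Image

from diffusers.utils.remote_utils import remote_encode

ENDPOINT = "https://<your-hybrid-inference-endpoint>/"  # placeholder URL

# Any PIL image works as input; a flat test image keeps the sketch self-contained.
image = Image.new("RGB", (512, 512), color="gray")

# Encode the image into a latent tensor on the remote VAE endpoint.
latent = remote_encode(
    endpoint=ENDPOINT,
    image=image,
    scaling_factor=0.18215,  # assumed SD-style VAE scaling factor
)
print(latent.shape)
```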

docs/source/en/hybrid_inference/overview.md

Lines changed: 8 additions & 2 deletions
@@ -36,7 +36,7 @@ Hybrid Inference offers a fast and simple way to offload local generation requir
 ## Available Models

 * **VAE Decode 🖼️:** Quickly decode latent representations into high-quality images without compromising performance or workflow speed.
-* **VAE Encode 🔢 (coming soon):** Efficiently encode images into latent representations for generation and training.
+* **VAE Encode 🔢:** Efficiently encode images into latent representations for generation and training.
 * **Text Encoders 📃 (coming soon):** Compute text embeddings for your prompts quickly and accurately, ensuring a smooth and high-quality workflow.

 ---
@@ -46,9 +46,15 @@ Hybrid Inference offers a fast and simple way to offload local generation requir
 * **[SD.Next](https://github.com/vladmandic/sdnext):** All-in-one UI with direct supports Hybrid Inference.
 * **[ComfyUI-HFRemoteVae](https://github.com/kijai/ComfyUI-HFRemoteVae):** ComfyUI node for Hybrid Inference.

+## Changelog
+
+- March 10 2025: Added VAE encode
+- March 2 2025: Initial release with VAE decoding
+
 ## Contents

-The documentation is organized into two sections:
+The documentation is organized into three sections:

 * **VAE Decode** Learn the basics of how to use VAE Decode with Hybrid Inference.
+* **VAE Encode** Learn the basics of how to use VAE Encode with Hybrid Inference.
 * **API Reference** Dive into task-specific settings and parameters.
