
Commit f91cbd1

Merge branch 'main' into bug-fix
2 parents 8c1751e + 24c062a commit f91cbd1

File tree

177 files changed: +14396 -1497 lines changed


.github/workflows/pr_style_bot.yml

Lines changed: 46 additions & 5 deletions
@@ -9,12 +9,33 @@ permissions:
   pull-requests: write
 
 jobs:
-  run-style-bot:
+  check-permissions:
     if: >
       contains(github.event.comment.body, '@bot /style') &&
       github.event.issue.pull_request != null
     runs-on: ubuntu-latest
+    outputs:
+      is_authorized: ${{ steps.check_user_permission.outputs.has_permission }}
+    steps:
+      - name: Check user permission
+        id: check_user_permission
+        uses: actions/github-script@v6
+        with:
+          script: |
+            const comment_user = context.payload.comment.user.login;
+            const { data: permission } = await github.rest.repos.getCollaboratorPermissionLevel({
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              username: comment_user
+            });
+            const authorized = permission.permission === 'admin';
+            console.log(`User ${comment_user} has permission level: ${permission.permission}, authorized: ${authorized} (only admins allowed)`);
+            core.setOutput('has_permission', authorized);
+
+  run-style-bot:
+    needs: check-permissions
+    if: needs.check-permissions.outputs.is_authorized == 'true'
+    runs-on: ubuntu-latest
     steps:
       - name: Extract PR details
         id: pr_info
@@ -64,18 +85,38 @@ jobs:
         run: |
           pip install .[quality]
 
-      - name: Download Makefile from main branch
+      - name: Download necessary files from main branch of Diffusers
         run: |
           curl -o main_Makefile https://raw.githubusercontent.com/huggingface/diffusers/main/Makefile
+          curl -o main_setup.py https://raw.githubusercontent.com/huggingface/diffusers/refs/heads/main/setup.py
+          curl -o main_check_doc_toc.py https://raw.githubusercontent.com/huggingface/diffusers/refs/heads/main/utils/check_doc_toc.py
 
-      - name: Compare Makefiles
+      - name: Compare the files and raise error if needed
        run: |
+          diff_failed=0
+
           if ! diff -q main_Makefile Makefile; then
             echo "Error: The Makefile has changed. Please ensure it matches the main branch."
+            diff_failed=1
+          fi
+
+          if ! diff -q main_setup.py setup.py; then
+            echo "Error: The setup.py has changed. Please ensure it matches the main branch."
+            diff_failed=1
+          fi
+
+          if ! diff -q main_check_doc_toc.py utils/check_doc_toc.py; then
+            echo "Error: The utils/check_doc_toc.py has changed. Please ensure it matches the main branch."
+            diff_failed=1
+          fi
+
+          if [ $diff_failed -eq 1 ]; then
+            echo "❌ Error happened as we detected changes in the files that should not be changed ❌"
             exit 1
           fi
-          echo "No changes in Makefile. Proceeding..."
-          rm -rf main_Makefile
+
+          echo "No changes in the files. Proceeding..."
+          rm -rf main_Makefile main_setup.py main_check_doc_toc.py
 
       - name: Run make style and make quality
         run: |
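
The added gate means `@bot /style` only triggers a style run for repository admins: the github-script step queries the commenter's collaborator permission level, and the `run-style-bot` job is skipped unless it comes back as `admin`. For reference, a standalone sketch of the same check in Python; `requests`, the `GITHUB_TOKEN` variable, and the owner/repo/username values below are illustrative assumptions, not part of the workflow.

```python
import os

import requests  # third-party dependency, assumed available


def is_admin(owner: str, repo: str, username: str) -> bool:
    """Return True if `username` has admin permission on owner/repo.

    Calls the same endpoint as github.rest.repos.getCollaboratorPermissionLevel in the workflow.
    """
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/collaborators/{username}/permission",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},  # token name is illustrative
        timeout=10,
    )
    resp.raise_for_status()
    permission = resp.json()["permission"]  # "admin", "write", "read", or "none"
    print(f"{username} has permission level: {permission}")
    return permission == "admin"


if __name__ == "__main__":
    # Hypothetical values for illustration only.
    print(is_admin("huggingface", "diffusers", "some-user"))
```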

.github/workflows/pr_tests_gpu.yml

Lines changed: 14 additions & 5 deletions
@@ -11,6 +11,8 @@ on:
       - "src/diffusers/loaders/lora_base.py"
       - "src/diffusers/loaders/lora_pipeline.py"
       - "src/diffusers/loaders/peft.py"
+      - "tests/pipelines/test_pipelines_common.py"
+      - "tests/models/test_modeling_common.py"
   workflow_dispatch:
 
 concurrency:
@@ -104,11 +106,18 @@ jobs:
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
           CUBLAS_WORKSPACE_CONFIG: :16:8
         run: |
-          pattern=$(cat ${{ steps.extract_tests.outputs.pattern_file }})
-          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-            -s -v -k "not Flax and not Onnx and $pattern" \
-            --make-reports=tests_pipeline_${{ matrix.module }}_cuda \
-            tests/pipelines/${{ matrix.module }}
+          if [ "${{ matrix.module }}" = "ip_adapters" ]; then
+            python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
+              -s -v -k "not Flax and not Onnx" \
+              --make-reports=tests_pipeline_${{ matrix.module }}_cuda \
+              tests/pipelines/${{ matrix.module }}
+          else
+            pattern=$(cat ${{ steps.extract_tests.outputs.pattern_file }})
+            python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
+              -s -v -k "not Flax and not Onnx and $pattern" \
+              --make-reports=tests_pipeline_${{ matrix.module }}_cuda \
+              tests/pipelines/${{ matrix.module }}
+          fi
 
       - name: Failure short reports
         if: ${{ failure() }}
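
The net effect is a two-way branch in the test step: the `ip_adapters` module now runs with only the plain "not Flax and not Onnx" filter, while every other module keeps the extracted `$pattern` filter. A small Python sketch of that selection logic; the module name and pattern file path are placeholders:

```python
import shlex


def build_pytest_cmd(module: str, pattern_file: str) -> list[str]:
    """Reproduce the branch added to the workflow's test step."""
    if module == "ip_adapters":
        # ip_adapters skips the per-module pattern and only excludes Flax/Onnx tests.
        keyword = "not Flax and not Onnx"
    else:
        with open(pattern_file) as f:
            pattern = f.read().strip()
        keyword = f"not Flax and not Onnx and {pattern}"
    return [
        "python", "-m", "pytest", "-n", "1", "--max-worker-restart=0", "--dist=loadfile",
        "-s", "-v", "-k", keyword,
        f"--make-reports=tests_pipeline_{module}_cuda",
        f"tests/pipelines/{module}",
    ]


# Print the command without running it.
print(shlex.join(build_pytest_cmd("ip_adapters", "pattern.txt")))
```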

docs/source/en/_toctree.yml

Lines changed: 20 additions & 0 deletions
@@ -76,6 +76,14 @@
   - local: advanced_inference/outpaint
     title: Outpainting
   title: Advanced inference
+- sections:
+  - local: hybrid_inference/overview
+    title: Overview
+  - local: hybrid_inference/vae_decode
+    title: VAE Decode
+  - local: hybrid_inference/api_reference
+    title: API Reference
+  title: Hybrid Inference
 - sections:
   - local: using-diffusers/cogvideox
     title: CogVideoX
@@ -282,6 +290,8 @@
     title: CogView4Transformer2DModel
   - local: api/models/dit_transformer2d
     title: DiTTransformer2DModel
+  - local: api/models/easyanimate_transformer3d
+    title: EasyAnimateTransformer3DModel
   - local: api/models/flux_transformer
     title: FluxTransformer2DModel
   - local: api/models/hunyuan_transformer2d
@@ -314,6 +324,8 @@
     title: Transformer2DModel
   - local: api/models/transformer_temporal
     title: TransformerTemporalModel
+  - local: api/models/wan_transformer_3d
+    title: WanTransformer3DModel
   title: Transformers
 - sections:
   - local: api/models/stable_cascade_unet
@@ -342,8 +354,12 @@
     title: AutoencoderKLHunyuanVideo
   - local: api/models/autoencoderkl_ltx_video
     title: AutoencoderKLLTXVideo
+  - local: api/models/autoencoderkl_magvit
+    title: AutoencoderKLMagvit
   - local: api/models/autoencoderkl_mochi
     title: AutoencoderKLMochi
+  - local: api/models/autoencoder_kl_wan
+    title: AutoencoderKLWan
   - local: api/models/asymmetricautoencoderkl
     title: AsymmetricAutoencoderKL
   - local: api/models/autoencoder_dc
@@ -418,6 +434,8 @@
     title: DiffEdit
   - local: api/pipelines/dit
     title: DiT
+  - local: api/pipelines/easyanimate
+    title: EasyAnimate
   - local: api/pipelines/flux
     title: Flux
   - local: api/pipelines/control_flux_inpaint
@@ -534,6 +552,8 @@
     title: UniDiffuser
   - local: api/pipelines/value_guided_sampling
     title: Value-guided sampling
+  - local: api/pipelines/wan
+    title: Wan
   - local: api/pipelines/wuerstchen
     title: Wuerstchen
   title: Pipelines
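
These toctree entries only resolve if the referenced pages exist under docs/source/en. A hypothetical consistency check is sketched below; it is not the repository's utils/check_doc_toc.py, just an illustration of the idea (assumes PyYAML and `.md` sources):

```python
from pathlib import Path

import yaml  # PyYAML, assumed available

DOCS_ROOT = Path("docs/source/en")


def collect_locals(node):
    """Recursively yield every `local` page reference in the toctree."""
    if isinstance(node, dict):
        if "local" in node:
            yield node["local"]
        for value in node.values():
            yield from collect_locals(value)
    elif isinstance(node, list):
        for item in node:
            yield from collect_locals(item)


toc = yaml.safe_load((DOCS_ROOT / "_toctree.yml").read_text())
missing = [page for page in collect_locals(toc) if not (DOCS_ROOT / f"{page}.md").exists()]
print("missing pages:", missing or "none")
```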

docs/source/en/api/models/autoencoder_kl_wan.md

Lines changed: 32 additions & 0 deletions (new file)

<!-- Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# AutoencoderKLWan

The 3D variational autoencoder (VAE) model with KL loss used in [Wan 2.1](https://github.com/Wan-Video/Wan2.1) by the Alibaba Wan Team.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLWan

vae = AutoencoderKLWan.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32)
```

## AutoencoderKLWan

[[autodoc]] AutoencoderKLWan
  - decode
  - all

## DecoderOutput

[[autodoc]] models.autoencoders.vae.DecoderOutput
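
In practice the VAE is usually kept in float32 for numerical stability and handed to the video pipeline rather than called directly. A sketch of that pattern, assuming the [`WanPipeline`] added in the same integration and the checkpoint above:

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Keep the VAE in float32 while the rest of the pipeline runs in bfloat16.
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", vae=vae, torch_dtype=torch.bfloat16
).to("cuda")

video = pipe(prompt="A cat walks on the grass, realistic style.", num_frames=81).frames[0]
export_to_video(video, "wan_t2v.mp4", fps=16)
```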

docs/source/en/api/models/autoencoderkl_magvit.md

Lines changed: 37 additions & 0 deletions (new file)

<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# AutoencoderKLMagvit

The 3D variational autoencoder (VAE) model with KL loss used in [EasyAnimate](https://github.com/aigc-apps/EasyAnimate) by Alibaba PAI.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLMagvit

vae = AutoencoderKLMagvit.from_pretrained("alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="vae", torch_dtype=torch.float16).to("cuda")
```

## AutoencoderKLMagvit

[[autodoc]] AutoencoderKLMagvit
  - decode
  - encode
  - all

## AutoencoderKLOutput

[[autodoc]] models.autoencoders.autoencoder_kl.AutoencoderKLOutput

## DecoderOutput

[[autodoc]] models.autoencoders.vae.DecoderOutput
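
As with the other video autoencoders, the typical workflow is to pass the loaded VAE into its pipeline. A sketch under that assumption, reusing the EasyAnimate checkpoint above; CPU offloading is an optional memory-saving choice, not a requirement:

```python
import torch
from diffusers import AutoencoderKLMagvit, EasyAnimatePipeline

vae = AutoencoderKLMagvit.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="vae", torch_dtype=torch.float16
)
pipe = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh", vae=vae, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # optional: keeps peak GPU memory down for the 12B checkpoint

video = pipe(prompt="A cat walks on the grass, realistic style.", num_frames=49).frames[0]
```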

docs/source/en/api/models/easyanimate_transformer3d.md

Lines changed: 30 additions & 0 deletions (new file)

<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# EasyAnimateTransformer3DModel

A Diffusion Transformer model for 3D data from [EasyAnimate](https://github.com/aigc-apps/EasyAnimate) by Alibaba PAI.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import EasyAnimateTransformer3DModel

transformer = EasyAnimateTransformer3DModel.from_pretrained("alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="transformer", torch_dtype=torch.float16).to("cuda")
```

## EasyAnimateTransformer3DModel

[[autodoc]] EasyAnimateTransformer3DModel

## Transformer2DModelOutput

[[autodoc]] models.modeling_outputs.Transformer2DModelOutput
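
A quick sanity check after loading is to inspect the registered config and the parameter count; a minimal sketch:

```python
import torch
from diffusers import EasyAnimateTransformer3DModel

transformer = EasyAnimateTransformer3DModel.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh", subfolder="transformer", torch_dtype=torch.float16
)

num_params = sum(p.numel() for p in transformer.parameters())
print(f"{num_params / 1e9:.2f}B parameters")
print(dict(transformer.config))  # registered config values (layer counts, hidden sizes, ...)
```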

docs/source/en/api/models/wan_transformer_3d.md

Lines changed: 30 additions & 0 deletions (new file)

<!-- Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# WanTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in [Wan 2.1](https://github.com/Wan-Video/Wan2.1) by the Alibaba Wan Team.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import WanTransformer3DModel

transformer = WanTransformer3DModel.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
```

## WanTransformer3DModel

[[autodoc]] WanTransformer3DModel

## Transformer2DModelOutput

[[autodoc]] models.modeling_outputs.Transformer2DModelOutput
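
Loading the transformer on its own is mainly useful when you want to swap a modified copy (for example, a quantized or fine-tuned one) into a pipeline. A hedged sketch, assuming [`WanPipeline`] from the same integration:

```python
import torch
from diffusers import WanPipeline, WanTransformer3DModel

transformer = WanTransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")
```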

docs/source/en/api/pipelines/easyanimate.md

Lines changed: 88 additions & 0 deletions (new file)

<!--Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-->

# EasyAnimate

[EasyAnimate](https://github.com/aigc-apps/EasyAnimate) by Alibaba PAI.

The description from its GitHub page:

*EasyAnimate is a pipeline based on the transformer architecture, designed for generating AI images and videos, and for training baseline models and Lora models for Diffusion Transformer. We support direct prediction from pre-trained EasyAnimate models, allowing for the generation of videos with various resolutions, approximately 6 seconds in length, at 8fps (EasyAnimateV5.1, 1 to 49 frames). Additionally, users can train their own baseline and Lora models for specific style transformations.*

This pipeline was contributed by [bubbliiiing](https://github.com/bubbliiiing). The original codebase can be found [here](https://huggingface.co/alibaba-pai). The original weights can be found under [hf.co/alibaba-pai](https://huggingface.co/alibaba-pai).

There are two official EasyAnimate checkpoints for text-to-video and video-to-video.

| checkpoints | recommended inference dtype |
|:---:|:---:|
| [`alibaba-pai/EasyAnimateV5.1-12b-zh`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh) | torch.float16 |
| [`alibaba-pai/EasyAnimateV5.1-12b-zh-InP`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-InP) | torch.float16 |

There is one official EasyAnimate checkpoint available for image-to-video and video-to-video.

| checkpoints | recommended inference dtype |
|:---:|:---:|
| [`alibaba-pai/EasyAnimateV5.1-12b-zh-InP`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-InP) | torch.float16 |

There are two official EasyAnimate checkpoints available for control-to-video.

| checkpoints | recommended inference dtype |
|:---:|:---:|
| [`alibaba-pai/EasyAnimateV5.1-12b-zh-Control`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-Control) | torch.float16 |
| [`alibaba-pai/EasyAnimateV5.1-12b-zh-Control-Camera`](https://huggingface.co/alibaba-pai/EasyAnimateV5.1-12b-zh-Control-Camera) | torch.float16 |

For the EasyAnimateV5.1 series:
- Text-to-video (T2V) and image-to-video (I2V) work at multiple resolutions; the width and height can vary from 256 to 1024.
- Both T2V and I2V models support generation with 1 to 49 frames and work best within this range. Exporting videos at 8 FPS is recommended.

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on video quality depending on the video model.

Refer to the [Quantization](../../quantization/overview) overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [`EasyAnimatePipeline`] for inference with bitsandbytes.

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, EasyAnimateTransformer3DModel, EasyAnimatePipeline
from diffusers.utils import export_to_video

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = EasyAnimateTransformer3DModel.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh",
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

prompt = "A cat walks on the grass, realistic style."
negative_prompt = "bad detailed"
video = pipeline(prompt=prompt, negative_prompt=negative_prompt, num_frames=49, num_inference_steps=30).frames[0]
export_to_video(video, "cat.mp4", fps=8)
```

## EasyAnimatePipeline

[[autodoc]] EasyAnimatePipeline
  - all
  - __call__

## EasyAnimatePipelineOutput

[[autodoc]] pipelines.easyanimate.pipeline_output.EasyAnimatePipelineOutput
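
For comparison with the quantized example above, a plain float16 text-to-video run would look like the sketch below; `enable_model_cpu_offload` is an assumption made here to keep the 12B checkpoint within a single GPU's memory:

```python
import torch
from diffusers import EasyAnimatePipeline
from diffusers.utils import export_to_video

pipe = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

video = pipe(
    prompt="A cat walks on the grass, realistic style.",
    negative_prompt="bad detailed",
    num_frames=49,
    num_inference_steps=30,
).frames[0]
export_to_video(video, "easyanimate_cat.mp4", fps=8)
```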

0 commit comments
