Commit 0d1c930

Merge branch 'main' into refactor-instructpix2pix_lora-toSupport-peft
2 parents: 296ecdd + 233dffd

File tree: 164 files changed, +19304 additions, −582 deletions


.github/workflows/nightly_tests.yml
Lines changed: 2 additions & 0 deletions

```diff
@@ -357,6 +357,8 @@ jobs:
         config:
           - backend: "bitsandbytes"
             test_location: "bnb"
+          - backend: "gguf"
+            test_location: "gguf"
     runs-on:
       group: aws-g6e-xlarge-plus
     container:
```
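The new matrix entry wires the GGUF quantization backend into the nightly quantization suite. As a rough sketch of what that backend covers — the checkpoint URL below follows the community GGUF exports referenced in diffusers' GGUF docs, and the exact ids should be treated as assumptions:

```python
import torch

from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# Assumed community GGUF checkpoint; any single-file GGUF transformer loads the same way.
ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"

# Weights stay quantized in GGUF form; activations are computed in bfloat16.
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```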

.github/workflows/push_tests.yml
Lines changed: 2 additions & 1 deletion

```diff
@@ -165,7 +165,8 @@ jobs:
       group: gcp-ct5lp-hightpu-8t
     container:
       image: diffusers/diffusers-flax-tpu
-      options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache defaults:
+      options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache
+    defaults:
       run:
         shell: bash
     steps:
```

The `defaults:` key had been fused onto the end of the container `options:` string, turning it into a bogus Docker option; splitting it back onto its own line restores the job-level YAML structure.

.github/workflows/push_tests_mps.yml
Lines changed: 1 addition & 1 deletion

```diff
@@ -46,7 +46,7 @@ jobs:
         shell: arch -arch arm64 bash {0}
       run: |
         ${CONDA_RUN} python -m pip install --upgrade pip uv
-        ${CONDA_RUN} python -m uv pip install -e [quality,test]
+        ${CONDA_RUN} python -m uv pip install -e ".[quality,test]"
         ${CONDA_RUN} python -m uv pip install torch torchvision torchaudio
         ${CONDA_RUN} python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
        ${CONDA_RUN} python -m uv pip install transformers --upgrade
```

The old command had no install target: bare `[quality,test]` is not a path (and can be glob-expanded by the shell). Quoting `".[quality,test]"` installs the current repo with its `quality` and `test` extras.

docs/source/en/_toctree.yml
Lines changed: 18 additions & 0 deletions

```diff
@@ -157,6 +157,10 @@
     title: Getting Started
   - local: quantization/bitsandbytes
     title: bitsandbytes
+  - local: quantization/gguf
+    title: gguf
+  - local: quantization/torchao
+    title: torchao
   title: Quantization Methods
 - sections:
   - local: optimization/fp16
@@ -234,6 +238,8 @@
     title: Textual Inversion
   - local: api/loaders/unet
     title: UNet
+  - local: api/loaders/transformer_sd3
+    title: SD3Transformer2D
   - local: api/loaders/peft
     title: PEFT
   title: Loaders
@@ -270,6 +276,8 @@
     title: FluxTransformer2DModel
   - local: api/models/hunyuan_transformer2d
     title: HunyuanDiT2DModel
+  - local: api/models/hunyuan_video_transformer_3d
+    title: HunyuanVideoTransformer3DModel
   - local: api/models/latte_transformer3d
     title: LatteTransformer3DModel
   - local: api/models/lumina_nextdit2d
@@ -284,6 +292,8 @@
     title: PriorTransformer
   - local: api/models/sd3_transformer2d
     title: SD3Transformer2DModel
+  - local: api/models/sana_transformer2d
+    title: SanaTransformer2DModel
   - local: api/models/stable_audio_transformer
     title: StableAudioDiTModel
   - local: api/models/transformer2d
@@ -314,6 +324,8 @@
     title: AutoencoderKLAllegro
   - local: api/models/autoencoderkl_cogvideox
     title: AutoencoderKLCogVideoX
+  - local: api/models/autoencoder_kl_hunyuan_video
+    title: AutoencoderKLHunyuanVideo
   - local: api/models/autoencoderkl_ltx_video
     title: AutoencoderKLLTXVideo
   - local: api/models/autoencoderkl_mochi
@@ -390,8 +402,12 @@
     title: DiT
   - local: api/pipelines/flux
     title: Flux
+  - local: api/pipelines/control_flux_inpaint
+    title: FluxControlInpaint
   - local: api/pipelines/hunyuandit
     title: Hunyuan-DiT
+  - local: api/pipelines/hunyuan_video
+    title: HunyuanVideo
   - local: api/pipelines/i2vgenxl
     title: I2VGen-XL
   - local: api/pipelines/pix2pix
@@ -434,6 +450,8 @@
     title: PixArt-α
   - local: api/pipelines/pixart_sigma
     title: PixArt-Σ
+  - local: api/pipelines/sana
+    title: Sana
   - local: api/pipelines/self_attention_guidance
     title: Self-Attention Guidance
   - local: api/pipelines/semantic_stable_diffusion
```

docs/source/en/api/attnprocessor.md
Lines changed: 106 additions & 11 deletions

```diff
@@ -15,40 +15,135 @@ specific language governing permissions and limitations under the License.
 An attention processor is a class for applying different types of attention mechanisms.
 
 ## AttnProcessor
+
 [[autodoc]] models.attention_processor.AttnProcessor
 
-## AttnProcessor2_0
 [[autodoc]] models.attention_processor.AttnProcessor2_0
 
-## AttnAddedKVProcessor
 [[autodoc]] models.attention_processor.AttnAddedKVProcessor
 
-## AttnAddedKVProcessor2_0
 [[autodoc]] models.attention_processor.AttnAddedKVProcessor2_0
 
+[[autodoc]] models.attention_processor.AttnProcessorNPU
+
+[[autodoc]] models.attention_processor.FusedAttnProcessor2_0
+
+## Allegro
+
+[[autodoc]] models.attention_processor.AllegroAttnProcessor2_0
+
+## AuraFlow
+
+[[autodoc]] models.attention_processor.AuraFlowAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedAuraFlowAttnProcessor2_0
+
+## CogVideoX
+
+[[autodoc]] models.attention_processor.CogVideoXAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedCogVideoXAttnProcessor2_0
+
 ## CrossFrameAttnProcessor
+
 [[autodoc]] pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor
 
-## CustomDiffusionAttnProcessor
+## Custom Diffusion
+
 [[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor
 
-## CustomDiffusionAttnProcessor2_0
 [[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor2_0
 
-## CustomDiffusionXFormersAttnProcessor
 [[autodoc]] models.attention_processor.CustomDiffusionXFormersAttnProcessor
 
-## FusedAttnProcessor2_0
-[[autodoc]] models.attention_processor.FusedAttnProcessor2_0
+## Flux
+
+[[autodoc]] models.attention_processor.FluxAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedFluxAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FluxSingleAttnProcessor2_0
+
+## Hunyuan
+
+[[autodoc]] models.attention_processor.HunyuanAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedHunyuanAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGHunyuanAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGHunyuanAttnProcessor2_0
+
+## IdentitySelfAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGIdentitySelfAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0
+
+## IP-Adapter
+
+[[autodoc]] models.attention_processor.IPAdapterAttnProcessor
+
+[[autodoc]] models.attention_processor.IPAdapterAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.SD3IPAdapterJointAttnProcessor2_0
+
+## JointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.JointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGJointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGJointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedJointAttnProcessor2_0
+
+## LoRA
+
+[[autodoc]] models.attention_processor.LoRAAttnProcessor
+
+[[autodoc]] models.attention_processor.LoRAAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.LoRAAttnAddedKVProcessor
+
+[[autodoc]] models.attention_processor.LoRAXFormersAttnProcessor
+
+## Lumina-T2X
+
+[[autodoc]] models.attention_processor.LuminaAttnProcessor2_0
+
+## Mochi
+
+[[autodoc]] models.attention_processor.MochiAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.MochiVaeAttnProcessor2_0
+
+## Sana
+
+[[autodoc]] models.attention_processor.SanaLinearAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.SanaMultiscaleAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0
+
+## Stable Audio
+
+[[autodoc]] models.attention_processor.StableAudioAttnProcessor2_0
 
 ## SlicedAttnProcessor
+
 [[autodoc]] models.attention_processor.SlicedAttnProcessor
 
-## SlicedAttnAddedKVProcessor
 [[autodoc]] models.attention_processor.SlicedAttnAddedKVProcessor
 
 ## XFormersAttnProcessor
+
 [[autodoc]] models.attention_processor.XFormersAttnProcessor
 
-## AttnProcessorNPU
-[[autodoc]] models.attention_processor.AttnProcessorNPU
+[[autodoc]] models.attention_processor.XFormersAttnAddedKVProcessor
+
+## XLAFlashAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.XLAFlashAttnProcessor2_0
```
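This hunk only regroups the autodoc entries by model family; no processor behavior changes. For orientation, a minimal sketch (not from this commit) of how a processor from this page is applied to a model — the checkpoint id is an illustrative assumption:

```python
import torch

from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

# Assumed checkpoint, for illustration only; any UNet-based checkpoint works the same way.
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    subfolder="unet",
    torch_dtype=torch.float16,
)

# Swap every attention module over to the PyTorch 2.0 scaled-dot-product processor.
unet.set_attn_processor(AttnProcessor2_0())
```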

docs/source/en/api/loaders/ip_adapter.md
Lines changed: 6 additions & 0 deletions

```diff
@@ -24,6 +24,12 @@ Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading]
 
 [[autodoc]] loaders.ip_adapter.IPAdapterMixin
 
+## SD3IPAdapterMixin
+
+[[autodoc]] loaders.ip_adapter.SD3IPAdapterMixin
+  - all
+  - is_ip_adapter_active
+
 ## IPAdapterMaskProcessor
 
 [[autodoc]] image_processor.IPAdapterMaskProcessor
```
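The new mixin brings IP-Adapter support to Stable Diffusion 3 pipelines. A hedged sketch of the intended call pattern — the base-model and adapter repo ids are assumptions for illustration, not taken from this commit:

```python
import torch

from diffusers import StableDiffusion3Pipeline

# Assumed base checkpoint for illustration.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
)

# load_ip_adapter comes from the new SD3IPAdapterMixin; the adapter repo id is assumed.
pipe.load_ip_adapter("InstantX/SD3.5-Large-IP-Adapter")
pipe.set_ip_adapter_scale(0.6)

# The documented helper reports whether an adapter is loaded and active.
print(pipe.is_ip_adapter_active)
```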

docs/source/en/api/loaders/lora.md
Lines changed: 15 additions & 0 deletions

```diff
@@ -17,6 +17,9 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
 - [`StableDiffusionLoraLoaderMixin`] provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
 - [`StableDiffusionXLLoraLoaderMixin`] is a [Stable Diffusion (SDXL)](../../api/pipelines/stable_diffusion/stable_diffusion_xl) version of the [`StableDiffusionLoraLoaderMixin`] class for loading and saving LoRA weights. It can only be used with the SDXL model.
 - [`SD3LoraLoaderMixin`] provides similar functions for [Stable Diffusion 3](https://huggingface.co/blog/sd3).
+- [`FluxLoraLoaderMixin`] provides similar functions for [Flux](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux).
+- [`CogVideoXLoraLoaderMixin`] provides similar functions for [CogVideoX](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogvideox).
+- [`Mochi1LoraLoaderMixin`] provides similar functions for [Mochi](https://huggingface.co/docs/diffusers/main/en/api/pipelines/mochi).
 - [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`].
 - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, unload, LoRAs and more.
 
@@ -38,6 +41,18 @@ To learn more about how to load LoRA weights, see the [LoRA](../../using-diffuse
 
 [[autodoc]] loaders.lora_pipeline.SD3LoraLoaderMixin
 
+## FluxLoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.FluxLoraLoaderMixin
+
+## CogVideoXLoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.CogVideoXLoraLoaderMixin
+
+## Mochi1LoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.Mochi1LoraLoaderMixin
+
 ## AmusedLoraLoaderMixin
 
 [[autodoc]] loaders.lora_pipeline.AmusedLoraLoaderMixin
```
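These mixins are inherited by their matching pipelines, so in practice LoRA weights load through `load_lora_weights`. A minimal sketch for Flux — the adapter repo id below is an illustrative assumption:

```python
import torch

from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# FluxPipeline inherits FluxLoraLoaderMixin; the adapter repo id here is hypothetical.
pipe.load_lora_weights("your-username/your-flux-lora", adapter_name="style")

# Utility methods from the mixin / LoraBaseMixin: fuse for inference speed, unfuse to revert.
pipe.fuse_lora()
```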
docs/source/en/api/loaders/transformer_sd3.md (new file; path inferred from the `api/loaders/transformer_sd3` toctree entry above)
Lines changed: 29 additions & 0 deletions

```diff
@@ -0,0 +1,29 @@
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License.
+-->
+
+# SD3Transformer2D
+
+This class is useful when *only* loading weights into an [`SD3Transformer2DModel`]. If you need to load weights into the text encoder, or into both a text encoder and the SD3Transformer2DModel, use the [`SD3LoraLoaderMixin`](lora#diffusers.loaders.SD3LoraLoaderMixin) class instead.
+
+The [`SD3Transformer2DLoadersMixin`] class currently only loads IP-Adapter weights, but will be used in the future to save weights and load LoRAs.
+
+<Tip>
+
+To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
+
+</Tip>
+
+## SD3Transformer2DLoadersMixin
+
+[[autodoc]] loaders.transformer_sd3.SD3Transformer2DLoadersMixin
+  - all
+  - _load_ip_adapter_weights
```

docs/source/en/api/models/autoencoder_dc.md
Lines changed: 2 additions & 0 deletions

````diff
@@ -29,6 +29,8 @@ The following DCAE models are released and supported in Diffusers.
 | [`mit-han-lab/dc-ae-f128c512-in-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0-diffusers) | [`mit-han-lab/dc-ae-f128c512-in-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0)
 | [`mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers) | [`mit-han-lab/dc-ae-f128c512-mix-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-mix-1.0)
 
+This model was contributed by [lawrence-cj](https://github.com/lawrence-cj).
+
 Load a model in Diffusers format with [`~ModelMixin.from_pretrained`].
 
 ```python
````
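For context, a short sketch of the loading call the doc page describes, using a repo id from the table in the diff above (the encode/decode round-trip beyond `from_pretrained` is an assumption about typical usage):

```python
import torch

from diffusers import AutoencoderDC

# Repo id taken from the table above; weights are in Diffusers format.
ae = AutoencoderDC.from_pretrained("mit-han-lab/dc-ae-f128c512-in-1.0-diffusers")

# Round-trip a dummy image through the f128 autoencoder (128x spatial compression).
x = torch.randn(1, 3, 512, 512)
latent = ae.encode(x).latent
reconstructed = ae.decode(latent).sample
```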
docs/source/en/api/models/autoencoder_kl_hunyuan_video.md (new file; path inferred from the `api/models/autoencoder_kl_hunyuan_video` toctree entry above)
Lines changed: 32 additions & 0 deletions

````diff
@@ -0,0 +1,32 @@
+<!-- Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License. -->
+
+# AutoencoderKLHunyuanVideo
+
+The 3D variational autoencoder (VAE) model with KL loss used in [HunyuanVideo](https://github.com/Tencent/HunyuanVideo/), which was introduced in [HunyuanVideo: A Systematic Framework For Large Video Generative Models](https://huggingface.co/papers/2412.03603) by Tencent.
+
+The model can be loaded with the following code snippet.
+
+```python
+import torch
+from diffusers import AutoencoderKLHunyuanVideo
+
+vae = AutoencoderKLHunyuanVideo.from_pretrained("tencent/HunyuanVideo", torch_dtype=torch.float16)
+```
+
+## AutoencoderKLHunyuanVideo
+
+[[autodoc]] AutoencoderKLHunyuanVideo
+  - decode
+  - all
+
+## DecoderOutput
+
+[[autodoc]] models.autoencoders.vae.DecoderOutput
```
````
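A brief hedged sketch of what the documented `decode` method consumes, mirroring the snippet above. The latent layout — 16 channels, 8x spatial and 4x temporal compression — is an assumption about the HunyuanVideo VAE, not taken from this commit:

```python
import torch

from diffusers import AutoencoderKLHunyuanVideo

vae = AutoencoderKLHunyuanVideo.from_pretrained(
    "tencent/HunyuanVideo", torch_dtype=torch.float16
).to("cuda")

# Assumed latent layout: (batch, channels=16, frames, height/8, width/8).
latents = torch.randn(1, 16, 9, 40, 64, dtype=torch.float16, device="cuda")
video = vae.decode(latents).sample  # pixel-space video tensor
```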
