
Commit 74bb1a7

Merge branch 'main' into sana-lora
2 parents (8882813 + 2739241)

63 files changed (+3903, −263 lines)


.github/workflows/nightly_tests.yml

Lines changed: 2 additions & 0 deletions
@@ -357,6 +357,8 @@ jobs:
         config:
           - backend: "bitsandbytes"
             test_location: "bnb"
+          - backend: "gguf"
+            test_location: "gguf"
     runs-on:
       group: aws-g6e-xlarge-plus
     container:

docs/source/en/_toctree.yml

Lines changed: 4 additions & 0 deletions
@@ -157,6 +157,10 @@
       title: Getting Started
     - local: quantization/bitsandbytes
       title: bitsandbytes
+    - local: quantization/gguf
+      title: gguf
+    - local: quantization/torchao
+      title: torchao
     title: Quantization Methods
 - sections:
   - local: optimization/fp16

docs/source/en/api/attnprocessor.md

Lines changed: 104 additions & 11 deletions
@@ -15,40 +15,133 @@ specific language governing permissions and limitations under the License.
 An attention processor is a class for applying different types of attention mechanisms.

 ## AttnProcessor
+
 [[autodoc]] models.attention_processor.AttnProcessor

-## AttnProcessor2_0
 [[autodoc]] models.attention_processor.AttnProcessor2_0

-## AttnAddedKVProcessor
 [[autodoc]] models.attention_processor.AttnAddedKVProcessor

-## AttnAddedKVProcessor2_0
 [[autodoc]] models.attention_processor.AttnAddedKVProcessor2_0

+[[autodoc]] models.attention_processor.AttnProcessorNPU
+
+[[autodoc]] models.attention_processor.FusedAttnProcessor2_0
+
+## Allegro
+
+[[autodoc]] models.attention_processor.AllegroAttnProcessor2_0
+
+## AuraFlow
+
+[[autodoc]] models.attention_processor.AuraFlowAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedAuraFlowAttnProcessor2_0
+
+## CogVideoX
+
+[[autodoc]] models.attention_processor.CogVideoXAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedCogVideoXAttnProcessor2_0
+
 ## CrossFrameAttnProcessor
+
 [[autodoc]] pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor

-## CustomDiffusionAttnProcessor
+## Custom Diffusion
+
 [[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor

-## CustomDiffusionAttnProcessor2_0
 [[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor2_0

-## CustomDiffusionXFormersAttnProcessor
 [[autodoc]] models.attention_processor.CustomDiffusionXFormersAttnProcessor

-## FusedAttnProcessor2_0
-[[autodoc]] models.attention_processor.FusedAttnProcessor2_0
+## Flux
+
+[[autodoc]] models.attention_processor.FluxAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedFluxAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FluxSingleAttnProcessor2_0
+
+## Hunyuan
+
+[[autodoc]] models.attention_processor.HunyuanAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedHunyuanAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGHunyuanAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGHunyuanAttnProcessor2_0
+
+## IdentitySelfAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGIdentitySelfAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGIdentitySelfAttnProcessor2_0
+
+## IP-Adapter
+
+[[autodoc]] models.attention_processor.IPAdapterAttnProcessor
+
+[[autodoc]] models.attention_processor.IPAdapterAttnProcessor2_0
+
+## JointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.JointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGJointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGJointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FusedJointAttnProcessor2_0
+
+## LoRA
+
+[[autodoc]] models.attention_processor.LoRAAttnProcessor
+
+[[autodoc]] models.attention_processor.LoRAAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.LoRAAttnAddedKVProcessor
+
+[[autodoc]] models.attention_processor.LoRAXFormersAttnProcessor
+
+## Lumina-T2X
+
+[[autodoc]] models.attention_processor.LuminaAttnProcessor2_0
+
+## Mochi
+
+[[autodoc]] models.attention_processor.MochiAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.MochiVaeAttnProcessor2_0
+
+## Sana
+
+[[autodoc]] models.attention_processor.SanaLinearAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.SanaMultiscaleAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGCFGSanaLinearAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.PAGIdentitySanaLinearAttnProcessor2_0
+
+## Stable Audio
+
+[[autodoc]] models.attention_processor.StableAudioAttnProcessor2_0

 ## SlicedAttnProcessor
+
 [[autodoc]] models.attention_processor.SlicedAttnProcessor

-## SlicedAttnAddedKVProcessor
 [[autodoc]] models.attention_processor.SlicedAttnAddedKVProcessor

 ## XFormersAttnProcessor
+
 [[autodoc]] models.attention_processor.XFormersAttnProcessor

-## AttnProcessorNPU
-[[autodoc]] models.attention_processor.AttnProcessorNPU
+[[autodoc]] models.attention_processor.XFormersAttnAddedKVProcessor
+
+## XLAFlashAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.XLAFlashAttnProcessor2_0
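
The classes documented on this page are applied to a model through its attention-processor API; a minimal sketch of that usage follows (the checkpoint id is an assumption chosen only for illustration, not part of this commit):

```python
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

# Load a UNet and switch every attention layer to the PyTorch 2.0
# scaled-dot-product-attention processor.
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed checkpoint for illustration
    subfolder="unet",
    torch_dtype=torch.float16,
)
unet.set_attn_processor(AttnProcessor2_0())

# attn_processors maps each attention layer name to its current processor.
print(set(type(p).__name__ for p in unet.attn_processors.values()))
```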

docs/source/en/api/loaders/lora.md

Lines changed: 15 additions & 0 deletions
@@ -17,6 +17,9 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
 - [`StableDiffusionLoraLoaderMixin`] provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
 - [`StableDiffusionXLLoraLoaderMixin`] is a [Stable Diffusion (SDXL)](../../api/pipelines/stable_diffusion/stable_diffusion_xl) version of the [`StableDiffusionLoraLoaderMixin`] class for loading and saving LoRA weights. It can only be used with the SDXL model.
 - [`SD3LoraLoaderMixin`] provides similar functions for [Stable Diffusion 3](https://huggingface.co/blog/sd3).
+- [`FluxLoraLoaderMixin`] provides similar functions for [Flux](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux).
+- [`CogVideoXLoraLoaderMixin`] provides similar functions for [CogVideoX](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogvideox).
+- [`Mochi1LoraLoaderMixin`] provides similar functions for [Mochi](https://huggingface.co/docs/diffusers/main/en/api/pipelines/mochi).
 - [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`].
 - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, unload, LoRAs and more.

@@ -38,6 +41,18 @@ To learn more about how to load LoRA weights, see the [LoRA](../../using-diffuse

 [[autodoc]] loaders.lora_pipeline.SD3LoraLoaderMixin

+## FluxLoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.FluxLoraLoaderMixin
+
+## CogVideoXLoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.CogVideoXLoraLoaderMixin
+
+## Mochi1LoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.Mochi1LoraLoaderMixin
+
 ## AmusedLoraLoaderMixin

 [[autodoc]] loaders.lora_pipeline.AmusedLoraLoaderMixin
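
As a rough illustration of how these mixins surface in practice, [`FluxPipeline`] inherits from [`FluxLoraLoaderMixin`], so LoRA weights can be loaded and managed directly on the pipeline. A minimal sketch; the LoRA repository id below is a hypothetical placeholder, not something referenced by this commit:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Methods provided by FluxLoraLoaderMixin / LoraBaseMixin.
pipe.load_lora_weights("some-user/some-flux-lora", adapter_name="example")  # hypothetical repo
pipe.fuse_lora()            # merge the LoRA weights into the base weights
pipe.unfuse_lora()          # undo the merge
pipe.unload_lora_weights()  # remove the LoRA entirely
```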

docs/source/en/api/models/autoencoder_dc.md

Lines changed: 2 additions & 0 deletions
@@ -29,6 +29,8 @@ The following DCAE models are released and supported in Diffusers.
 | [`mit-han-lab/dc-ae-f128c512-in-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0-diffusers) | [`mit-han-lab/dc-ae-f128c512-in-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0)
 | [`mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers) | [`mit-han-lab/dc-ae-f128c512-mix-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-mix-1.0)

+This model was contributed by [lawrence-cj](https://github.com/lawrence-cj).
+
 Load a model in Diffusers format with [`~ModelMixin.from_pretrained`].

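
The file follows that sentence with a code example that falls outside this hunk; a minimal sketch of such a loading call, assuming one of the DCAE checkpoints listed in the table above and an illustrative dtype:

```python
import torch
from diffusers import AutoencoderDC

# Load a DCAE autoencoder in Diffusers format; any of the *-diffusers
# repositories from the table above can be substituted here.
dc_ae = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-f128c512-in-1.0-diffusers",
    torch_dtype=torch.float32,
)
```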

docs/source/en/api/pipelines/sana.md

Lines changed: 2 additions & 0 deletions
@@ -42,6 +42,8 @@ Available models:

 Refer to [this](https://huggingface.co/collections/Efficient-Large-Model/sana-673efba2a57ed99843f11f9e) collection for more information.

+Note: The recommended dtype mentioned is for the transformer weights. The text encoder and VAE weights must stay in `torch.bfloat16` or `torch.float32` for the model to work correctly. Please refer to the inference example below to see how to load the model with the recommended dtype.
+
 <Tip>

 Make sure to pass the `variant` argument for downloaded checkpoints to use lower disk space. Set it to `"fp16"` for models with recommended dtype as `torch.float16`, and `"bf16"` for models with recommended dtype as `torch.bfloat16`. By default, `torch.float32` weights are downloaded, which use twice the amount of disk storage. Additionally, `torch.float32` weights can be downcasted on-the-fly by specifying the `torch_dtype` argument. Read about it in the [docs](https://huggingface.co/docs/diffusers/v0.31.0/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained).
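
To make the note and the `variant` tip concrete, loading might look like the following sketch; the checkpoint id and the choice of `bf16` are assumptions based on the Sana collection rather than values taken from this commit:

```python
import torch
from diffusers import SanaPipeline

# Download the bf16 variant to save disk space and keep the transformer
# in its recommended dtype; the text encoder and VAE stay in bfloat16.
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",  # assumed checkpoint
    variant="bf16",
    torch_dtype=torch.bfloat16,
)
pipe.text_encoder.to(torch.bfloat16)
pipe.vae.to(torch.bfloat16)

image = pipe(prompt="a tiny astronaut hatching from an egg on the moon").images[0]
image.save("sana.png")
```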

docs/source/en/api/quantization.md

Lines changed: 7 additions & 0 deletions
@@ -28,6 +28,13 @@ Learn how to quantize models in the [Quantization](../quantization/overview) gui

 [[autodoc]] BitsAndBytesConfig

+## GGUFQuantizationConfig
+
+[[autodoc]] GGUFQuantizationConfig
+## TorchAoConfig
+
+[[autodoc]] TorchAoConfig
+
 ## DiffusersQuantizer

 [[autodoc]] quantizers.base.DiffusersQuantizer
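
Both new config classes follow the same pattern as [`BitsAndBytesConfig`]: instantiate one and pass it as `quantization_config` when loading a model. A minimal sketch; the `"int8wo"` quant type and the checkpoint below are illustrative assumptions:

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig, TorchAoConfig

# GGUF checkpoints are dequantized on the fly to compute_dtype.
gguf_config = GGUFQuantizationConfig(compute_dtype=torch.bfloat16)

# torchao quantizes the weights while the model loads; "int8wo" is an
# int8 weight-only scheme.
torchao_config = TorchAoConfig("int8wo")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed checkpoint for illustration
    subfolder="transformer",
    quantization_config=torchao_config,
    torch_dtype=torch.bfloat16,
)
```
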
docs/source/en/quantization/gguf.md

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

-->

# GGUF

The GGUF file format is typically used to store models for inference with [GGML](https://github.com/ggerganov/ggml) and supports a variety of block-wise quantization options. Diffusers supports loading checkpoints that were prequantized and saved in the GGUF format via `from_single_file` loading with model classes. Loading GGUF checkpoints via pipelines is currently not supported.

The following example loads the [FLUX.1 DEV](https://huggingface.co/black-forest-labs/FLUX.1-dev) transformer model using the GGUF Q2_K quantization variant.

Before starting, please install gguf in your environment:

```shell
pip install -U gguf
```

Since GGUF is a single-file format, use [`~FromSingleFileMixin.from_single_file`] to load the model and pass in the [`GGUFQuantizationConfig`].

When using GGUF checkpoints, the quantized weights remain in a low-memory `dtype` (typically `torch.uint8`) and are dynamically dequantized and cast to the configured `compute_dtype` during each module's forward pass through the model. The `GGUFQuantizationConfig` allows you to set the `compute_dtype`.

The functions used for dynamic dequantization are based on the great work done by [city96](https://github.com/city96/ComfyUI-GGUF), who created the PyTorch ports of the original [`numpy`](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/quants.py) implementation by [compilade](https://github.com/compilade).

```python
import torch

from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

ckpt_path = (
    "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"
)
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, generator=torch.manual_seed(0)).images[0]
image.save("flux-gguf.png")
```

## Supported Quantization Types

- BF16
- Q4_0
- Q4_1
- Q5_0
- Q5_1
- Q8_0
- Q2_K
- Q3_K
- Q4_K
- Q5_K
- Q6_K
docs/source/en/quantization/overview.md

Lines changed: 7 additions & 2 deletions
@@ -17,7 +17,7 @@ Quantization techniques focus on representing data with less information while a

 <Tip>

-Interested in adding a new quantization method to Transformers? Refer to the [Contribute new quantization method guide](https://huggingface.co/docs/transformers/main/en/quantization/contribute) to learn more about adding a new quantization method.
+Interested in adding a new quantization method to Diffusers? Refer to the [Contribute new quantization method guide](https://huggingface.co/docs/transformers/main/en/quantization/contribute) to learn more about adding a new quantization method.

 </Tip>

@@ -32,4 +32,9 @@ If you are new to the quantization field, we recommend you to check out these be

 ## When to use what?

-This section will be expanded once Diffusers has multiple quantization backends. Currently, we only support `bitsandbytes`. [This resource](https://huggingface.co/docs/transformers/main/en/quantization/overview#when-to-use-what) provides a good overview of the pros and cons of different quantization techniques.
+Diffusers currently supports the following quantization methods.
+- [BitsandBytes]()
+- [TorchAO]()
+- [GGUF]()
+
+[This resource](https://huggingface.co/docs/transformers/main/en/quantization/overview#when-to-use-what) provides a good overview of the pros and cons of different quantization techniques.
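
For comparison with the GGUF and torchao examples earlier in this commit, here is a minimal bitsandbytes sketch of the kind of workflow this list points to; the 4-bit settings and the checkpoint are illustrative assumptions, not content from this commit:

```python
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

# 4-bit NF4 weight quantization: a common choice when the model is only
# used for inference and memory is the main constraint.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",  # assumed checkpoint
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)
```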
