
Commit 5009449

Merge branch 'main' into attn-refactor-ltx
2 parents 8c8b44a + 843e3f9 · commit 5009449

158 files changed (+1643, -223 lines)


benchmarks/README.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ pip install -r requirements.txt
We need to be authenticated to access some of the checkpoints used during benchmarking:

```sh
-huggingface-cli login
+hf auth login
```

We use an L40 GPU with 128GB RAM to run the benchmark CI. As such, the benchmarks are configured to run on NVIDIA GPUs. So, make sure you have access to a similar machine (or modify the benchmarking scripts accordingly).

docs/source/en/_toctree.yml

Lines changed: 1 addition & 1 deletion
@@ -179,7 +179,7 @@
isExpanded: false
sections:
- local: quantization/overview
-  title: Getting Started
+  title: Getting started
- local: quantization/bitsandbytes
  title: bitsandbytes
- local: quantization/gguf

docs/source/en/api/configuration.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ Schedulers from [`~schedulers.scheduling_utils.SchedulerMixin`] and models from

<Tip>

-To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `huggingface-cli login`.
+To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `hf auth login`.

</Tip>

docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ _As the model is gated, before using it with diffusers you first need to go to t
Use the command below to log in:

```bash
-huggingface-cli login
+hf auth login
```

<Tip>

docs/source/en/api/quantization.md

Lines changed: 4 additions & 4 deletions
@@ -27,19 +27,19 @@ Learn how to quantize models in the [Quantization](../quantization/overview) gui

## BitsAndBytesConfig

-[[autodoc]] BitsAndBytesConfig
+[[autodoc]] quantizers.quantization_config.BitsAndBytesConfig

## GGUFQuantizationConfig

-[[autodoc]] GGUFQuantizationConfig
+[[autodoc]] quantizers.quantization_config.GGUFQuantizationConfig

## QuantoConfig

-[[autodoc]] QuantoConfig
+[[autodoc]] quantizers.quantization_config.QuantoConfig

## TorchAoConfig

-[[autodoc]] TorchAoConfig
+[[autodoc]] quantizers.quantization_config.TorchAoConfig

## DiffusersQuantizer
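For quick reference, these config classes are also exposed at the top level of `diffusers` and can be passed to a model's `from_pretrained` via `quantization_config`. A minimal sketch (the checkpoint and `subfolder` below are illustrative, not taken from this commit):

```py
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# 4-bit NF4 quantization config (diffusers' own class, not the Transformers one)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# quantize the transformer while loading it (illustrative checkpoint)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```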

docs/source/en/quantization/overview.md

Lines changed: 17 additions & 11 deletions
@@ -11,27 +11,33 @@ specific language governing permissions and limitations under the License.

-->

-# Quantization
+# Getting started

Quantization focuses on representing data with fewer bits while also trying to preserve the precision of the original data. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size which makes it easier to store and reduces memory usage. Lower precision can also speedup inference because it takes less time to perform calculations with fewer bits.

Diffusers supports multiple quantization backends to make large diffusion models like [Flux](../api/pipelines/flux) more accessible. This guide shows how to use the [`~quantizers.PipelineQuantizationConfig`] class to quantize a pipeline during its initialization from a pretrained or non-quantized checkpoint.

## Pipeline-level quantization

-There are two ways you can use [`~quantizers.PipelineQuantizationConfig`] depending on the level of control you want over the quantization specifications of each model in the pipeline.
+There are two ways to use [`~quantizers.PipelineQuantizationConfig`] depending on how much customization you want to apply to the quantization configuration.

-- for more basic and simple use cases, you only need to define the `quant_backend`, `quant_kwargs`, and `components_to_quantize`
-- for more granular quantization control, provide a `quant_mapping` that provides the quantization specifications for the individual model components
+- for basic use cases, define the `quant_backend`, `quant_kwargs`, and `components_to_quantize` arguments
+- for granular quantization control, define a `quant_mapping` that provides the quantization configuration for individual model components

-### Simple quantization
+### Basic quantization

Initialize [`~quantizers.PipelineQuantizationConfig`] with the following parameters.

- `quant_backend` specifies which quantization backend to use. Currently supported backends include: `bitsandbytes_4bit`, `bitsandbytes_8bit`, `gguf`, `quanto`, and `torchao`.
-- `quant_kwargs` contains the specific quantization arguments to use.
+- `quant_kwargs` specifies the quantization arguments to use.
+
+> [!TIP]
+> These `quant_kwargs` arguments are different for each backend. Refer to the [Quantization API](../api/quantization) docs to view the arguments for each backend.
+
- `components_to_quantize` specifies which components of the pipeline to quantize. Typically, you should quantize the most compute intensive components like the transformer. The text encoder is another component to consider quantizing if a pipeline has more than one such as [`FluxPipeline`]. The example below quantizes the T5 text encoder in [`FluxPipeline`] while keeping the CLIP model intact.

+The example below loads the bitsandbytes backend with the following arguments from [`~quantizers.quantization_config.BitsAndBytesConfig`], `load_in_4bit`, `bnb_4bit_quant_type`, and `bnb_4bit_compute_dtype`.
+
```py
import torch
from diffusers import DiffusionPipeline

@@ -56,13 +62,13 @@ pipe = DiffusionPipeline.from_pretrained(
image = pipe("photo of a cute dog").images[0]
```

-### quant_mapping
+### Advanced quantization

-The `quant_mapping` argument provides more flexible options for how to quantize each individual component in a pipeline, like combining different quantization backends.
+The `quant_mapping` argument provides more options for how to quantize each individual component in a pipeline, like combining different quantization backends.

Initialize [`~quantizers.PipelineQuantizationConfig`] and pass a `quant_mapping` to it. The `quant_mapping` allows you to specify the quantization options for each component in the pipeline such as the transformer and text encoder.

-The example below uses two quantization backends, [`~quantizers.QuantoConfig`] and [`transformers.BitsAndBytesConfig`], for the transformer and text encoder.
+The example below uses two quantization backends, [`~quantizers.quantization_config.QuantoConfig`] and [`transformers.BitsAndBytesConfig`], for the transformer and text encoder.

```py
import torch

@@ -85,7 +91,7 @@ pipeline_quant_config = PipelineQuantizationConfig(
There is a separate bitsandbytes backend in [Transformers](https://huggingface.co/docs/transformers/main_classes/quantization#transformers.BitsAndBytesConfig). You need to import and use [`transformers.BitsAndBytesConfig`] for components that come from Transformers. For example, `text_encoder_2` in [`FluxPipeline`] is a [`~transformers.T5EncoderModel`] from Transformers so you need to use [`transformers.BitsAndBytesConfig`] instead of [`diffusers.BitsAndBytesConfig`].

> [!TIP]
-> Use the [simple quantization](#simple-quantization) method above if you don't want to manage these distinct imports or aren't sure where each pipeline component comes from.
+> Use the [basic quantization](#basic-quantization) method above if you don't want to manage these distinct imports or aren't sure where each pipeline component comes from.

```py
import torch

@@ -129,4 +135,4 @@ Check out the resources below to learn more about quantization.

- The Transformers quantization [Overview](https://huggingface.co/docs/transformers/quantization/overview#when-to-use-what) provides an overview of the pros and cons of different quantization backends.

-- Read the [Exploring Quantization Backends in Diffusers](https://huggingface.co/blog/diffusers-quantization) blog post for a brief introduction to each quantization backend, how to choose a backend, and combining quantization with other memory optimizations.
+- Read the [Exploring Quantization Backends in Diffusers](https://huggingface.co/blog/diffusers-quantization) blog post for a brief introduction to each quantization backend, how to choose a backend, and combining quantization with other memory optimizations.
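The `quant_mapping` code in the hunks above is cut off at the hunk boundaries, so here is a hedged sketch of what a complete advanced example can look like, assuming the backends named in this guide (`QuantoConfig` with `weights_dtype` and `transformers.BitsAndBytesConfig`); the checkpoint and component names are illustrative:

```py
import torch
from diffusers import DiffusionPipeline, QuantoConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        # diffusers QuantoConfig for the diffusion transformer
        "transformer": QuantoConfig(weights_dtype="int8"),
        # transformers BitsAndBytesConfig for the T5 text encoder
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
        ),
    }
)

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```

Each key in `quant_mapping` names a pipeline component, which is why the Transformers config goes to `text_encoder_2` and the diffusers config to `transformer`, in line with the note above about the two distinct `BitsAndBytesConfig` classes.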

docs/source/en/training/cogvideox.md

Lines changed: 2 additions & 2 deletions
@@ -145,10 +145,10 @@ When running `accelerate config`, if you use torch.compile, there can be dramati
If you would like to push your model to the Hub after training is completed with a neat model card, make sure you're logged in:

```bash
-huggingface-cli login
+hf auth login

# Alternatively, you could upload your model manually using:
-# huggingface-cli upload my-cool-account-name/my-cool-lora-name /path/to/awesome/lora
+# hf upload my-cool-account-name/my-cool-lora-name /path/to/awesome/lora
```

Make sure your data is prepared as described in [Data Preparation](#data-preparation). When ready, you can begin training!

docs/source/en/training/create_dataset.md

Lines changed: 1 addition & 1 deletion
@@ -67,7 +67,7 @@ dataset = load_dataset(
Then use the [`~datasets.Dataset.push_to_hub`] method to upload the dataset to the Hub:

```python
-# assuming you have ran the huggingface-cli login command in a terminal
+# assuming you have ran the hf auth login command in a terminal
dataset.push_to_hub("name_of_your_dataset")

# if you want to push to a private repo, simply pass private=True:

docs/source/en/tutorials/basic_training.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ We encourage you to share your model with the community, and in order to do that
Or login in from the terminal:

```bash
-huggingface-cli login
+hf auth login
```

Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.com/) to version these large files:

docs/source/en/tutorials/using_peft_for_inference.md

Lines changed: 16 additions & 1 deletion
@@ -319,6 +319,19 @@ If you expect to varied resolutions during inference with this feature, then mak

There are still scenarios where recompulation is unavoidable, such as when the hotswapped LoRA targets more layers than the initial adapter. Try to load the LoRA that targets the most layers *first*. For more details about this limitation, refer to the PEFT [hotswapping](https://huggingface.co/docs/peft/main/en/package_reference/hotswap#peft.utils.hotswap.hotswap_adapter) docs.

+<details>
+<summary>Technical details of hotswapping</summary>
+
+The [`~loaders.lora_base.LoraBaseMixin.enable_lora_hotswap`] method converts the LoRA scaling factor from floats to torch.tensors and pads the shape of the weights to the largest required shape to avoid reassigning the whole attribute when the data in the weights are replaced.
+
+This is why the `max_rank` argument is important. The results are unchanged even when the values are padded with zeros. Computation may be slower though depending on the padding size.
+
+Since no new LoRA attributes are added, each subsequent LoRA is only allowed to target the same layers, or subset of layers, the first LoRA targets. Choosing the LoRA loading order is important because if the LoRAs target disjoint layers, you may end up creating a dummy LoRA that targets the union of all target layers.
+
+For more implementation details, take a look at the [`hotswap.py`](https://github.com/huggingface/peft/blob/92d65cafa51c829484ad3d95cf71d09de57ff066/src/peft/utils/hotswap.py) file.
+
+</details>
+
## Merge

The weights from each LoRA can be merged together to produce a blend of multiple existing styles. There are several methods for merging LoRAs, each of which differ in *how* the weights are merged (may affect generation quality).

@@ -673,4 +686,6 @@ Browse the [LoRA Studio](https://lorastudio.co/models) for different LoRAs to us
height="450"
></iframe>

-You can find additional LoRAs in the [FLUX LoRA the Explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) and [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer) Spaces.
+You can find additional LoRAs in the [FLUX LoRA the Explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) and [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer) Spaces.
+
+Check out the [Fast LoRA inference for Flux with Diffusers and PEFT](https://huggingface.co/blog/lora-fast) blog post to learn how to optimize LoRA inference with methods like FlashAttention-3 and fp8 quantization.
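To make the hotswapping details added above concrete, here is a minimal sketch of the flow they describe. The repositories are placeholders, and the `target_rank` keyword (standing in for the `max_rank` value discussed above) and the adapter name are assumptions rather than something this diff documents:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# reserve padded LoRA buffers up front so later swaps don't reallocate (keyword assumed)
pipe.enable_lora_hotswap(target_rank=64)

# load the first LoRA; it should target the largest set of layers (placeholder repo)
pipe.load_lora_weights("<user>/<lora-a>", adapter_name="default_0")

# optional: compile once, then reuse the compiled graph across swaps
pipe.unet = torch.compile(pipe.unet, mode="max-autotune")

# hotswap a second LoRA in place of the first without triggering recompilation (placeholder repo)
pipe.load_lora_weights("<user>/<lora-b>", hotswap=True, adapter_name="default_0")
```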
