
Commit 08c26a8

Merge branch 'main' into compile-ci

2 parents 3b383ed + b5c2050

File tree: 21 files changed (+376, -251 lines)


docs/source/en/_toctree.yml

Lines changed: 1 addition & 1 deletion
@@ -208,7 +208,7 @@
 - local: optimization/mps
   title: Metal Performance Shaders (MPS)
 - local: optimization/habana
-  title: Habana Gaudi
+  title: Intel Gaudi
 - local: optimization/neuron
   title: AWS Neuron
 title: Optimized hardware

docs/source/en/optimization/habana.md

Lines changed: 12 additions & 57 deletions
@@ -10,67 +10,22 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
 specific language governing permissions and limitations under the License.
 -->
 
-# Habana Gaudi
+# Intel Gaudi
 
-🤗 Diffusers is compatible with Habana Gaudi through 🤗 [Optimum](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion). Follow the [installation](https://docs.habana.ai/en/latest/Installation_Guide/index.html) guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana:
+The Intel Gaudi AI accelerator family includes [Intel Gaudi 1](https://habana.ai/products/gaudi/), [Intel Gaudi 2](https://habana.ai/products/gaudi2/), and [Intel Gaudi 3](https://habana.ai/products/gaudi3/). Each server is equipped with 8 devices, known as Habana Processing Units (HPUs), providing 128GB of memory on Gaudi 3, 96GB on Gaudi 2, and 32GB on the first-gen Gaudi. For more details on the underlying hardware architecture, check out the [Gaudi Architecture](https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html) overview.
 
-```bash
-python -m pip install --upgrade-strategy eager optimum[habana]
-```
-
-To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances:
-
-- [`~optimum.habana.diffusers.GaudiStableDiffusionPipeline`], a pipeline for text-to-image generation.
-- [`~optimum.habana.diffusers.GaudiDDIMScheduler`], a Gaudi-optimized scheduler.
-
-When you initialize the pipeline, you have to specify `use_habana=True` to deploy it on HPUs and to get the fastest possible generation, you should enable **HPU graphs** with `use_hpu_graphs=True`.
+Diffusers pipelines can take advantage of HPU acceleration, even if a pipeline hasn't been added to [Optimum for Intel Gaudi](https://huggingface.co/docs/optimum/main/en/habana/index) yet, with the [GPU Migration Toolkit](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Model_Porting/GPU_Migration_Toolkit/GPU_Migration_Toolkit.html).
 
-Finally, specify a [`~optimum.habana.GaudiConfig`] which can be downloaded from the [Habana](https://huggingface.co/Habana) organization on the Hub.
-
-```python
-from optimum.habana import GaudiConfig
-from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline
-
-model_name = "stabilityai/stable-diffusion-2-base"
-scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
-pipeline = GaudiStableDiffusionPipeline.from_pretrained(
-    model_name,
-    scheduler=scheduler,
-    use_habana=True,
-    use_hpu_graphs=True,
-    gaudi_config="Habana/stable-diffusion-2",
-)
-```
+Call `.to("hpu")` on your pipeline to move it to a HPU device as shown below for Flux:
+```py
+import torch
+from diffusers import DiffusionPipeline
 
-Now you can call the pipeline to generate images by batches from one or several prompts:
+pipeline = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
+pipeline.to("hpu")
 
-```python
-outputs = pipeline(
-    prompt=[
-        "High quality photo of an astronaut riding a horse in space",
-        "Face of a yellow cat, high resolution, sitting on a park bench",
-    ],
-    num_images_per_prompt=10,
-    batch_size=4,
-)
+image = pipeline("An image of a squirrel in Picasso style").images[0]
 ```
 
-For more information, check out 🤗 Optimum Habana's [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion) and the [example](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion) provided in the official GitHub repository.
-
-## Benchmark
-
-We benchmarked Habana's first-generation Gaudi and Gaudi2 with the [Habana/stable-diffusion](https://huggingface.co/Habana/stable-diffusion) and [Habana/stable-diffusion-2](https://huggingface.co/Habana/stable-diffusion-2) Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance.
-
-For [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) on 512x512 images:
-
-|                        | Latency (batch size = 1) | Throughput                      |
-| ---------------------- |:------------------------:|:-------------------------------:|
-| first-generation Gaudi | 3.80s                    | 0.308 images/s (batch size = 8) |
-| Gaudi2                 | 1.33s                    | 1.081 images/s (batch size = 8) |
-
-For [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) on 768x768 images:
-
-|                        | Latency (batch size = 1) | Throughput                      |
-| ---------------------- |:------------------------:|:-------------------------------:|
-| first-generation Gaudi | 10.2s                    | 0.108 images/s (batch size = 4) |
-| Gaudi2                 | 3.17s                    | 0.379 images/s (batch size = 8) |
+> [!TIP]
+> For Gaudi-optimized diffusion pipeline implementations, we recommend using [Optimum for Intel Gaudi](https://huggingface.co/docs/optimum/main/en/habana/index).
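The new page points to the GPU Migration Toolkit for pipelines not yet covered by Optimum for Intel Gaudi, but does not show it in action. A minimal sketch (not part of this commit) follows; it assumes a SynapseAI installation where `habana_frameworks.torch.gpu_migration` is importable and that importing it remaps `cuda` device requests to the HPU, as described in the linked Habana documentation.

```py
# Sketch only, not from this commit: assumes importing
# habana_frameworks.torch.gpu_migration activates the GPU Migration Toolkit,
# which redirects "cuda" device requests to the HPU (see the Habana docs above).
import habana_frameworks.torch.gpu_migration  # noqa: F401  enables the migration hooks
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)

# An unmodified CUDA-oriented script targets "cuda"; with the toolkit active this
# lands on the HPU. Without the toolkit, call pipeline.to("hpu") directly as shown above.
pipeline.to("cuda")

image = pipeline("An image of a squirrel in Picasso style").images[0]
image.save("squirrel.png")
```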

docs/source/en/quantization/overview.md

Lines changed: 71 additions & 67 deletions
@@ -13,59 +13,98 @@ specific language governing permissions and limitations under the License.
 
 # Quantization
 
-Quantization techniques focus on representing data with less information while also trying to not lose too much accuracy. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size which makes it easier to store and reduces memory-usage. Lower precision can also speedup inference because it takes less time to perform calculations with fewer bits.
+Quantization focuses on representing data with fewer bits while also trying to preserve the precision of the original data. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size which makes it easier to store and reduces memory usage. Lower precision can also speedup inference because it takes less time to perform calculations with fewer bits.
 
-<Tip>
+Diffusers supports multiple quantization backends to make large diffusion models like [Flux](../api/pipelines/flux) more accessible. This guide shows how to use the [`~quantizers.PipelineQuantizationConfig`] class to quantize a pipeline during its initialization from a pretrained or non-quantized checkpoint.
 
-Interested in adding a new quantization method to Diffusers? Refer to the [Contribute new quantization method guide](https://huggingface.co/docs/transformers/main/en/quantization/contribute) to learn more about adding a new quantization method.
+## Pipeline-level quantization
 
-</Tip>
+There are two ways you can use [`~quantizers.PipelineQuantizationConfig`] depending on the level of control you want over the quantization specifications of each model in the pipeline.
 
-<Tip>
+- for more basic and simple use cases, you only need to define the `quant_backend`, `quant_kwargs`, and `components_to_quantize`
+- for more granular quantization control, provide a `quant_mapping` that provides the quantization specifications for the individual model components
 
-If you are new to the quantization field, we recommend you to check out these beginner-friendly courses about quantization in collaboration with DeepLearning.AI:
+### Simple quantization
 
-* [Quantization Fundamentals with Hugging Face](https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/)
-* [Quantization in Depth](https://www.deeplearning.ai/short-courses/quantization-in-depth/)
+Initialize [`~quantizers.PipelineQuantizationConfig`] with the following parameters.
 
-</Tip>
+- `quant_backend` specifies which quantization backend to use. Currently supported backends include: `bitsandbytes_4bit`, `bitsandbytes_8bit`, `gguf`, `quanto`, and `torchao`.
+- `quant_kwargs` contains the specific quantization arguments to use.
+- `components_to_quantize` specifies which components of the pipeline to quantize. Typically, you should quantize the most compute intensive components like the transformer. The text encoder is another component to consider quantizing if a pipeline has more than one such as [`FluxPipeline`]. The example below quantizes the T5 text encoder in [`FluxPipeline`] while keeping the CLIP model intact.
 
-## When to use what?
+```py
+import torch
+from diffusers import DiffusionPipeline
+from diffusers.quantizers import PipelineQuantizationConfig
 
-Diffusers currently supports the following quantization methods.
-- [BitsandBytes](./bitsandbytes)
-- [TorchAO](./torchao)
-- [GGUF](./gguf)
-- [Quanto](./quanto.md)
+pipeline_quant_config = PipelineQuantizationConfig(
+    quant_backend="bitsandbytes_4bit",
+    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
+    components_to_quantize=["transformer", "text_encoder_2"],
+)
+```
 
-[This resource](https://huggingface.co/docs/transformers/main/en/quantization/overview#when-to-use-what) provides a good overview of the pros and cons of different quantization techniques.
+Pass the `pipeline_quant_config` to [`~DiffusionPipeline.from_pretrained`] to quantize the pipeline.
 
-## Pipeline-level quantization
+```py
+pipe = DiffusionPipeline.from_pretrained(
+    "black-forest-labs/FLUX.1-dev",
+    quantization_config=pipeline_quant_config,
+    torch_dtype=torch.bfloat16,
+).to("cuda")
+
+image = pipe("photo of a cute dog").images[0]
+```
 
-Diffusers allows users to directly initialize pipelines from checkpoints that may contain quantized models ([example](https://huggingface.co/hf-internal-testing/flux.1-dev-nf4-pkg)). However, users may want to apply
-quantization on-the-fly when initializing a pipeline from a pre-trained and non-quantized checkpoint. You can
-do this with [`~quantizers.PipelineQuantizationConfig`].
+### quant_mapping
 
-Start by defining a `PipelineQuantizationConfig`:
+The `quant_mapping` argument provides more flexible options for how to quantize each individual component in a pipeline, like combining different quantization backends.
+
+Initialize [`~quantizers.PipelineQuantizationConfig`] and pass a `quant_mapping` to it. The `quant_mapping` allows you to specify the quantization options for each component in the pipeline such as the transformer and text encoder.
+
+The example below uses two quantization backends, [`~quantizers.QuantoConfig`] and [`transformers.BitsAndBytesConfig`], for the transformer and text encoder.
 
 ```py
 import torch
 from diffusers import DiffusionPipeline
+from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
 from diffusers.quantizers.quantization_config import QuantoConfig
 from diffusers.quantizers import PipelineQuantizationConfig
-from transformers import BitsAndBytesConfig
+from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
 
 pipeline_quant_config = PipelineQuantizationConfig(
     quant_mapping={
         "transformer": QuantoConfig(weights_dtype="int8"),
-        "text_encoder_2": BitsAndBytesConfig(
+        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, compute_dtype=torch.bfloat16
         ),
     }
 )
 ```
 
-Then pass it to [`~DiffusionPipeline.from_pretrained`] and run inference:
+There is a separate bitsandbytes backend in [Transformers](https://huggingface.co/docs/transformers/main_classes/quantization#transformers.BitsAndBytesConfig). You need to import and use [`transformers.BitsAndBytesConfig`] for components that come from Transformers. For example, `text_encoder_2` in [`FluxPipeline`] is a [`~transformers.T5EncoderModel`] from Transformers so you need to use [`transformers.BitsAndBytesConfig`] instead of [`diffusers.BitsAndBytesConfig`].
+
+> [!TIP]
+> Use the [simple quantization](#simple-quantization) method above if you don't want to manage these distinct imports or aren't sure where each pipeline component comes from.
+
+```py
+import torch
+from diffusers import DiffusionPipeline
+from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
+from diffusers.quantizers import PipelineQuantizationConfig
+from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
+
+pipeline_quant_config = PipelineQuantizationConfig(
+    quant_mapping={
+        "transformer": DiffusersBitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
+        "text_encoder_2": TransformersBitsAndBytesConfig(
+            load_in_4bit=True, compute_dtype=torch.bfloat16
+        ),
+    }
+)
+```
+
+Pass the `pipeline_quant_config` to [`~DiffusionPipeline.from_pretrained`] to quantize the pipeline.
 
 ```py
 pipe = DiffusionPipeline.from_pretrained(
@@ -77,52 +116,17 @@ pipe = DiffusionPipeline.from_pretrained(
 image = pipe("photo of a cute dog").images[0]
 ```
 
-This method allows for more granular control over the quantization specifications of individual
-model-level components of a pipeline. It also allows for different quantization backends for
-different components. In the above example, you used a combination of Quanto and BitsandBytes. However,
-one caveat of this method is that users need to know which components come from `transformers` to be able
-to import the right quantization config class.
+## Resources
 
-The other method is simpler in terms of experience but is
-less-flexible. Start by defining a `PipelineQuantizationConfig` but in a different way:
+Check out the resources below to learn more about quantization.
 
-```py
-pipeline_quant_config = PipelineQuantizationConfig(
-    quant_backend="bitsandbytes_4bit",
-    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
-    components_to_quantize=["transformer", "text_encoder_2"],
-)
-```
-
-This `pipeline_quant_config` can now be passed to [`~DiffusionPipeline.from_pretrained`] similar to the above example.
-
-In this case, `quant_kwargs` will be used to initialize the quantization specifications
-of the respective quantization configuration class of `quant_backend`. `components_to_quantize`
-is used to denote the components that will be quantized. For most pipelines, you would want to
-keep `transformer` in the list as that is often the most compute and memory intensive.
-
-The config below will work for most diffusion pipelines that have a `transformer` component present.
-In most case, you will want to quantize the `transformer` component as that is often the most compute-
-intensive part of a diffusion pipeline.
-
-```py
-pipeline_quant_config = PipelineQuantizationConfig(
-    quant_backend="bitsandbytes_4bit",
-    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
-    components_to_quantize=["transformer"],
-)
-```
+- If you are new to quantization, we recommend checking out the following beginner-friendly courses in collaboration with DeepLearning.AI.
 
-Below is a list of the supported quantization backends available in both `diffusers` and `transformers`:
+- [Quantization Fundamentals with Hugging Face](https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/)
+- [Quantization in Depth](https://www.deeplearning.ai/short-courses/quantization-in-depth/)
 
-* `bitsandbytes_4bit`
-* `bitsandbytes_8bit`
-* `gguf`
-* `quanto`
-* `torchao`
+- Refer to the [Contribute new quantization method guide](https://huggingface.co/docs/transformers/main/en/quantization/contribute) if you're interested in adding a new quantization method.
 
+- The Transformers quantization [Overview](https://huggingface.co/docs/transformers/quantization/overview#when-to-use-what) provides an overview of the pros and cons of different quantization backends.
 
-Diffusion pipelines can have multiple text encoders. [`FluxPipeline`] has two, for example. It's
-recommended to quantize the text encoders that are memory-intensive. Some examples include T5,
-Llama, Gemma, etc. In the above example, you quantized the T5 model of [`FluxPipeline`] through
-`text_encoder_2` while keeping the CLIP model intact (accessible through `text_encoder`).
+- Read the [Exploring Quantization Backends in Diffusers](https://huggingface.co/blog/diffusers-quantization) blog post for a brief introduction to each quantization backend, how to choose a backend, and combining quantization with other memory optimizations.

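The new guide demonstrates the bitsandbytes backends; the same simple interface also accepts the other listed backends. Below is a hedged sketch (not part of this commit) that quantizes only the transformer with the `quanto` backend. It assumes `quant_kwargs` are forwarded to `QuantoConfig` (consistent with the `weights_dtype="int8"` usage in the diff above) and that the loaded model exposes `get_memory_footprint` for a quick size check.

```py
# Sketch only, not from this commit: same PipelineQuantizationConfig interface as the
# guide, but with the quanto backend and only the transformer quantized. Assumes
# quant_kwargs initialize QuantoConfig and that get_memory_footprint() exists on the model.
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="quanto",
    quant_kwargs={"weights_dtype": "int8"},
    components_to_quantize=["transformer"],
)

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Rough check of how much memory the quantized component occupies.
print(f"transformer: {pipe.transformer.get_memory_footprint() / 1e9:.2f} GB")

image = pipe("photo of a cute dog").images[0]
```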
docs/source/ko/_toctree.yml

Lines changed: 1 addition & 1 deletion
@@ -175,7 +175,7 @@
 - local: optimization/mps
   title: Metal Performance Shaders (MPS)
 - local: optimization/habana
-  title: Habana Gaudi
+  title: Intel Gaudi
 title: 최적화된 하드웨어
 title: 추론 가속화와 메모리 줄이기
 - sections:

docs/source/ko/optimization/habana.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
 specific language governing permissions and limitations under the License.
 -->
 
-# Habana Gaudi에서 Stable Diffusion을 사용하는 방법
+# Intel Gaudi에서 Stable Diffusion을 사용하는 방법
 
 🤗 Diffusers는 🤗 [Optimum Habana](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion)를 통해서 Habana Gaudi와 호환됩니다.
