<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

-->

# Quanto

[Quanto](https://github.com/huggingface/optimum-quanto) is a PyTorch quantization backend for [Optimum](https://huggingface.co/docs/optimum/en/index). It has been designed with versatility and simplicity in mind:

- All features are available in eager mode (works with non-traceable models)
- Supports quantization-aware training
- Quantized models are compatible with `torch.compile`
- Quantized models are device agnostic (e.g. CUDA, XPU, MPS, CPU)

To use the Quanto backend, first install `optimum-quanto>=0.2.6` and `accelerate`:

```shell
pip install optimum-quanto accelerate
```

Now you can quantize a model by passing a `QuantoConfig` object to the `from_pretrained()` method. Although the Quanto library allows quantizing `nn.Conv2d` and `nn.LayerNorm` modules, Diffusers currently only supports quantizing the weights of a model's `nn.Linear` layers. The following snippet demonstrates how to apply `float8` quantization with Quanto.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"
quantization_config = QuantoConfig(weights_dtype="float8")
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt, num_inference_steps=50, guidance_scale=4.5, max_sequence_length=512
).images[0]
image.save("output.png")
```

## Skipping Quantization on specific modules

It is possible to skip quantization on certain modules using the `modules_to_not_convert` argument in `QuantoConfig`. Ensure that the modules passed to this argument match the keys of the modules in the `state_dict`.

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"
quantization_config = QuantoConfig(weights_dtype="float8", modules_to_not_convert=["proj_out"])
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
```
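
If you are unsure which names to pass, you can list the model's `nn.Linear` modules, whose names correspond to the prefixes of the `state_dict` keys. The snippet below is a minimal sketch of that inspection; it loads the transformer unquantized purely to enumerate candidate names such as `proj_out`.

```python
import torch
from diffusers import FluxTransformer2DModel

model_id = "black-forest-labs/FLUX.1-dev"

# Load the transformer without quantization just to inspect its module names.
transformer = FluxTransformer2DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)

# Collect the names of all nn.Linear modules; these are the names you can
# pass to `modules_to_not_convert` (e.g. "proj_out").
linear_names = [
    name
    for name, module in transformer.named_modules()
    if isinstance(module, torch.nn.Linear)
]
print(linear_names[:10])
```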

## Using `from_single_file` with the Quanto Backend

`QuantoConfig` is compatible with `~FromOriginalModelMixin.from_single_file`.

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

ckpt_path = "https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/flux1-dev.safetensors"
quantization_config = QuantoConfig(weights_dtype="float8")
transformer = FluxTransformer2DModel.from_single_file(ckpt_path, quantization_config=quantization_config, torch_dtype=torch.bfloat16)
```

## Saving Quantized models

Diffusers supports serializing Quanto models with the `~ModelMixin.save_pretrained` method.

The serialization and loading requirements differ for models quantized directly with the Quanto library and for models quantized with Diffusers using Quanto as the backend. It is currently not possible to load models quantized directly with Quanto into Diffusers using `~ModelMixin.from_pretrained`.

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"
quantization_config = QuantoConfig(weights_dtype="float8")
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
# save the quantized model to reuse it later
transformer.save_pretrained("<your quantized model save path>")

# reload the quantized model with
model = FluxTransformer2DModel.from_pretrained("<your quantized model save path>")
```
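
The reloaded quantized transformer can be dropped into a pipeline like any other transformer. The following is a minimal sketch, continuing with the placeholder save path from above.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Reload the quantized transformer saved earlier (the path is a placeholder).
transformer = FluxTransformer2DModel.from_pretrained("<your quantized model save path>")

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe("A cat holding a sign that says hello world").images[0]
```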

## Using `torch.compile` with Quanto

Currently the Quanto backend supports `torch.compile` for the following quantization types:

- `int8` weights

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"
quantization_config = QuantoConfig(weights_dtype="int8")
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
transformer = torch.compile(transformer, mode="max-autotune", fullgraph=True)

pipe = FluxPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.to("cuda")
image = pipe("A cat holding a sign that says hello").images[0]
image.save("flux-quanto-compile.png")
```

## Supported Quantization Types

### Weights

- float8
- int8
- int4
- int2
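
Each of these dtypes is selected through the same `weights_dtype` argument shown earlier. As a minimal sketch, assuming the same FLUX transformer as in the examples above, `int4` quantization would look like this:

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

model_id = "black-forest-labs/FLUX.1-dev"

# Same pattern as the float8 example above; only the weights_dtype changes.
quantization_config = QuantoConfig(weights_dtype="int4")
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)
```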