docs/source/en/api/pipelines/hunyuan_video.md
+1 line changed: 1 addition & 0 deletions
@@ -52,6 +52,7 @@ The following models are available for the image-to-video pipeline:
|[`Skywork/SkyReels-V1-Hunyuan-I2V`](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V)| Skywork's custom finetune of HunyuanVideo (de-distilled). Performs best at `97x544x960` resolution, `guidance_scale=1.0`, `true_cfg_scale=6.0` and a negative prompt. |
|[`hunyuanvideo-community/HunyuanVideo-I2V-33ch`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V)| Tencent's official HunyuanVideo 33-channel I2V model. Performs best at resolutions of 480, 720, 960, 1280. A higher `shift` value when initializing the scheduler is recommended (good values are between 7 and 20). |
|[`hunyuanvideo-community/HunyuanVideo-I2V`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V)| Tencent's official HunyuanVideo 16-channel I2V model. Performs best at resolutions of 480, 720, 960, 1280. A higher `shift` value when initializing the scheduler is recommended (good values are between 7 and 20). |
+|[`lllyasviel/FramePackI2V_HY`](https://huggingface.co/lllyasviel/FramePackI2V_HY)| lllyasviel's paper introducing a new technique for long-context video generation called [Framepack](https://arxiv.org/abs/2504.12626). |
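The table's `shift` recommendation can be applied by re-creating the scheduler from its config. Below is a minimal sketch (not part of this diff), assuming the standard `HunyuanVideoImageToVideoPipeline` loading flow and a placeholder input image:

```py
import torch
from diffusers import FlowMatchEulerDiscreteScheduler, HunyuanVideoImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = HunyuanVideoImageToVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-I2V", torch_dtype=torch.float16
)
# Apply the higher `shift` recommended above (good values are between 7 and 20).
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config, shift=7.0)
pipe.vae.enable_tiling()
pipe.to("cuda")

image = load_image("input.png")  # placeholder input frame
video = pipe(image=image, prompt="A cat walks on the grass", num_inference_steps=30).frames[0]
export_to_video(video, "output.mp4", fps=15)
```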
docs/source/en/quantization/bitsandbytes.md
+9 -3 lines changed: 9 additions & 3 deletions
@@ -48,7 +48,7 @@ For Ada and higher-series GPUs, we recommend changing `torch_dtype` to `torch.bf
```py
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
-
+import torch
from diffusers import AutoModel
from transformers import T5EncoderModel
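# Not part of this diff: a sketch of the dual-config pattern these imports feed into,
# assuming the Flux checkpoint used elsewhere in this guide.
quant_config = TransformersBitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)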
@@ -88,6 +88,8 @@ Setting `device_map="auto"` automatically fills all available space on the GPU(s
CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory.

```py
+from diffusers import FluxPipeline
+
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_8bit,
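# Not part of this diff: assuming the truncated `FluxPipeline.from_pretrained(...)` call above
# is completed with the quantized components, generation is the usual pipeline call.
prompt = "a photo of an astronaut riding a horse on the moon"
image = pipe(prompt, guidance_scale=3.5, num_inference_steps=30).images[0]
image.save("flux_8bit.png")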
@@ -132,7 +134,7 @@ For Ada and higher-series GPUs, we recommend changing `torch_dtype` to `torch.bf
```py
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
-
+import torch
from diffusers import AutoModel
from transformers import T5EncoderModel
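# Not part of this diff: a sketch of the 4-bit (NF4) config that pairs with these imports,
# using bfloat16 as the compute dtype per the tip above.
quant_config = DiffusersBitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer_4bit = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)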
@@ -171,6 +173,8 @@ Let's generate an image using our quantized models.
Setting `device_map="auto"` automatically fills all available space on the GPU(s) first, then the CPU, and finally, the hard drive (the absolute slowest option) if there is still not enough memory.

```py
+from diffusers import FluxPipeline
+
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_4bit,
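# Not part of this diff: once the 4-bit pipeline above is assembled, a quick way to check
# peak VRAM during generation (a sketch using plain PyTorch utilities).
torch.cuda.reset_peak_memory_stats()
image = pipe("a tiny robot watering a bonsai tree", num_inference_steps=30).images[0]
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")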
@@ -214,6 +218,8 @@ Check your memory footprint with the `get_memory_footprint` method:
print(model.get_memory_footprint())
```

+Note that this only tells you the memory footprint of the model params and does _not_ estimate the inference memory requirements.
+
Quantized models can be loaded from the [`~ModelMixin.from_pretrained`] method without needing to specify the `quantization_config` parameters:

```py
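# Not part of this diff: a sketch of the save/reload round trip described above,
# using a placeholder local path; the quantization config is serialized alongside the weights.
transformer_4bit.save_pretrained("./flux-transformer-nf4")
reloaded = AutoModel.from_pretrained("./flux-transformer-nf4", torch_dtype=torch.bfloat16)
print(reloaded.get_memory_footprint())  # params-only footprint, as noted above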
@@ -413,4 +419,4 @@ transformer_4bit.dequantize()
## Resources

* [End-to-end notebook showing Flux.1 Dev inference in a free-tier Colab](https://gist.github.com/sayakpaul/c76bd845b48759e11687ac550b99d8b4)
|**Floating point X-bit quantization**|`fpx_weight_only`|`fpX_eAwB` where `X` is the number of bits (1-7), `A` is exponent bits, and `B` is mantissa bits. Constraint: `X == A + B + 1`|
examples/dreambooth/README_hidream.md
+27 lines changed: 27 additions & 0 deletions
@@ -117,3 +117,30 @@ We provide several options for memory optimization:
* `--use_8bit_adam`: When enabled, we will use the 8bit version of AdamW provided by the `bitsandbytes` library.

Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/) of the `HiDreamImagePipeline` to know more about the model.
+
+## Using quantization
+
+You can quantize the base model with [`bitsandbytes`](https://huggingface.co/docs/bitsandbytes/index) to reduce memory usage. To do so, pass a JSON file path to `--bnb_quantization_config_path`. This file should hold the configuration to initialize `BitsAndBytesConfig`. Below is an example JSON file:
+
+```json
+{
+  "load_in_4bit": true,
+  "bnb_4bit_quant_type": "nf4"
+}
+```
+
+Below, we provide some numbers with and without the use of NF4 quantization when training:
+
+The reason we see some memory usage before device placement in the quantized case is that, by default, bnb-quantized models are placed on the GPU first.
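For reference, the JSON file above maps one-to-one onto keyword arguments of `BitsAndBytesConfig`. A minimal sketch (not part of this diff; the file name is a placeholder) of how such a file translates into the config object:

```py
import json

from diffusers import BitsAndBytesConfig

with open("bnb_nf4.json") as f:  # placeholder path to a JSON file like the one above
    bnb_config = BitsAndBytesConfig(**json.load(f))

print(bnb_config.load_in_4bit, bnb_config.bnb_4bit_quant_type)  # True nf4
```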