
Commit 42673fa

ariG23498 and stevhliu authored
Update docs/source/en/quantization/bitsandbytes.md
Co-authored-by: Steven Liu <[email protected]>
1 parent 674b60a commit 42673fa

File tree

1 file changed: +1 −1

docs/source/en/quantization/bitsandbytes.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -132,7 +132,7 @@ You can also save the serialized 8-bit models locally with [`~ModelMixin.save_pr
 
 Quantizing a model in 4-bit reduces your memory-usage by 4x.
 
-bitsandbytes` is supported in both Transformers and Diffusers, so you can can quantize both the
+bitsandbytes is supported in both Transformers and Diffusers, so you can can quantize both the
 [`FluxTransformer2DModel`] and [`~transformers.T5EncoderModel`].
 
 ```py
````
