1 parent 674b60a commit 42673fa
docs/source/en/quantization/bitsandbytes.md
````diff
@@ -132,7 +132,7 @@ You can also save the serialized 8-bit models locally with [`~ModelMixin.save_pr
 
 Quantizing a model in 4-bit reduces your memory-usage by 4x.
 
-bitsandbytes` is supported in both Transformers and Diffusers, so you can can quantize both the
+bitsandbytes is supported in both Transformers and Diffusers, so you can quantize both the
 [`FluxTransformer2DModel`] and [`~transformers.T5EncoderModel`].
 
 ```py
````
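The diff context cuts off at the opening of a Python block (file line 138), so the example itself is not shown here. A minimal sketch of a 4-bit load along the lines the surrounding prose describes, assuming the `BitsAndBytesConfig` classes exported by both libraries and the `black-forest-labs/FLUX.1-dev` checkpoint (not necessarily the exact code in this file):

```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig, T5EncoderModel

# Quantize the T5 text encoder (a Transformers model) to 4-bit.
text_encoder_2_4bit = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=TransformersBitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.float16,
)

# Quantize the Flux transformer (a Diffusers model) to 4-bit.
transformer_4bit = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=DiffusersBitsAndBytesConfig(load_in_4bit=True),
    torch_dtype=torch.float16,
)
```

Each model loads at roughly a quarter of its fp16 footprint, in line with the 4x memory reduction the prose claims.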