
Commit da61e8f

committed: update

1 parent 0ac52d6

2 files changed (+5, -5 lines)


docs/source/en/quantization/gguf.md

Lines changed: 2 additions & 2 deletions
@@ -25,9 +25,9 @@ pip install -U gguf
 
 Since GGUF is a single file format, use [`~FromSingleFileMixin.from_single_file`] to load the model and pass in the [`GGUFQuantizationConfig`].
 
-When using GGUF checkpoints, the quantized weights remain in a low memory `dtype`(typically `torch.unint8`) and are dynamically dequantized and cast to the configured `compute_dtype` during each module's forward pass through the model. The `GGUFQuantizationConfig` allows you to set the `compute_dtype`.
+When using GGUF checkpoints, the quantized weights remain in a low-memory `dtype` (typically `torch.uint8`) and are dynamically dequantized and cast to the configured `compute_dtype` during each module's forward pass through the model. The `GGUFQuantizationConfig` allows you to set the `compute_dtype`.
 
-The functions used for dynamic dequantizatation are based on the great work done by [city96](https://github.com/city96/ComfyUI-GGUF), who created the Pytorch ports of the original (`numpy`)[https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/quants.py] implementation by [compilade](https://github.com/compilade).
+The functions used for dynamic dequantization are based on the great work done by [city96](https://github.com/city96/ComfyUI-GGUF), who created the PyTorch ports of the original [`numpy`](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/quants.py) implementation by [compilade](https://github.com/compilade).
 
 ```python
 import torch
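
For reference, the loading pattern the updated paragraph describes looks roughly like the sketch below. This is a minimal example, not part of the commit: the `city96/FLUX.1-dev-gguf` checkpoint URL and the Q2_K quantization level are illustrative choices.

```python
import torch

from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# Illustrative single-file GGUF checkpoint (from city96's repo linked above).
ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"

# Weights stay in their quantized low-memory dtype; each module's forward
# pass dequantizes them on the fly to the configured `compute_dtype`.
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```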

docs/source/en/quantization/overview.md

Lines changed: 3 additions & 3 deletions
@@ -33,8 +33,8 @@ If you are new to the quantization field, we recommend you to check out these be
 ## When to use what?
 
 Diffusers currently supports the following quantization methods.
-- [BitsandBytes]()
-- [TorchAO]()
-- [GGUF]()
+- [BitsandBytes](./bitsandbytes.md)
+- [TorchAO](./torchao.md)
+- [GGUF](./gguf.md)
 
 [This resource](https://huggingface.co/docs/transformers/main/en/quantization/overview#when-to-use-what) provides a good overview of the pros and cons of different quantization techniques.
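
For reference, all three linked methods share the same entry point: a `quantization_config` passed at load time. Below is a minimal sketch using bitsandbytes 4-bit loading; the FLUX model ID is an illustrative assumption, and TorchAO or GGUF would substitute their own config classes.

```python
import torch

from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# 4-bit bitsandbytes config; TorchAO and GGUF follow the same
# `quantization_config=...` pattern with their own config classes.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative model ID
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```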
