Open-source FP8 quantization project for producing compressed checkpoints that can be run in vLLM; see https://github.com/vllm-project/vllm/pull/4332 for the inference implementation.
## How to quantize a model
Install this repo's requirements:
```bash
pip install -r requirements.txt
```
Command to produce a `Meta-Llama-3-8B-Instruct-FP8` quantized LLM:
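A minimal sketch of such a command is shown below; the script name, flags, and model path are assumptions for illustration only, since the exact invocation is not reproduced here.

```bash
# Hypothetical sketch -- the script name and flags are assumptions,
# not confirmed by this README. Adjust to the repo's actual entrypoint.
python quantize.py \
    --model-id meta-llama/Meta-Llama-3-8B-Instruct \
    --save-dir Meta-Llama-3-8B-Instruct-FP8
```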
## How to run quantized models

[vLLM](https://github.com/vllm-project/vllm) has full support for FP8 models quantized with this package. Install vLLM with: `pip install vllm>=0.4.2`
Then simply pass the quantized checkpoint directly to vLLM's entrypoints! It will detect the checkpoint format from the `quantization_config` in the model's `config.json`.
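For example, a minimal sketch of serving the checkpoint with vLLM's OpenAI-compatible server, assuming the quantized model was saved to a local `Meta-Llama-3-8B-Instruct-FP8` directory:

```bash
# Serve the FP8 checkpoint with vLLM's OpenAI-compatible API server.
# The model path is an assumption: point it at your quantized checkpoint.
python -m vllm.entrypoints.openai.api_server \
    --model Meta-Llama-3-8B-Instruct-FP8
```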