Commit 2a9330c: "Update README.md" (#21)
Parent: 0d40b99
1 file changed: README.md (+8, -1)
@@ -1,9 +1,16 @@
 # AutoFP8

-Open-source FP8 quantization library for producing compressed checkpoints for running in vLLM - see https://github.com/vllm-project/vllm/pull/4332 for details on the implementation for inference.
+Open-source FP8 quantization library for producing compressed checkpoints for running in vLLM - see https://github.com/vllm-project/vllm/pull/4332 for details on the implementation for inference. This library focuses on providing quantized weight, activation, and kv cache scales for FP8_E4M3 precision.
+
+[FP8 Model Collection from Neural Magic](https://huggingface.co/collections/neuralmagic/fp8-llms-for-vllm-666742ed2b78b7ac8df13127) with many accurate (<1% accuracy drop) FP8 checkpoints ready for inference with vLLM.

 > NOTE: AutoFP8 is in early beta and subject to change

+<p align="center">
+    <img src="https://github.com/neuralmagic/AutoFP8/assets/3195154/c6bb9ddb-1bc9-48df-bf5f-9d7916dbd1f9" width="40%" />
+    <img src="https://github.com/neuralmagic/AutoFP8/assets/3195154/2e30d4c0-340a-4527-8ff7-e8d48a8807ca" width="40%" />
+</p>
+
 ## Installation

 Clone this repo and install it from source:
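The per-tensor FP8_E4M3 scaling that the added README text describes can be sketched in plain Python. This is a minimal illustration of the idea only, assuming symmetric per-tensor scaling against E4M3's maximum representable magnitude of 448; the function names here are hypothetical, not AutoFP8's actual API:

```python
FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3


def fp8_e4m3_scale(values):
    """Per-tensor scale mapping the absolute max of `values` onto the E4M3 range.

    Hypothetical helper for illustration; AutoFP8 computes scales on tensors.
    """
    amax = max(abs(v) for v in values)
    # Clamp to avoid a zero scale for an all-zero tensor
    return max(amax, 1e-12) / FP8_E4M3_MAX


def quantize(values, scale):
    """Divide by the scale and clamp into the representable FP8 range.

    A real kernel would also round each result to the nearest
    representable E4M3 value; that step is omitted here.
    """
    return [max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, v / scale)) for v in values]


weights = [0.5, -2.0, 3.5, -0.25]
scale = fp8_e4m3_scale(weights)   # 3.5 / 448.0 = 0.0078125
q = quantize(weights, scale)      # [64.0, -256.0, 448.0, -32.0]
```

At inference time the scale is stored alongside the quantized tensor so values can be dequantized by multiplying back (`q[i] * scale`); the same pattern applies to the activation and kv cache scales the README mentions.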
