The LLM Compressor examples are organized primarily by quantization scheme. Each folder contains model-specific examples showing how to apply that quantization scheme to a particular model.

Some examples are additionally grouped by model type, such as:

- `multimodal_audio`
- `multimodal_vision`
- `quantizing_moe`

Other examples are grouped by algorithm, such as:

- `awq`
- `autoround`

## How to find the right example

- If you are interested in quantizing a specific model, start by browsing the model-type folders (for example, `multimodal_audio`, `multimodal_vision`, or `quantizing_moe`).
- If you don’t see your model there, decide which quantization scheme you want to use (e.g., FP8, FP4, INT4, INT8, or KV cache / attention quantization) and look in the corresponding `quantization_*` folder.
- Each quantization scheme folder contains at least one LLaMA 3 example, which can be used as a general reference for other models.

## Where to start if you’re unsure

If you’re unsure which quantization scheme to use, a good starting point is a data-free pathway, such as `w8a8_fp8`, found under `quantization_w8a8_fp8`. For more details on available schemes and when to use them, see the Compression Schemes [guide](https://docs.vllm.ai/projects/llm-compressor/en/latest/guides/compression_schemes/).
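Schemes like `w8a8_fp8` are data-free because the quantization scale can be derived from the tensor itself rather than from a calibration set. The pure-Python sketch below (hypothetical helper names, not the llm-compressor implementation, with a symmetric 8-bit integer grid standing in for the FP8 value grid) illustrates the dynamic per-token case:

```python
# Illustrative sketch of dynamic per-token quantization (not the real
# llm-compressor code). The scale for each activation row is computed
# from the row itself, so no calibration data is needed. An int8-style
# symmetric grid stands in for the FP8 value grid.

def dynamic_quantize(row, qmax=127):
    """Quantize one token's activations with a scale derived from the row."""
    scale = max(abs(v) for v in row) / qmax or 1.0  # avoid a zero scale
    return [round(v / scale) for v in row], scale

def dequantize(q, scale):
    return [v * scale for v in q]

activations = [[0.1, -2.0, 0.5], [4.0, 0.25, -1.0]]  # toy per-token rows
for row in activations:
    q, s = dynamic_quantize(row)
    approx = dequantize(q, s)
    # round-to-nearest keeps per-element error within half a quantization step
    assert max(abs(a - b) for a, b in zip(row, approx)) <= s / 2 + 1e-9
```

Weight quantization in these schemes is analogous but static: scales are computed once from the weights at compression time, which is why no dataset is required.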

## Need help?

If you don’t see your model or aren’t sure which quantization scheme applies, feel free to open an issue and someone from the community will be happy to help.

!!! note
    We are currently updating and improving our documentation and examples structure. Feedback is very welcome during this transition.
`examples/awq/README.md` (1 addition, 1 deletion):

```diff
@@ -1,4 +1,4 @@
-# Quantizing Models with Activation-Aware Quantization (AWQ) #
+# AWQ Quantization #
 
 Activation Aware Quantization (AWQ) is a state-of-the-art technique to quantize the weights of large language models which involves using a small calibration dataset to calibrate the model. The AWQ algorithm utilizes calibration data to derive scaling factors which reduce the dynamic range of weights while minimizing accuracy loss to the most salient weight values.
```
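The scaling trick at the heart of AWQ can be sketched in a few lines of pure Python (a toy illustration with hypothetical names, not the algorithm shipped in this repo): scale each weight input channel by a factor derived from calibration activations and fold the inverse into the activations, so the matrix product is mathematically unchanged while salient channels occupy more of the quantization grid.

```python
# Toy AWQ-style channel scaling (illustrative only). The transform itself
# is exact; accuracy is gained later because only w_scaled gets quantized.

def channel_scales(calib_acts, alpha=0.5):
    # Per-input-channel mean |activation|, raised to alpha (AWQ-style heuristic).
    n = len(calib_acts)
    return [(sum(abs(row[j]) for row in calib_acts) / n) ** alpha
            for j in range(len(calib_acts[0]))]

def matvec(x, w):
    # w[i][o]: weight from input channel i to output o
    return [sum(x[i] * w[i][o] for i in range(len(x)))
            for o in range(len(w[0]))]

calib = [[2.0, 0.1], [4.0, 0.3]]   # hypothetical calibration activations
w = [[0.5, -1.0], [2.0, 0.25]]     # 2 input channels -> 2 outputs
s = channel_scales(calib)

x = [1.0, -2.0]
w_scaled = [[w[i][o] * s[i] for o in range(2)] for i in range(2)]
x_scaled = [x[i] / s[i] for i in range(2)]

y, y_eq = matvec(x, w), matvec(x_scaled, w_scaled)
assert all(abs(a - b) < 1e-9 for a, b in zip(y, y_eq))  # product preserved
```

In the actual algorithm the inverse scale is folded into the preceding operation rather than applied at runtime, and the scaling exponent is searched per layer to minimize quantization error.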
`examples/big_models_with_sequential_onloading/README.md` (2 additions, 1 deletion):

```diff
@@ -1,4 +1,5 @@
-# Big Modeling with Sequential Onloading #
+# Big Model Quantization with Sequential Onloading
+
 
 ## What is Sequential Onloading? ##
 Sequential onloading is a memory-efficient approach for compressing large language models (LLMs) using only a single GPU. Instead of loading the entire model into memory—which can easily require hundreds of gigabytes—this method loads and compresses one layer at a time. The outputs are offloaded before the next layer is processed, dramatically reducing peak memory usage while maintaining high compression fidelity.
```
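The load-compress-offload loop described above can be sketched as follows (hypothetical names and a dummy compression step; the real implementation moves actual model layers between CPU and GPU):

```python
# Sketch of the sequential-onloading pattern: at most one layer's weights
# are resident at a time; each layer is loaded, compressed, and offloaded
# before the next one is touched.

def compress(weights):
    # Stand-in for a real compression step such as quantization.
    return [round(w, 1) for w in weights]

def sequential_onload_compress(layer_loaders):
    compressed, resident, peak = {}, {}, 0
    for name, load in layer_loaders.items():
        resident[name] = load()            # onload one layer to the "device"
        peak = max(peak, len(resident))
        compressed[name] = compress(resident[name])
        del resident[name]                 # offload before the next layer
    return compressed, peak

layers = {f"layer{i}": (lambda i=i: [0.123 * i, 0.456 * i]) for i in range(3)}
result, peak = sequential_onload_compress(layers)
assert peak == 1  # peak residency is one layer, not the whole model
```

The key property is that peak memory is bounded by the largest single layer plus bookkeeping, rather than by the full model.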
`examples/model_free_ptq/README.md` (1 addition, 1 deletion):

```diff
@@ -1,4 +1,4 @@
-# Quantizing models without a model definition
+# Model-free Quantization
 
 `model_free_ptq` provides a PTQ pathway for data-free schemes (such as FP8 Dynamic Per Token or FP8 Block). Specifically, this pathway removes the requirement for a model definition or the need to load the model through transformers. If you are interested in applying a data-free scheme, there are two key scenarios in which applying this pathway may make sense for your model:
```