`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:
- Comprehensive set of quantization algorithms for weight-only and activation quantization
- Seamless integration with Hugging Face models and repositories
- `safetensors`-based file format compatible with `vllm`
- Large model support via `accelerate`
✨ Read the announcement blog here! ✨
💬 Join us on the vLLM Community Slack and share your questions, thoughts, or ideas in:
- `#sig-quantization`
- `#llm-compressor`
Big updates have landed in LLM Compressor! To get a more in-depth look, check out the deep-dive.
Some of the exciting new features include:
- Qwen3 Next and Qwen3 VL MoE Quantization Support: Quantize the Qwen3 Next and Qwen3 VL MoE models and seamlessly run them in vLLM. Examples of NVFP4 and FP8 quantization have been added for Qwen3-Next-80B-A3B-Instruct. For Qwen3 VL MoE, support has been added for the data-free pathway, specifically FP8 quantization (e.g., channel-wise and block-wise quantization). NOTE: these models are not supported in `transformers<=4.56.2`. You may need to install `transformers` from source.
- Quantization with Multiple Modifiers: Multiple quantization modifiers can now be applied to the same model for mixed-precision quantization, for example applying AWQ W4A16 to a model's `self_attn` layers and GPTQ W8A8 to its `mlp` layers. This is an advanced usage of `llm-compressor` and an active area of research. See the non-uniform quantization support section for more detail and example usage; a minimal sketch also follows this list.
- QuIP and SpinQuant-style Transforms: The newly added `QuIPModifier` and `SpinQuantModifier` allow users to quantize their models after injecting Hadamard weights into the computation graph, reducing quantization error and greatly improving accuracy recovery for low-bit weight and activation quantization.
- DeepSeekV3-style Block Quantization Support: This allows for more efficient compression of large language models without needing a calibration dataset. Quantize a Qwen3 model to W8A8.
- Llama4 Quantization Support: Quantize a Llama4 model to W4A16 or NVFP4. The checkpoint produced can seamlessly run in vLLM.
- FP4 Quantization - now with MoE and non-uniform support: Quantize weights and activations to FP4 and seamlessly run the compressed model in vLLM. Model weights and activations are quantized following the NVFP4 configuration. See examples of fp4 activation support, MoE support, and Non-uniform quantization support where some layers are selectively quantized to fp8 for better recovery. You can also mix other quantization schemes, such as int8 and int4.
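As a rough illustration of the multiple-modifier pathway, the sketch below combines `AWQModifier` and `GPTQModifier` in a single recipe. It reuses the quick-start model and dataset shown later in this README; the regex-style `targets` patterns and the output directory are assumptions made for this example, so consult the non-uniform quantization example for the maintained recipe.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier
from llmcompressor.modifiers.quantization import GPTQModifier

# Sketch: AWQ W4A16 on attention projections, GPTQ W8A8 on MLP layers.
# The regex targets below are illustrative assumptions, not the official recipe.
recipe = [
    AWQModifier(targets=["re:.*self_attn.*"], scheme="W4A16", ignore=["lm_head"]),
    GPTQModifier(targets=["re:.*mlp.*"], scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-mixed-precision",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```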
- Activation Quantization: W8A8 (int8 and fp8)
- Mixed Precision: W4A16, W8A16, NVFP4 (W4A4 and W4A16 support)
- 2:4 Semi-structured and Unstructured Sparsity
- Simple PTQ
- GPTQ
- AWQ
- SmoothQuant
- SparseGPT
Please refer to compression_schemes.md for detailed information about available optimization schemes and their use cases.
```bash
pip install llmcompressor
```
Applying quantization with `llmcompressor`:
- Activation quantization to `int8`
- Activation quantization to `fp8` (a data-free sketch follows this list)
- Activation quantization to `fp4`
- Weight-only quantization to `fp4`
- Weight-only quantization to `int4` using GPTQ
- Weight-only quantization to `int4` using AWQ
- Quantizing MoE LLMs
- Quantizing Vision-Language Models
- Quantizing Audio-Language Models
- Quantizing Models Non-uniformly
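For the data-free `fp8` path mentioned above, a minimal sketch might look like the following; the model name and output directory are adapted from the quick start below as assumptions, and the maintained fp8 example may differ in its exact arguments.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Sketch: FP8 weights with dynamic per-token FP8 activations.
# FP8_DYNAMIC is data-free, so no calibration dataset is passed to oneshot.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-FP8-Dynamic",
)
```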
Deep dives into advanced usage of `llmcompressor`:
Let's quantize `TinyLlama` with 8-bit weights and activations using the GPTQ and SmoothQuant algorithms.
Note that the model can be swapped for a local or remote HF-compatible checkpoint and the recipe
may be changed to target different quantization algorithms or formats.
Quantization is applied by selecting an algorithm and calling the `oneshot` API.
```python
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot

# Select quantization algorithm. In this case, we:
# * apply SmoothQuant to make the activations easier to quantize
# * quantize the weights to int8 with GPTQ (static per channel)
# * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
# * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```
The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:
Install:
```bash
pip install vllm
```
Run:
```python
from vllm import LLM

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
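Optionally, you can make the smoke test more explicit by passing sampling parameters; the sketch below is a variant of the same check, and the `temperature` and `max_tokens` values are arbitrary choices rather than part of the original example.

```python
from vllm import LLM, SamplingParams

# Load the compressed checkpoint produced by the oneshot run above.
model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")

# Generate with explicit (illustrative) sampling settings.
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = model.generate("My name is", params)
print(outputs[0].outputs[0].text)
```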
- If you have any questions or requests, open an issue and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.
If you find LLM Compressor useful in your research or projects, please consider citing it:
```bibtex
@software{llmcompressor2024,
  title={{LLM Compressor}},
  author={Red Hat AI and vLLM Project},
  year={2024},
  month={8},
  url={https://github.com/vllm-project/llm-compressor},
}
```