# What is LLM Compressor?

**LLM Compressor** is an easy-to-use library for optimizing large language models for deployment with vLLM. It provides a comprehensive toolkit for applying state-of-the-art compression algorithms to reduce model size, lower hardware requirements, and improve inference performance.

<p align="center">
  <img alt="LLM Compressor Flow" src="assets/llmcompressor-user-flows.png" width="100%" style="max-width: 100%;"/>
</p>

## Which challenges does LLM Compressor address?

Model optimization through quantization and pruning addresses the key challenges of deploying AI at scale:

| Challenge | How LLM Compressor helps |
|-----------|--------------------------|
| GPU and infrastructure costs | Reduces memory requirements by 50-75%, enabling deployment on fewer GPUs |
| Response latency | Reduces data-movement overhead because quantized weights load faster |
| Request throughput | Uses lower-precision tensor cores for faster computation |
| Energy consumption | Smaller models consume less power during inference |

For more information, see [Why use LLM Compressor?](./steps/why-llmcompressor.md).
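
The memory savings in the table above follow from simple arithmetic on bits per weight. The sketch below uses an illustrative 70B-parameter model and ignores activation memory, quantization scales, and runtime overhead:

```python
def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB, ignoring scales and runtime overhead."""
    return num_params * bits_per_weight / 8 / 1e9

params = 70e9                              # illustrative 70B-parameter model
fp16 = weight_memory_gb(params, 16)        # 140.0 GB at 16 bits per weight
int4 = weight_memory_gb(params, 4)         # 35.0 GB at 4 bits per weight
savings = 1 - int4 / fp16                  # 0.75, the upper end of the 50-75% range
print(f"FP16: {fp16:.0f} GB, W4: {int4:.0f} GB, savings: {savings:.0%}")
```

Halving the bit width halves weight memory, which is where the 50-75% range comes from; real deployments also carry KV cache and activation memory on top of this.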

## New in this release

Review the [LLM Compressor v0.9.0 release notes](https://github.com/vllm-project/llm-compressor/releases/tag/0.9.0) for details about new features. Highlights include:

!!! info "Batched Calibration Support"
    LLM Compressor now supports calibration with batch sizes greater than 1. A new `batch_size` argument has been added to the dataset arguments, providing an option to speed up quantization. The default `batch_size` is currently 1.
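
What a larger calibration batch size buys can be sketched in plain Python; the helper below is an illustration of the batching idea, not the library's implementation:

```python
def batched(samples, batch_size=1):
    """Group calibration samples into batches; each batch is one forward pass."""
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

calib = [f"sample_{i}" for i in range(8)]    # hypothetical calibration set
print(len(batched(calib, 1)))                # 8 forward passes (the default)
print(len(batched(calib, 4)))                # 2 forward passes
```

Fewer, larger forward passes over the calibration set are the source of the quantization speedup, at the cost of more memory per pass.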

!!! info "New Model-Free PTQ Pathway"
    A new model-free PTQ pathway, `model_free_ptq`, has been added to LLM Compressor. This pathway lets you quantize a model without requiring a Hugging Face model definition and is especially useful in cases where `oneshot` may fail. It currently supports data-free pathways only, such as FP8 quantization, and was used to quantize the Mistral Large 3 model. Additional examples illustrate how LLM Compressor can be used for Kimi K2.

!!! info "Extended KV Cache and Attention Quantization Support"
    LLM Compressor now supports attention quantization. KV cache quantization, which previously supported only per-tensor scales, has been extended to support any quantization scheme, including a new per-head quantization scheme. Support for these checkpoints is ongoing in vLLM, and scripts to get started have been added to the [experimental](https://github.com/vllm-project/llm-compressor/tree/main/experimental) folder.

!!! info "Generalized AWQ Support"
    The `AWQModifier` has been updated to support quantization schemes beyond W4A16 (for example, W4AFP8). In particular, AWQ no longer requires each config group to use the same settings for `group_size`, `symmetric`, and `num_bits`.

!!! info "AutoRound Quantization Support"
    Added `AutoRoundModifier` for quantization using AutoRound, an advanced post-training algorithm that optimizes rounding and clipping ranges through sign-gradient descent. This approach combines the efficiency of post-training quantization with the adaptability of parameter tuning, delivering robust compression for large language models while maintaining strong performance.
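
The idea behind optimized rounding can be shown with a toy example. AutoRound itself tunes rounding via sign-gradient descent; the sketch below brute-forces the rounding directions instead, just to show that round-to-nearest is not always optimal for the layer's *output* (all numbers are hypothetical):

```python
import itertools
import math

# Toy layer: one calibration input and four weights (hypothetical values).
x = [1.0, 1.0, 1.0, 1.0]
w = [0.4, 0.4, 0.4, 0.4]

ref = sum(xi * wi for xi, wi in zip(x, w))  # full-precision output, ~1.6

def quant_output(choices):
    """Layer output when each weight is rounded down (0) or up (1) to the integer grid."""
    wq = [math.floor(wi) + c for wi, c in zip(w, choices)]
    return sum(xi * wqi for xi, wqi in zip(x, wq))

# Round-to-nearest sends every 0.4 down to 0, collapsing the output to 0.
rtn_err = abs(quant_output((0, 0, 0, 0)) - ref)

# Choosing rounding directions to minimize output error does much better.
best_err = min(abs(quant_output(c) - ref)
               for c in itertools.product((0, 1), repeat=len(w)))
print(rtn_err, best_err)
```

Rounding two of the four weights up yields an output of 2.0 with error ~0.4, versus error ~1.6 for round-to-nearest; learning such rounding decisions from data is the core of the AutoRound family of methods.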

!!! info "Experimental MXFP4 Support"
    Models can now be quantized using an MXFP4 preset scheme. Examples can be found under the experimental folder. This pathway is still experimental because support and validation in vLLM are still in progress.

## Supported algorithms and techniques

| Algorithm | Description | Use Case |
|-----------|-------------|----------|
| **RTN** (Round-to-Nearest) | Fast baseline quantization | Quick compression with minimal setup |
| **GPTQ** | Weighted quantization with calibration | High-accuracy 4-bit and 8-bit weight quantization |
| **AWQ** | Activation-aware weight quantization | Preserves accuracy for important weights |
| **SmoothQuant** | Outlier handling for W8A8 | Improved activation quantization |
| **SparseGPT** | Pruning with quantization | 2:4 sparsity patterns |
| **SpinQuant** | Rotation-based transforms | Improved low-bit accuracy |
| **QuIP** | Incoherence processing | Advanced quantization preprocessing |
| **FP8 KV Cache** | KV cache quantization | Long-context inference on Hopper-class and newer GPUs |
| **AutoRound** | Optimizes rounding and clipping ranges via sign-gradient descent | Broad compatibility |
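
As a concrete reference point for the table, here is a minimal sketch of RTN, the baseline algorithm listed above; it is an illustration of the technique, not LLM Compressor's implementation:

```python
def rtn_quantize(weights, num_bits=8):
    """Symmetric round-to-nearest quantization with a single per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1                      # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate weights from integer codes and the scale."""
    return [qi * scale for qi in q]

w = [0.05, -1.27, 0.64, 0.002]                          # hypothetical weights
q, scale = rtn_quantize(w)
print(q)                                                # integer codes, e.g. [5, -127, 64, 0]
print(dequantize(q, scale))                             # close to w; tiny weights collapse to 0
```

Every other algorithm in the table improves on this baseline by using calibration data, outlier handling, or learned transforms to reduce the rounding error that RTN simply accepts.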

## Supported quantization schemes

LLM Compressor supports applying multiple quantization formats within a single model.

| Format | Targets | Compute Capability | Use Case |
|--------|---------|-------------------|----------|
| **W4A16/W8A16** | Weights | 8.0 (Ampere and up) | Optimized latency on older hardware |
| **W8A8-INT8** | Weights and activations | 7.5 (Turing and up) | Balanced performance and compatibility |
| **W8A8-FP8** | Weights and activations | 8.9 (Hopper and up) | High throughput on modern GPUs |
| **NVFP4/MXFP4** | Weights and activations | 10.0 (Blackwell) | Maximum compression on latest hardware |
| **W4AFP8** | Weights and activations | 8.9 (Hopper and up) | Low-bit weights with dynamic FP8 activations |
| **W4AINT8** | Weights and activations | 7.5 (Turing and up) | Low-bit weights with dynamic INT8 activations |
| **2:4 Sparse** | Weights | 8.0 (Ampere and up) | Sparsity-accelerated inference |

!!! note
    The listed compute capability is the minimum GPU architecture required for hardware acceleration.