34 changes: 34 additions & 0 deletions examples/README.md
@@ -0,0 +1,34 @@
---
weight: -4
---

# LLM Compressor Examples

The LLM Compressor examples are organized primarily by quantization scheme. Each folder contains model-specific examples showing how to apply that quantization scheme to a particular model.

Some examples are additionally grouped by model type, such as:
- `multimodal_audio`
- `multimodal_vision`
- `quantizing_moe`

Other examples are grouped by algorithm, such as:
- `awq`
- `autoround`

## How to find the right example

- If you are interested in quantizing a specific model, start by browsing the model-type folders (for example, `multimodal_audio`, `multimodal_vision`, or `quantizing_moe`).
- If you don’t see your model there, decide which quantization scheme you want to use (e.g., FP8, FP4, INT4, INT8, or KV cache / attention quantization) and look in the corresponding `quantization_***` folder.
- Each quantization scheme folder contains at least one LLaMA 3 example, which can be used as a general reference for other models.

## Where to start if you’re unsure

If you’re unsure which quantization scheme to use, a good starting point is a data-free pathway, such as `w8a8_fp8`, found under `quantization_w8a8_fp8`. For more details on available schemes and when to use them, see the
[Compression Schemes guide](../guides/compression_schemes.md).
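
As a rough sketch, a data-free FP8 dynamic recipe might look like the following. The stage and field names mirror the recipe convention used elsewhere in this repository, but treat the exact values as illustrative assumptions rather than a canonical recipe:

```yaml
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      # FP8 dynamic per-token activation quantization; no calibration data needed
      targets: ["Linear"]
      scheme: FP8_DYNAMIC
      ignore: ["lm_head"]
```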

## Need help?

If you don’t see your model or aren’t sure which quantization scheme applies, feel free to open an issue and someone from the community will be happy to help.

!!! note
We are currently updating and improving our documentation and examples structure. Feedback is very welcome during this transition.
2 changes: 1 addition & 1 deletion examples/awq/README.md
@@ -1,4 +1,4 @@
# Quantizing Models with Activation-Aware Quantization (AWQ) #
# AWQ Quantization #

Activation-Aware Quantization (AWQ) is a state-of-the-art technique for quantizing the weights of large language models using a small calibration dataset. The AWQ algorithm uses the calibration data to derive scaling factors that reduce the dynamic range of the weights while minimizing accuracy loss on the most salient weight values.
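
As an illustrative sketch, an AWQ W4A16 recipe might resemble the block below. The `AWQModifier` name follows the modifier convention used in this repository, but the exact fields shown are assumptions; see the example scripts in this folder for authoritative usage:

```yaml
awq_stage:
  quant_modifiers:
    AWQModifier:
      # 4-bit weight-only quantization; scales derived from calibration data
      targets: ["Linear"]
      scheme: W4A16
      ignore: ["lm_head"]
```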

3 changes: 2 additions & 1 deletion examples/big_models_with_sequential_onloading/README.md
@@ -1,4 +1,5 @@
# Big Modeling with Sequential Onloading #
# Big Model Quantization with Sequential Onloading

## What is Sequential Onloading? ##
Sequential onloading is a memory-efficient approach for compressing large language models (LLMs) using only a single GPU. Instead of loading the entire model into memory—which can easily require hundreds of gigabytes—this method loads and compresses one layer at a time. The outputs are offloaded before the next layer is processed, dramatically reducing peak memory usage while maintaining high compression fidelity.
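
The layer-by-layer loop described above can be sketched in pseudocode (a simplification; the actual pipeline also manages how cached activations are passed between layers):

```
for layer in model.layers:
    move layer weights CPU -> GPU
    run calibration inputs through layer, recording outputs
    compress layer in place
    move compressed layer GPU -> CPU
    use recorded outputs as calibration inputs for the next layer
```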

2 changes: 1 addition & 1 deletion examples/model_free_ptq/README.md
@@ -1,4 +1,4 @@
# Quantizing models without a model definition
# Model-free Quantization

`model_free_ptq` provides a PTQ pathway for data-free schemes (such as FP8 Dynamic Per-Token or FP8 Block). Specifically, this pathway removes the requirement for a model definition or the need to load the model through transformers. If you are interested in applying a data-free scheme, there are two key scenarios in which this pathway may make sense for your model:

Expand Down
2 changes: 1 addition & 1 deletion examples/multimodal_audio/README.md
@@ -1,4 +1,4 @@
# Quantizing Multimodal Audio Models #
# Multimodal Audio Model Quantization

https://github.com/user-attachments/assets/6732c60b-1ebe-4bed-b409-c16c4415dff5

2 changes: 1 addition & 1 deletion examples/multimodal_vision/README.md
@@ -1,4 +1,4 @@
# Quantizing Multimodal Vision-Language Models #
# Multimodal Vision-Language Quantization #

<p align="center" style="text-align: center;">
<img src=http://images.cocodataset.org/train2017/000000231895.jpg alt="sample image from MS COCO dataset"/>

This file was deleted.

32 changes: 0 additions & 32 deletions examples/quantization_2of4_sparse_w4a16/2of4_w4a16_recipe.yaml

This file was deleted.

131 changes: 0 additions & 131 deletions examples/quantization_2of4_sparse_w4a16/README.md

This file was deleted.

77 changes: 0 additions & 77 deletions examples/quantization_2of4_sparse_w4a16/llama7b_sparse_w4a16.py

This file was deleted.

4 changes: 2 additions & 2 deletions examples/quantization_kv_cache/README.md
@@ -1,6 +1,6 @@
# `fp8` Weight, Activation, and KV Cache Quantization
# KV Cache Quantization

`llmcompressor` now supports quantizing weights, activations, and KV cache to `fp8` for memory savings and inference acceleration with `vllm`.
`llmcompressor` supports quantizing the KV cache to `fp8` for memory savings and inference acceleration with `vllm`.

> `fp8` computation is supported on NVIDIA GPUs with compute capability >= 8.9 (Ada Lovelace, Hopper).
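
A hedged sketch of what a recipe with KV cache quantization might look like is shown below. The field names are assumptions modeled on the `kv_cache_scheme` convention; consult the example script in this folder for the authoritative version:

```yaml
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      targets: ["Linear"]
      scheme: FP8
      ignore: ["lm_head"]
      # Quantize the KV cache to fp8 alongside weights and activations
      kv_cache_scheme:
        num_bits: 8
        type: float
        strategy: tensor
        dynamic: false
        symmetric: true
```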
15 changes: 3 additions & 12 deletions examples/quantization_w4a4_fp4/README.md
@@ -1,4 +1,6 @@
# `fp4` Quantization
# `fp4` Quantization with NVFP4

For weight-only FP4 quantization (e.g., MXFP4A16, NVFP4A16), see the examples [here](../quantization_w4a16_fp4/).

`llm-compressor` supports quantizing weights and activations to `fp4` for memory savings and inference acceleration with `vLLM`. In particular, `nvfp4` is supported: a 4-bit floating-point encoding format introduced with the NVIDIA Blackwell GPU architecture.
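
As a rough illustration, an NVFP4 recipe might resemble the block below. Treat the scheme name and fields as assumptions modeled on the recipe convention used elsewhere in this repository; see the example scripts in this folder for authoritative usage:

```yaml
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      # NVFP4: 4-bit floating-point weights and activations
      targets: ["Linear"]
      scheme: NVFP4
      ignore: ["lm_head"]
```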

@@ -81,14 +83,3 @@ tokenizer.save_pretrained(SAVE_DIR)
```

We have successfully created an `nvfp4` model!

# Quantizing MoEs

To quantize MoE models, MoE calibration is handled automatically by the pipeline. An example quantizing Llama 4 can be found under `llama4_example.py`. The pipeline automatically applies the appropriate MoE calibration context, which:

1. Linearizes the model so it can be quantized and executed in vLLM. This is required because the native model definition does not use `torch.nn.Linear` layers in its MoE blocks, which LLM Compressor requires in order to run quantization.
2. Ensures experts are quantized correctly, since not all experts are activated during calibration.

Similarly, an example quantizing the Qwen3-30B-A3B model can be found under `qwen_30b_a3b.py`. This model uses contextual MoE calibration, which temporarily updates the model definition to use `Qwen3MoeSparseMoeBlock`, changing how the forward pass is handled in the MoE block during calibration. Feel free to modify the definition under `llm-compressor/src/llmcompressor/modeling/qwen3_moe.py` to experiment with this behavior and evaluate its impact on quantization performance.

