5 changes: 4 additions & 1 deletion docs/.nav.yml
@@ -32,8 +32,11 @@ nav:
- Memory Requirements: guides/memory.md
- Runtime Performance: guides/runtime.md
- Examples:
- examples/index.md
- examples/README.md
- examples/*
- Experimental:
- experimental/README.md
- experimental/*
- Developer:
- developer/index.md
- developer/*
2 changes: 1 addition & 1 deletion docs/api/index.md
@@ -19,4 +19,4 @@ oneshot(
```

For advanced usage, you can configure individual modifiers and apply them directly to models.
See the [Examples](../examples/index.md) section for detailed usage patterns.
See the [Examples](https://github.com/vllm-project/llm-compressor/tree/main/examples) section for detailed usage patterns.
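
As a hedged illustration of the "advanced usage" sentence above, the sketch below configures an individual modifier and passes it directly to `oneshot`. The model ID, dataset name, and calibration settings are placeholders chosen for the example, not fixed requirements.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Configure an individual modifier explicitly and apply it via oneshot.
# The model and dataset names below are illustrative placeholders.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)
```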
5 changes: 0 additions & 5 deletions docs/examples/index.md

This file was deleted.

45 changes: 45 additions & 0 deletions docs/scripts/gen_files.py
@@ -82,6 +82,16 @@ def migrate_examples():
examples_path = project_root / "examples"
files = []

# Add the main examples README.md
main_readme = examples_path / "README.md"
if main_readme.exists():
files.append(
ProcessFile(
root_path=main_readme.relative_to(project_root),
docs_path=Path("examples/README.md"),
)
)

# Find all README.md files 2 levels down (examples/EXAMPLE_NAME/README.md)
for example_dir in examples_path.iterdir():
if (
@@ -101,6 +111,40 @@
process_files(files, project_root)


def migrate_experimental():
project_root = find_project_root()
experimental_path = project_root / "experimental"
files = []

# Add the main experimental README.md
main_readme = experimental_path / "README.md"
if main_readme.exists():
files.append(
ProcessFile(
root_path=main_readme.relative_to(project_root),
docs_path=Path("experimental/README.md"),
)
)

# Find all README.md files 2 levels down (experimental/EXPERIMENTAL_NAME/README.md)
for experimental_dir in experimental_path.iterdir():
if (
not experimental_dir.is_dir()
or not (readme_path := experimental_dir / "README.md").exists()
):
continue

experimental_name = experimental_dir.name
files.append(
ProcessFile(
root_path=readme_path.relative_to(project_root),
docs_path=Path(f"experimental/{experimental_name}.md"),
)
)

process_files(files, project_root)


def migrate_readme_to_index():
"""Copy README.md files to index.md for MkDocs compatibility.

@@ -127,4 +171,5 @@ def migrate_readme_to_index():

migrate_developer_docs()
migrate_examples()
migrate_experimental()
migrate_readme_to_index()
33 changes: 33 additions & 0 deletions examples/README.md
@@ -0,0 +1,33 @@
---
weight: -4
---

# LLM Compressor Examples

The LLM Compressor examples are organized primarily by quantization scheme. Each folder contains model-specific examples showing how to apply that quantization scheme to a particular model.

Some examples are additionally grouped by model type, such as:
- `multimodal_audio`
- `multimodal_vision`
- `quantizing_moe`

Other examples are grouped by algorithm, such as:
- `awq`
- `autoround`

## How to find the right example

- If you are interested in quantizing a specific model, start by browsing the model-type folders (for example, `multimodal_audio`, `multimodal_vision`, or `quantizing_moe`).
- If you don’t see your model there, decide which quantization scheme you want to use (e.g., FP8, FP4, INT4, INT8, or KV cache / attention quantization) and look in the corresponding `quantization_***` folder.
- Each quantization scheme folder contains at least one LLaMA 3 example, which can be used as a general reference for other models.

## Where to start if you’re unsure

If you’re unsure which quantization scheme to use, a good starting point is a data-free pathway, such as `w8a8_fp8`, found under `quantization_w8a8_fp8`. For more details on available schemes and when to use them, see the Compression Schemes [guide](https://docs.vllm.ai/projects/llm-compressor/en/latest/guides/compression_schemes/).
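
For readers who want to see what a data-free pathway looks like in code, here is a minimal sketch of FP8 dynamic (W8A8) quantization using `QuantizationModifier`. The model ID is a placeholder; other causal language models follow the same pattern.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model ID

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8 dynamic is data-free: no calibration dataset is required.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)

save_dir = MODEL_ID.split("/")[-1] + "-FP8-Dynamic"
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```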

## Need help?

If you don’t see your model or aren’t sure which quantization scheme applies, feel free to open an issue and someone from the community will be happy to help.

!!! note
We are currently updating and improving our documentation and examples structure. Feedback is very welcome during this transition.
2 changes: 1 addition & 1 deletion examples/awq/README.md
@@ -1,4 +1,4 @@
# Quantizing Models with Activation-Aware Quantization (AWQ) #
# AWQ Quantization #

Activation-Aware Quantization (AWQ) is a state-of-the-art technique for quantizing the weights of large language models using a small calibration dataset. The AWQ algorithm uses the calibration data to derive scaling factors that reduce the dynamic range of the weights while minimizing accuracy loss for the most salient weight values.
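
A minimal sketch of applying AWQ with llm-compressor is shown below. The model, dataset, and calibration sizes are illustrative assumptions, and the `AWQModifier` arguments follow the pattern used in this folder's examples rather than an exhaustive configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative placeholder

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# AWQ derives scaling factors from a small calibration set, then quantizes weights.
recipe = [AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"])]

oneshot(
    model=model,
    dataset="open_platypus",       # small calibration dataset (assumption)
    recipe=recipe,
    max_seq_length=512,
    num_calibration_samples=256,
)
```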

3 changes: 2 additions & 1 deletion examples/big_models_with_sequential_onloading/README.md
@@ -1,4 +1,5 @@
# Big Modeling with Sequential Onloading #
# Big Model Quantization with Sequential Onloading

## What is Sequential Onloading? ##
Sequential onloading is a memory-efficient approach for compressing large language models (LLMs) using only a single GPU. Instead of loading the entire model into memory—which can easily require hundreds of gigabytes—this method loads and compresses one layer at a time. The outputs are offloaded before the next layer is processed, dramatically reducing peak memory usage while maintaining high compression fidelity.
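
The sketch below illustrates the onloading loop conceptually. It is not llm-compressor's actual pipeline; the `compress_layer` callback is a stand-in for whatever per-layer compression is applied, and the layers are assumed to be `nn.Module`s that map hidden states to hidden states.

```python
import torch

def compress_sequentially(layers, hidden, compress_layer, device="cuda"):
    """Illustration only: onload one layer at a time, compress it, then offload it."""
    for layer in layers:
        layer.to(device)                           # onload a single layer onto the GPU
        hidden = hidden.to(device)
        compress_layer(layer, hidden)              # e.g. quantize this layer's weights
        hidden = layer(hidden).detach().to("cpu")  # keep outputs for the next layer, offloaded
        layer.to("cpu")                            # free GPU memory before the next layer
        torch.cuda.empty_cache()
    return hidden
```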

2 changes: 1 addition & 1 deletion examples/model_free_ptq/README.md
@@ -1,4 +1,4 @@
# Quantizing models without a model definition
# Model-free Quantization

`model_free_ptq` provides a PTQ pathway for data-free schemes (such as FP8 Dynamic Per Token or FP8 Block). Specifically, this pathway removes the requirement for a model definition or the need to load the model through transformers. If you are interested in applying a data-free scheme, there are two key scenarios in which this pathway may make sense for your model:
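
To make the idea concrete, the sketch below applies data-free FP8 weight quantization directly to checkpoint tensors without instantiating a model definition. It is a conceptual illustration only, not the `model_free_ptq` API or its output format; the file names and the heuristic for choosing which tensors to quantize are assumptions.

```python
import torch
from safetensors.torch import load_file, save_file

def quantize_fp8_per_channel(weight: torch.Tensor):
    """Scale each output channel into the FP8 e4m3 representable range (data-free)."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    scale = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-12) / fp8_max
    return (weight / scale).to(torch.float8_e4m3fn), scale.squeeze(1)

state_dict = load_file("model.safetensors")  # hypothetical checkpoint path
out = {}
for name, tensor in state_dict.items():
    # Heuristic for illustration: quantize 2-D linear weights, skip the lm_head.
    if name.endswith(".weight") and tensor.ndim == 2 and "lm_head" not in name:
        qweight, scale = quantize_fp8_per_channel(tensor.float())
        out[name] = qweight
        out[name.removesuffix(".weight") + ".weight_scale"] = scale
    else:
        out[name] = tensor
save_file(out, "model-fp8.safetensors")
```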

2 changes: 1 addition & 1 deletion examples/multimodal_audio/README.md
@@ -1,4 +1,4 @@
# Quantizing Multimodal Audio Models #
# Multimodal Audio Model Quantization

https://github.com/user-attachments/assets/6732c60b-1ebe-4bed-b409-c16c4415dff5

2 changes: 1 addition & 1 deletion examples/multimodal_vision/README.md
@@ -1,4 +1,4 @@
# Quantizing Multimodal Vision-Language Models #
# Multimodal Vision-Language Quantization #

<p align="center" style="text-align: center;">
<img src=http://images.cocodataset.org/train2017/000000231895.jpg alt="sample image from MS COCO dataset"/>

This file was deleted.

32 changes: 0 additions & 32 deletions examples/quantization_2of4_sparse_w4a16/2of4_w4a16_recipe.yaml

This file was deleted.

131 changes: 0 additions & 131 deletions examples/quantization_2of4_sparse_w4a16/README.md

This file was deleted.
