`docs/guides/big_models_and_distributed/distributed_oneshot.md` (1 addition, 1 deletion)
### 4. Call your script with `torchrun` ###
Now your script is ready to run using distributed processes. To start, run your script with `torchrun --nproc_per_node=2 YOUR_EXAMPLE.py` to use two GPU devices. For a complete example script, see [llama_ddp_example.py](https://github.com/vllm-project/llm-compressor/blob/main/examples/quantization_w4a16/llama3_ddp_example.py). The table below shows results and speedups as of LLM Compressor v0.10.0; future changes will bring these numbers closer to linear speedups.
| `load_offloaded_model` context required? | No | No | No | Yes |
| Behavior | Try to load the model onto all visible CUDA devices; fall back to CPU and disk if the model is too large | Try to load the model onto the first CUDA device only; error if the model is too large | Try to load the model onto CPU; error if the model is too large | Try to load the model onto CPU; fall back to disk if the model is too large |
| LLM Compressor Examples | This is the recommended load option when using the "basic" or "data_free" pipeline | | | This is the recommended load option when using the "sequential" pipeline |

| Behavior | Try to load the model onto device 0, then broadcast replicas to the other devices; fall back to CPU and disk if the model is too large | Try to load the model onto device 0 only, then broadcast replicas to the other devices; error if the model is too large | Try to load the model onto CPU; error if the model is too large | Try to load the model onto CPU; fall back to disk if the model is too large |
| LLM Compressor Examples | This is the recommended load option when using the "basic" or "data_free" pipeline | | | This is the recommended load option when using the "sequential" pipeline |
## Disk Offloading ##
When compressing models that are larger than the available CPU memory, it is recommended to use disk offloading for any weights that cannot fit in CPU memory. To enable disk offloading, load your model using the `load_offloaded_model` context from `compressed_tensors`, along with `device_map="auto_offload"`.
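To make the fallback behavior concrete, here is a toy, stdlib-only sketch (not the library's actual placement logic; the `place_weights` helper and the capacity numbers are invented for illustration) of how `auto_offload`-style placement fills GPU, then CPU, then disk:

```python
def place_weights(weight_sizes, gpu_capacity, cpu_capacity):
    """Assign each weight to the first tier (GPU, CPU, disk) with room left.

    Disk is treated as unbounded, mirroring the "fall back to disk" behavior.
    """
    placement = {}
    used = {"gpu": 0, "cpu": 0}
    for name, size in weight_sizes.items():
        if used["gpu"] + size <= gpu_capacity:
            placement[name] = "gpu"
            used["gpu"] += size
        elif used["cpu"] + size <= cpu_capacity:
            placement[name] = "cpu"
            used["cpu"] += size
        else:
            placement[name] = "disk"  # unbounded fallback tier
    return placement

# Three 4GB layers, but only 4GB of GPU memory and 4GB of CPU memory
weights = {"layer0": 4, "layer1": 4, "layer2": 4}
print(place_weights(weights, gpu_capacity=4, cpu_capacity=4))
# {'layer0': 'gpu', 'layer1': 'cpu', 'layer2': 'disk'}
```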
`docs/guides/big_models_and_distributed/sequential_onloading.md` (5 additions, 5 deletions)
LLM Compressor is capable of compressing models much larger than the amount of memory available as VRAM. This is achieved through a technique called **sequential onloading** whereby only a fraction of the model weights are moved to GPU memory for calibration while the rest of the weights remain offloaded to CPU or disk. When performing calibration, the entire dataset is offloaded to CPU, then onloaded one batch at a time to reduce peak activations memory usage.
If basic calibration/inference is represented with the following pseudo code...
```python
for layer in model.layers:
    x = layer(x)
```
## Implementation ##
Before a model can be sequentially onloaded, it must first be broken up into disjoint parts which can be individually onloaded. This is achieved through the [torch.fx.Tracer](https://github.com/pytorch/pytorch/blob/main/torch/fx/README.md#tracing) module, which allows a model to be represented as a graph of operations (nodes) and data inputs (edges). Once the model has been traced into a valid graph representation, the graph is cut (partitioned) into disjoint subgraphs, each of which is onloaded individually as a layer. This implementation can be found [here](https://github.com/vllm-project/llm-compressor/blob/main/src/llmcompressor/pipelines/sequential/helpers.py).
*This image depicts the sequential text decoder layers of the Llama3.2-Vision model. Each of the individual decoder layers is onloaded separately*
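The cutting step can be illustrated with a toy, stdlib-only sketch (the `partition` helper and the operation names are invented for illustration; the real implementation operates on a traced `torch.fx` graph rather than a flat list):

```python
def partition(nodes, cut_before):
    """Split an ordered list of op names into disjoint subgraphs.

    A new subgraph begins at every node whose name is in `cut_before`,
    mirroring how the traced model graph is cut at layer boundaries.
    """
    subgraphs = []
    current = []
    for node in nodes:
        if node in cut_before and current:
            subgraphs.append(current)
            current = []
        current.append(node)
    if current:
        subgraphs.append(current)
    return subgraphs

ops = ["embed", "layer0.attn", "layer0.mlp", "layer1.attn", "layer1.mlp", "lm_head"]
print(partition(ops, cut_before={"layer0.attn", "layer1.attn", "lm_head"}))
# [['embed'], ['layer0.attn', 'layer0.mlp'], ['layer1.attn', 'layer1.mlp'], ['lm_head']]
```

Each resulting subgraph can then be onloaded to the GPU and executed independently.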
## Sequential Targets and Usage ##
You can use sequential onloading by calling `oneshot` with the `pipeline="sequential"` argument. Note that this pipeline is the default for all oneshot calls that require calibration data. If the sequential pipeline proves problematic, you can specify `pipeline="basic"` to use a basic pipeline that does not require sequential onloading, but it only performs well when the model is small enough to fit into the available VRAM.
If you are compressing a model using a GPU with a small amount of memory, you may need to change your sequential targets. Sequential targets control how many weights to onload to the GPU at a time. By default, the sequential targets are decoder layers which may include large MoE layers. In these cases, setting the `sequential_targets="Linear"` argument in `oneshot` will result in lower VRAM usage, but a longer runtime.
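The effect of the sequential-target choice on peak GPU memory can be sketched with a toy calculation (the `peak_onloaded` helper and the parameter counts are invented for illustration):

```python
# Toy model: 2 "decoder layers", each containing 3 Linear submodules of
# 1000 parameters each.
layers = [{"q_proj": 1000, "k_proj": 1000, "mlp": 1000} for _ in range(2)]

def peak_onloaded(layers, target):
    """Peak number of parameters resident on GPU at once for a given target."""
    if target == "DecoderLayer":
        # Onload one whole decoder layer (all of its Linears) at a time
        groups = [sum(layer.values()) for layer in layers]
    else:  # target == "Linear": onload one Linear module at a time
        groups = [size for layer in layers for size in layer.values()]
    return max(groups)

print(peak_onloaded(layers, "DecoderLayer"))  # 3000
print(peak_onloaded(layers, "Linear"))        # 1000
```

Smaller targets lower the peak, at the cost of more onload/offload round trips (and therefore a longer runtime).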
`docs/guides/memory.md` (1 addition, 1 deletion)
2. How text decoder layers and vision tower layers are loaded onto the GPU differs significantly.
In the case of text decoder layers, LLM Compressor typically loads one layer at a time into the GPU for computation, while the rest remains offloaded in CPU/Disk memory. For more information, see [Sequential Onloading](./big_models_and_distributed/sequential_onloading.md).
However, vision tower layers are loaded onto GPU all at once. Unlike the text model, vision towers are not split up into individual layers before onloading to the GPU. This can create a GPU memory bottleneck for models whose vision towers are larger than their text layers.
`docs/steps/compress.md` (4 additions, 1 deletion)
LLM Compressor provides the `oneshot` API for simple and straightforward model compression. This API allows you to apply a recipe, which defines your chosen quantization scheme and quantization algorithm, to your selected model.
We'll import the `QuantizationModifier`, which applies the RTN quantization algorithm, and create a recipe to apply FP8 block quantization to our model. The final model is compressed in the compressed-tensors format and ready to deploy in vLLM.
!!! info
    The following script is for single-process quantization. The model is loaded onto any available GPUs and then offloaded onto the CPU if it is too large. For distributed support, or for very large models (such as certain MoEs, including Kimi-K2), see the [Big Models and Distributed Support guide](../guides/big_models_and_distributed/model_loading.md).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Example model; substitute any causal LM of your choice
MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Recipe: FP8 block quantization of all Linear layers except the lm_head
recipe = QuantizationModifier(targets="Linear", scheme="FP8_BLOCK", ignore=["lm_head"])

# Apply the recipe in one shot
oneshot(model=model, recipe=recipe)

# Save in the compressed-tensors format, ready to deploy in vLLM
SAVE_DIR = MODEL_ID.split("/")[-1] + "-FP8-BLOCK"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```