README.md (+5, -5)
@@ -6,11 +6,11 @@ FMS Model Optimizer is a framework for developing reduced precision neural netwo
 ## Highlights

-- **Python API to enable model quantization:** With addition of a few lines of codes, module-level and/or function-level operations replacement will be performed.
-- **Robust:** Verified for INT 8/4/2-bit quantization on Vision/Speech/NLP/Object Detection/LLM
-- **Flexible:** This package can analyze the network using PyTorch Dynamo, apply best practices, such as clip_val initialization, layer-level precision setting, optimizer param group setting, etc. Users can also easily customize any of the settings through a JSON config file, and even bypass the Dynamo tracing if preferred.
-- **State-of-the-art INT and FP quantization techniques:** For weights and activations, such as SAWB+ and PACT+, comparable or better than other published works.
-- **Supports key compute-intensive operations:** Conv2d, Linear, LSTM, MM, BMM
+- **Python API to enable model quantization:** With the addition of a few lines of code, module-level and/or function-level operation replacement will be performed.
+- **Robust:** Verified for INT 8/4-bit quantization on important vision/speech/NLP/object detection/LLM models
+- **Flexible:** Options to analyze the network using PyTorch Dynamo and apply best practices, such as clip_val initialization, layer-level precision setting, optimizer param group setting, etc., during quantization.
+- **State-of-the-art INT and FP quantization techniques** for weights and activations, such as SmoothQuant, SAWB+, and PACT+.
+- **Supports key compute-intensive operations** like Conv2d, Linear, LSTM, MM, and BMM
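To make the "addition of a few lines of code" claim in the Highlights above concrete, here is a minimal, hedged sketch built around the `qconfig_init`/`qmodel_prep` entry points that appear later in this diff (examples/PTQ_INT8). The toy model, recipe name, and exact argument names are illustrative assumptions, not code from this PR:

```python
import torch
from torch import nn

# Entry points shown later in this diff; exact signatures are assumed here.
from fms_mo import qmodel_prep, qconfig_init

# A tiny toy network containing quantizable ops (Linear layers).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Build a quantization config from a default recipe (recipe name is hypothetical).
qcfg = qconfig_init(recipe="qat_int8")

# A small example batch used for tracing and statistics collection.
example_batch = torch.randn(8, 16)

# Replace supported modules (Linear, Conv2d, ...) with their quantized counterparts.
qmodel_prep(model, example_batch, qcfg)
```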
examples/DQ_SQ/README.md (+29, -29)
@@ -3,10 +3,10 @@ Direct quantization enables the quantization of large language models (LLMs) wit
 Here, we provide an example of direct quantization. In this case, we demonstrate DQ of the `llama3-8b` model into INT8 and FP8 for weights, activations, and/or KV-cache. This example is referred to as the **experimental FP8** in the other [FP8 example](../FP8_QUANT/README.md), which means the quantization configurations and corresponding behavior can be studied this way, but the saved model cannot be directly served by `vllm` at the moment.

-## Requirement
+## Requirements
 - [FMS Model Optimizer requirements](../../README.md#requirements)

-## Steps
+## Quickstart

 **1. Prepare Data** for the calibration process by converting it into its tokenized form. An example of tokenization using `LLAMA-3-8B`'s tokenizer is below.
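The tokenization snippet that step 1 above refers to is not included in the hunk; as a rough, hedged sketch of what such a data-preparation step could look like (dataset choice, sequence length, and output path are illustrative assumptions):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Illustrative choices only; the actual example may use different data and settings.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    # Truncate to a fixed length so calibration batches are uniform.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
tokenized.save_to_disk("data_train_tokenized")  # hypothetical output path
```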
@@ -48,45 +48,45 @@ python -m fms_mo.run_quant \
 **3. Compare the Perplexity score.** For user convenience, the code will print out perplexity (controlled by the `eval_ppl` flag) at the end of the run, so no additional steps are needed (if the logging level is set to `INFO` in the terminal). You can also check the output in the log file `./fms_mo.log`.

 ## Example Test Results
-The perplexity of the INT8 and FP8 quantized models on the wikitext dataset is shown below:
+The perplexity of the INT8 and FP8 quantized models on the `wikitext` dataset is shown below:
-In large language models (LLMs), key/value pairs are frequently cached during token generation, a process known as KV caching, to prevent redundant computations due to the autoregressive nature of token generation. However, the size of the KV cache increases with both batch size and context length, which can slow down model inference due to the need to access a large amount of data in memory. Quantizing the KV cache effectively reduces this memory bandwidth limitation, improving inference speed. To study the quantization behavior of KV cache, we can simply set the nbits_kvcache argument to 8bit, then the KV cache will be quantized together with weights and activations. In addition, the `bmm1_qm1_mode`, `bmm1_qm2_mode`, and `bmm2_qm2_mode` [arguments](../../fms_mo/training_args.py) must be set to the same quantizer mode as `qa_mode`. **NOTE**: `bmm2_qm1_mode` should be kept as `minmax`.
+In large language models (LLMs), key/value pairs are frequently cached during token generation, a process known as KV caching, to prevent redundant computations due to the autoregressive nature of token generation. However, the size of the KV cache increases with both batch size and context length, which can slow down model inference due to the need to access a large amount of data in memory. Quantizing the KV cache effectively reduces this memory bandwidth limitation, improving inference speed. To study the quantization behavior of the KV cache, we can simply set the `nbits_kvcache` argument to 8-bit; the KV cache will then be quantized together with the weights and activations. In addition, the `bmm1_qm1_mode`, `bmm1_qm2_mode`, and `bmm2_qm2_mode` [arguments](../../fms_mo/training_args.py) must be set to the same quantizer mode as `qa_mode`. **NOTE**: `bmm2_qm1_mode` should be kept as `minmax`.

-The effect of setting the nbits_kvcache to 8 and its relevant code sections are:
+The effect of setting `nbits_kvcache` to 8 and the relevant code sections are:

 - Enables eager attention for the quantization of attention operations, including the KV cache.
-```python
-# for attention or kv-cache quantization, need to use eager attention
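The remainder of this hunk is not shown. As a hedged illustration of the eager-attention requirement described above, the sketch below loads the model without fused attention kernels and lists the KV-cache arguments mentioned in the text; how they are plumbed into `fms_mo.run_quant` is an assumption, so see `fms_mo/training_args.py` for the actual fields:

```python
from transformers import AutoModelForCausalLM

# For attention / KV-cache quantization, the example requires eager attention,
# i.e. the model is loaded without fused SDPA or flash-attention kernels.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",   # illustrative model id
    attn_implementation="eager",
    torch_dtype="auto",
)

# Conceptual view of the arguments discussed above (values are illustrative):
kv_cache_quant_args = {
    "nbits_kvcache": 8,          # quantize the KV cache to 8-bit
    "qa_mode": "fp8",            # activation quantizer mode (illustrative)
    "bmm1_qm1_mode": "fp8",      # must match qa_mode
    "bmm1_qm2_mode": "fp8",      # must match qa_mode
    "bmm2_qm2_mode": "fp8",      # must match qa_mode
    "bmm2_qm1_mode": "minmax",   # keep as minmax, per the note above
}
```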
examples/FP8_QUANT/README.md (+7, -7)
@@ -7,7 +7,7 @@ There are two types of FP8 support in FMS Model Optimizer:

 This is an example of mature FP8, which under the hood leverages some functionalities in [llm-compressor](https://github.com/vllm-project/llm-compressor), a third-party library, to perform FP8 quantization. An example of the experimental FP8 can be found [here](../DQ_SQ/README.md).

-## Requirement
+## Requirements

 - [FMS Model Optimizer requirements](../../README.md#requirements)
 - Nvidia A100 family or higher
@@ -16,16 +16,16 @@ This is an example of mature FP8, which under the hood leverages some functional
 ```bash
 pip install llmcompressor
 ```
-- To evaluate the FP8 quantized model, [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness/tree/main) and [vllm](https://github.com/vllm-project/vllm) libraries are also required.
+- To evaluate the FP8 quantized model, the [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness) and [vllm](https://github.com/vllm-project/vllm) libraries are also required.
 ```bash
-pip install vllm lm_eval==0.4.3
+pip install vllm lm_eval
 ```

 > [!CAUTION]
 > `vllm` may require a specific PyTorch version that is different from what is installed in your current environment, and it may force-install it without asking. Make sure it is compatible with your setup, or create a new environment if needed.

-## Steps
-Three simple steps to perform FP8 quantization using FMS Model Optimizer:
+## Quickstart
+This end-to-end example utilizes the common set of interfaces provided by `fms_mo` for easily applying multiple quantization algorithms, with FP8 being the focus of this example. The steps involved are:

 1. **FP8 quantization through CLI.** Other arguments can be found in [FP8Args](../../fms_mo/training_args.py#L84).
@@ -60,7 +60,7 @@ Three simple steps to perform FP8 quantization using FMS Model Optimizer:
 > [!NOTE]
 > FP16 model file size on storage is ~16.07 GB while FP8 is ~8.6 GB.

-3. **Evaluate the quantized model** performance on a selected NLP task (lambada_openai) using [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness/tree/main) library. The evaluation metrics on this task are perplexity and accuracy. The model will be run on GPU.
+3. **Evaluate the quantized model**'s performance on a selected task using the `lm-eval` library. The command below will run evaluation on the [`lambada_openai`](https://huggingface.co/datasets/EleutherAI/lambada_openai) task and show the perplexity/accuracy at the end.

 ```bash
 lm_eval --model vllm \
@@ -88,7 +88,7 @@ Three simple steps to perform FP8 quantization using FMS Model Optimizer:
 |||none | 5|perplexity|↓ |3.8915|± |0.3727|
 ```

-## Example Explained
+## Code Walkthrough

 1. The non-quantized pre-trained model is loaded using the model wrapper from `llm-compressor`. The corresponding tokenizer is constructed as well.
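As a companion to the walkthrough above, here is a hedged sketch of what FP8 quantization through `llm-compressor` typically looks like. The import paths, `QuantizationModifier` recipe, and `FP8_DYNAMIC` scheme follow that library's public examples and are assumptions about what this example wraps, not code taken from this PR:

```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # illustrative model id
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Dynamic FP8 quantization of Linear weights/activations, leaving the LM head alone.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])

# One-shot quantization (data-free for the dynamic scheme), then save the result.
oneshot(model=model, recipe=recipe)
model.save_pretrained("llama3-8b-FP8")      # hypothetical output directory
tokenizer.save_pretrained("llama3-8b-FP8")
```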
examples/GPTQ/README.md (+11, -11)
@@ -7,14 +7,14 @@ For generative LLMs, very often the bottleneck of inference is no longer the com
 - [FMS Model Optimizer requirements](../../README.md#requirements)
 - `auto-gptq` is needed for this example. Use `pip install auto-gptq` or [install from source](https://github.com/AutoGPTQ/AutoGPTQ?tab=readme-ov-file#install-from-source)
-- Optionally for the evaluation section below, install [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness/tree/main)
+- Optionally, for the evaluation section below, install [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness)
+```
+pip install lm-eval
+```

 ## Quickstart
-The end-to-end example utilizes the common set of interfaces provided by fms_mo for easily applying multiple quantization algorithms with GPTQ being the focus of this example. The steps involved are:
+This end-to-end example utilizes the common set of interfaces provided by `fms_mo` for easily applying multiple quantization algorithms, with GPTQ being the focus of this example. The steps involved are:

 1. **Convert the dataset into its tokenized form.** An example of tokenization using `LLAMA-3-8B`'s tokenizer is below.
@@ -68,7 +68,7 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
 torch.int32 672 3521.904640
 ```

-4. Further to **evaluate the quantized model**'s performance on a selected task using `lm-eval` library, the command below will run evaluation on [`lambada_openai`](https://huggingface.co/datasets/EleutherAI/lambada_openai) task and show the perplexity/accuracy at the end.
+4. **Evaluate the quantized model**'s performance on a selected task using the `lm-eval` library. The command below will run evaluation on the [`lambada_openai`](https://huggingface.co/datasets/EleutherAI/lambada_openai) task and show the perplexity/accuracy at the end.

 ```bash
 lm_eval --model hf \
@@ -79,7 +79,7 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
   --batch_size auto
 ```

-## Summary of results
+## Example Test Results

 - Unquantized Model
 ```bash
@@ -98,20 +98,20 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
 ```

-- Quantized model with `desc_act` set to True (could improve the model quality, but at the cost of inference speed.)
+- Quantized model with `desc_act` set to `True` (could improve the model quality, but at the cost of inference speed)
-> There are some randomness in generating the model and data, the resulting accuracy may vary ~$\pm$ 0.05.
+> There is some randomness in generating the model and data, so the resulting accuracy may vary by ~$\pm$ 0.05.

 ## Code Walkthrough

-1. Command line arguments will be used to create a GPTQ quantization config. (Information about the required arguments and their default values can be found in [fms_mo/training_args.py](../../fms_mo/training_args.py).)
+1. Command line arguments will be used to create a GPTQ quantization config. Information about the required arguments and their default values can be found [here](../../fms_mo/training_args.py).

 ```python
 from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
@@ -122,7 +122,7 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
 damp_percent=gptq_args.damp_percent)
 ```

-2. Load the pre_trained model with the `auto_gptq` class/wrapper. (tokenizer is optional because we already tokenized the data in a previous step.)
+2. Load the pre-trained model with the `auto_gptq` class/wrapper. The tokenizer is optional because we already tokenized the data in a previous step.
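Pulling the GPTQ walkthrough steps above together, a hedged end-to-end sketch with `auto-gptq` might look like the following; the concrete values (bits, group size, calibration text, output path) are illustrative assumptions rather than the example's defaults:

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # illustrative model id

# Build the quantization config (values are illustrative).
quantize_config = BaseQuantizeConfig(
    bits=4,           # INT4 weights
    group_size=128,   # per-group quantization granularity
    desc_act=False,   # see the desc_act quality/speed trade-off discussed above
    damp_percent=0.01,
)

# Load the pre-trained model through the auto_gptq wrapper.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)

# Calibration samples must already be tokenized (input_ids / attention_mask).
examples = [tokenizer("FMS Model Optimizer makes GPTQ easy to try.", return_tensors="pt")]

model.quantize(examples)                     # run the GPTQ algorithm
model.save_quantized("llama3-8b-gptq-int4")  # hypothetical output directory
```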
examples/PTQ_INT8/README.md (+10, -8)
@@ -9,13 +9,13 @@ This is an example of [block sequential PTQ](https://arxiv.org/abs/2102.05426).
 ## Requirements

 - [FMS Model Optimizer requirements](../../README.md#requirements)
-- The inferencing step requires Nvidia GPUs with compute capability > 8.0 (A100 family or higher).
+- The inferencing step requires Nvidia GPUs with compute capability > 8.0 (A100 family or higher)
 - NVIDIA cutlass package (need to clone the source, not pip install). Preferably place it in the user's home directory: `cd ~ && git clone https://github.com/NVIDIA/cutlass.git`
 - [Ninja](https://ninja-build.org/)
 - `PyTorch 2.3.1` (as newer versions will cause issues for the custom CUDA kernel)

-## Steps
+## Quickstart

 > [!NOTE]
 > This example is based on the HuggingFace [Transformers Question answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering). Unlike our [QAT example](../QAT_INT8/README.md), which utilizes the training loop of the original code, our PTQ function will control the loop and the program will end before entering the original loop. Make sure the model doesn't get "tuned" twice!
+Check out the [Example Test Results](#example-test-results) below to compare against your results.

 ## Example Test Results

 The table below shows results obtained for the conditions listed:
@@ -104,13 +106,13 @@ The table below shows results obtained for the conditions listed:
 `Nouterloop` and `ptq_nbatch` are PTQ-specific hyper-parameters.
 The above experiments were run on a V100 machine.

-## Example Explained
+## Code Walkthrough

 In this section, we will deep dive into what happens during the example steps.

 There are three parts to the example:

-**1. Fine-tuned a model** with 16-bit floating point (FP16) precision:
+**1. Fine-tune a model with 16-bit floating point (FP16) precision**

 Fine-tunes a BERT model on the question answering dataset, SQuAD. This step is based on the HuggingFace [Transformers Question answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering). It was modified to collect additional training information in case we would like to tweak the hyper-parameters later.
@@ -124,7 +126,7 @@ In a nutshell, PTQ simply quantizes the weight and activation tensors in a block
 from fms_mo import qmodel_prep, qconfig_init

 # Create a config dict using a default recipe and CLI args
-# if same item exists in both, args take precedence over recipe.
+# If the same item exists in both, args take precedence over recipe.
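The hunk above ends mid-snippet. A hedged continuation of that code is sketched below, showing roughly how the config dict and the model prep call could fit together; the recipe name, the `args` object, and the dict keys (modeled on the `Nouterloop`/`ptq_nbatch` hyper-parameters mentioned earlier) are assumptions, not lines from the diff:

```python
# Continuation sketch (not from the diff): build the config, then prep the model.
qcfg = qconfig_init(recipe="ptq_int8", args=args)  # args: parsed CLI arguments (assumed)

# PTQ-specific hyper-parameters discussed in the results section above;
# the exact key names in the real config may differ.
qcfg["ptq_nbatch"] = 100        # number of calibration batches
qcfg["ptq_nouterloop"] = 1000   # outer-loop iterations ("Nouterloop" in the table above)

# qmodel_prep swaps quantizable modules and attaches quantizers according to qcfg;
# example_batch is a representative input used for tracing.
qmodel_prep(model, example_batch, qcfg)
```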