**README.md** (5 additions & 5 deletions)
@@ -6,11 +6,11 @@ FMS Model Optimizer is a framework for developing reduced precision neural netwo
## Highlights
-- **Python API to enable model quantization:** With addition of a few lines of codes, module-level and/or function-level operations replacement will be performed.
-- **Robust:** Verified for INT 8/4/2-bit quantization on Vision/Speech/NLP/Object Detection/LLM
-- **Flexible:** This package can analyze the network using PyTorch Dynamo, apply best practices, such as clip_val initialization, layer-level precision setting, optimizer param group setting, etc. Users can also easily customize any of the settings through a JSON config file, and even bypass the Dynamo tracing if preferred.
-- **State-of-the-art INT and FP quantization techniques:** For weights and activations, such as SAWB+ and PACT+, comparable or better than other published works.
-- **Supports key compute-intensive operations:** Conv2d, Linear, LSTM, MM, BMM
+- **Python API to enable model quantization:** With the addition of a few lines of code, module-level and/or function-level operation replacement will be performed.
+- **Robust:** Verified for INT 8/4-bit quantization on important vision/speech/NLP/object detection/LLM models.
+- **Flexible:** Options to analyze the network using PyTorch Dynamo and apply best practices, such as clip_val initialization, layer-level precision setting, optimizer param group setting, etc., during quantization.
+- **State-of-the-art INT and FP quantization techniques** for weights and activations, such as SmoothQuant, SAWB+ and PACT+.
+- **Supports key compute-intensive operations** like Conv2d, Linear, LSTM, MM, and BMM.
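To make the first highlight concrete, here is a minimal, hypothetical sketch of module-level operation replacement in plain PyTorch. The `FakeQuantLinear` class and `swap_linear_modules` helper are invented for illustration only and are not the `fms_mo` API:

```python
import torch
import torch.nn as nn

class FakeQuantLinear(nn.Linear):
    """Hypothetical Linear layer that fake-quantizes its weights to the INT8 range."""
    def forward(self, x):
        # symmetric per-tensor fake quantization of the weight
        scale = self.weight.abs().max().clamp_min(1e-8) / 127.0
        w_q = torch.clamp((self.weight / scale).round(), -128, 127) * scale
        return nn.functional.linear(x, w_q, self.bias)

def swap_linear_modules(model: nn.Module) -> nn.Module:
    """Replace every nn.Linear in the model with the fake-quantized version."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            new_layer = FakeQuantLinear(child.in_features, child.out_features,
                                        bias=child.bias is not None)
            new_layer.load_state_dict(child.state_dict())
            setattr(model, name, new_layer)
        else:
            swap_linear_modules(child)  # recurse into submodules
    return model

model = swap_linear_modules(nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)))
print(model)
```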
**examples/DQ_SQ/README.md** (30 additions & 30 deletions)
@@ -3,10 +3,10 @@ Direct quantization enables the quantization of large language models (LLMs) wit
Here, we provide an example of direct quantization. In this case, we demonstrate DQ of the `llama3-8b` model into INT8 and FP8 for weights, activations, and/or KV-cache. This example is referred to as the **experimental FP8** in the other [FP8 example](../FP8_QUANT/README.md), which means the quantization configurations and corresponding behavior can be studied this way, but the saved model cannot be directly served by `vllm` at the moment.
-## Requirement
+## Requirements
- [FMS Model Optimizer requirements](../../README.md#requirements)
-## Steps
+## Quickstart
**1. Prepare Data** for the calibration process by converting it into its tokenized form. An example of tokenization using `LLAMA-3-8B`'s tokenizer is below (a generic sketch also follows the notes).
-> - Users should provide a tokentized data file based on their need. This is just one example to demonstrate what data format `fms_mo` is expecting.
+> - Users should provide a tokenized data file based on their need. This is just one example to demonstrate what data format `fms_mo` is expecting.
> - Tokenized data will be saved in `<path_to_save>_train` and `<path_to_save>_test`
> - If you have trouble downloading the Llama family of models from Hugging Face ([Llama models require access](https://www.llama.com/docs/getting-the-models/hugging-face/)), you can use `ibm-granite/granite-8b-code` instead
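For reference, a minimal tokenization sketch along these lines is shown below. Assumptions: the Hugging Face `datasets` and `transformers` libraries are available, `wikitext` is used as a stand-in corpus, and the save paths are placeholders; the exact data format expected by `fms_mo` may differ.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3-8B"   # or "ibm-granite/granite-8b-code"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Load a small text corpus and tokenize it to a fixed maximum sequence length.
raw = load_dataset("wikitext", "wikitext-2-raw-v1")

def tokenize_fn(examples):
    return tokenizer(examples["text"], truncation=True, max_length=2048)

tokenized = raw.map(tokenize_fn, batched=True, remove_columns=["text"])

# Save tokenized splits to disk, e.g. <path_to_save>_train and <path_to_save>_test.
tokenized["train"].save_to_disk("data_train")
tokenized["test"].save_to_disk("data_test")
```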
@@ -48,45 +48,45 @@ python -m fms_mo.run_quant \
**3. Compare the Perplexity score.** For user convenience, the code will print out perplexity (controlled by the `eval_ppl` flag) at the end of the run, so no additional steps are needed (if the logging level is set to `INFO` in the terminal). You can also check the output in the log file `./fms_mo.log`.
## Example Test Results
-The perplexity of the INT8 and FP8 quantized models on the wikitext dataset is shown below:
+The perplexity of the INT8 and FP8 quantized models on the `wikitext` dataset is shown below:
-In large language models (LLMs), key/value pairs are frequently cached during token generation, a process known as KV caching, to prevent redundant computations due to the autoregressive nature of token generation. However, the size of the KV cache increases with both batch size and context length, which can slow down model inference due to the need to access a large amount of data in memory. Quantizing the KV cache effectively reduces this memory bandwidth limitation, improving inference speed. To study the quantization behavior of KV cache, we can simply set the nbits_kvcache argument to 8bit, then the KV cache will be quantized together with weights and activations. In addition, the `bmm1_qm1_mode`, `bmm1_qm2_mode`, and `bmm2_qm2_mode`[arguments](../../fms_mo/training_args.py) must be set to the same quantizer mode as `qa_mode`. **NOTE**: `bmm2_qm1_mode` should be kept as `minmax`.
+In large language models (LLMs), key/value pairs are frequently cached during token generation, a process known as KV caching, to prevent redundant computations due to the autoregressive nature of token generation. However, the size of the KV cache increases with both batch size and context length, which can slow down model inference due to the need to access a large amount of data in memory. Quantizing the KV cache effectively reduces this memory bandwidth limitation, improving inference speed. To study the quantization behavior of the KV cache, we can simply set the `nbits_kvcache` argument to 8-bit; the KV cache will then be quantized together with weights and activations. In addition, the `bmm1_qm1_mode`, `bmm1_qm2_mode`, and `bmm2_qm2_mode` [arguments](../../fms_mo/training_args.py) must be set to the same quantizer mode as `qa_mode`. **NOTE**: `bmm2_qm1_mode` should be kept as `minmax`.
-The effect of setting the nbits_kvcache to 8 and its relevant code sections are:
+The effect of setting the `nbits_kvcache` to 8 and its relevant code sections are:
- Enables eager attention for the quantization of attention operations, including KV cache.
```python
# for attention or kv-cache quantization, need to use eager attention
```
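The snippet above is truncated in this hunk. As a generic point of reference, enabling eager attention when loading a Hugging Face model typically looks like the following sketch (the model name and dtype are placeholder choices, not taken from the example's actual code):

```python
import torch
from transformers import AutoModelForCausalLM

# Use eager (non-fused) attention so that the attention matmuls and the KV cache
# remain visible to quantization hooks instead of being hidden inside fused
# SDPA/flash-attention kernels.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    torch_dtype=torch.float16,
    attn_implementation="eager",
)
```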
**examples/FP8_QUANT/README.md** (9 additions & 9 deletions)
@@ -7,25 +7,25 @@ There are two types of FP8 support in FMS Model Optimizer:
This is an example of mature FP8, which under the hood leverages some functionalities in [llm-compressor](https://github.com/vllm-project/llm-compressor), a third-party library, to perform FP8 quantization. An example of the experimental FP8 can be found [here](../DQ_SQ/README.md).
-## Requirement
+## Requirements
-- FMS Model Optimizer requirements](../../README.md#requirements)
+- [FMS Model Optimizer requirements](../../README.md#requirements)
- Nvidia A100 family or higher
- The [llm-compressor](https://github.com/vllm-project/llm-compressor) library can be installed using pip:
```bash
pip install llmcompressor
```
-- To evaluate the FP8 quantized model, [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness/tree/main) and [vllm](https://github.com/vllm-project/vllm) libraries are also required.
+- To evaluate the FP8 quantized model, [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness) and [vllm](https://github.com/vllm-project/vllm) libraries are also required.
```bash
-pip install vllm lm_eval==0.4.3
+pip install vllm lm_eval
```
> [!CAUTION]
> `vllm` may require a specific PyTorch version that is different from what is installed in your current environment, and it may force install without asking. Make sure it's compatible with your settings or create a new environment if needed.
-## Steps
-Three simple steps to perform FP8 quantization using FMS Model Optimizer:
+## Quickstart
+This end-to-end example utilizes the common set of interfaces provided by `fms_mo` for easily applying multiple quantization algorithms, with FP8 being the focus of this example. The steps involved are:
1. **FP8 quantization through CLI**. Other arguments can be found in [FP8Args](../../fms_mo/training_args.py#L84).
@@ -38,7 +38,7 @@ Three simple steps to perform FP8 quantization using FMS Model Optimizer:
> [!NOTE]
-> - The quantized model and tokenizer will be saved to `output_dir`, but some additional temperary storage space may be needed.
+> - The quantized model and tokenizer will be saved to `output_dir`, but some additional temporary storage space may be needed.
> - Runtime ~ 1 min on A100. (model download time not included)
> - If you have trouble downloading the Llama family of models from Hugging Face ([Llama models require access](https://www.llama.com/docs/getting-the-models/hugging-face/)), you can use `ibm-granite/granite-3.0-8b-instruct` instead
@@ -60,7 +60,7 @@ Three simple steps to perform FP8 quantization using FMS Model Optimizer:
> [!NOTE]
> FP16 model file size on storage is ~16.07 GB while FP8 is ~8.6 GB.
-3. **Evaluate the quantized model** performance on a selected NLP task (lambada_openai) using [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness/tree/main) library. The evaluation metrics on this task are perplexity and accuracy. The model will be run on GPU.
+3. **Evaluate the quantized model**'s performance on a selected task using the `lm-eval` library. The command below will run evaluation on the [`lambada_openai`](https://huggingface.co/datasets/EleutherAI/lambada_openai) task and show the perplexity/accuracy at the end.
```bash
lm_eval --model vllm \
@@ -88,7 +88,7 @@ Three simple steps to perform FP8 quantization using FMS Model Optimizer:
|||none | 5|perplexity|↓ |3.8915|± |0.3727|
```
-## Example Explained
+## Code Walkthrough
1. The non-quantized pre-trained model is loaded using the model wrapper from `llm-compressor`. The corresponding tokenizer is constructed as well.
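A hedged sketch of what this load step typically looks like with `llm-compressor`'s one-shot FP8 flow. The model name is a placeholder, import paths may differ between `llm-compressor` versions, and this is not copied from the example's actual code:

```python
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3-8B"  # placeholder model

# Step 1: load the non-quantized pre-trained model via the llm-compressor
# wrapper, and construct the matching tokenizer.
model = SparseAutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Later steps (not part of this excerpt) apply an FP8 recipe in one shot,
# e.g. dynamic per-token activation / per-channel weight FP8 on Linear layers.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)
```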
**examples/GPTQ/README.md** (13 additions & 13 deletions)
@@ -5,16 +5,16 @@ For generative LLMs, very often the bottleneck of inference is no longer the com
## Requirements
-- FMS Model Optimizer requirements](../../README.md#requirements)
+- [FMS Model Optimizer requirements](../../README.md#requirements)
- `auto-gptq` is needed for this example. Use `pip install auto-gptq` or [install from source](https://github.com/AutoGPTQ/AutoGPTQ?tab=readme-ov-file#install-from-source)
-- Optionally for the evaluation section below, install [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness/tree/main)
+- Optionally for the evaluation section below, install [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness)
+```
+pip install lm-eval
+```
## Quickstart
-The end-to-end example utilizes the common set of interfaces provided by fms_mo for easily applying multiple quantization algorithms with GPTQ being the focus of this example. The steps involved are:
+This end-to-end example utilizes the common set of interfaces provided by `fms_mo` for easily applying multiple quantization algorithms, with GPTQ being the focus of this example. The steps involved are:
1. **Convert the dataset into its tokenized form.** An example of tokenization using `LLAMA-3-8B`'s tokenizer is below.
@@ -28,7 +28,7 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
->- Users should provide a tokentized data file based on their need. This is just one example to demonstrate what data format`fms_mo`is expecting.
+> - Users should provide a tokenized data file based on their need. This is just one example to demonstrate what data format `fms_mo` is expecting.
> - Tokenized data will be saved in `<path_to_save>_train` and `<path_to_save>_test`
> - If you have trouble downloading the Llama family of models from Hugging Face ([Llama models require access](https://www.llama.com/docs/getting-the-models/hugging-face/)), you can use `ibm-granite/granite-8b-code` instead
@@ -68,7 +68,7 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
```
torch.int32 672 3521.904640
```
-4. Further to **evaluate the quantized model**'s performance on a selected task using `lm-eval` library, the command below will run evaluation on [`lambada_openai`](https://huggingface.co/datasets/EleutherAI/lambada_openai) task and show the perplexity/accuracy at the end.
+4. **Evaluate the quantized model**'s performance on a selected task using the `lm-eval` library. The command below will run evaluation on the [`lambada_openai`](https://huggingface.co/datasets/EleutherAI/lambada_openai) task and show the perplexity/accuracy at the end.
```bash
lm_eval --model hf \
@@ -79,7 +79,7 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
--batch_size auto
```
-## Summary of results
+## Example Test Results
- Unquantized Model
```bash
@@ -98,20 +98,20 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
```
-- Quantized model with`desc_act`set to True (could improve the model quality, but at the cost of inference speed.)
+- Quantized model with `desc_act` set to `True` (could improve the model quality, but at the cost of inference speed).
-> There are some randomness in generating the model and data, the resulting accuracy may vary ~$\pm$ 0.05.
+> There is some randomness in generating the model and data; the resulting accuracy may vary by ~$\pm$ 0.05.
## Code Walkthrough
-1. Command line arguments will be used to create a GPTQ quantization config. (Information about the required arguments and their default values can be found in [fms_mo/training_args.py](../../fms_mo/training_args.py) )
+1. Command line arguments will be used to create a GPTQ quantization config. Information about the required arguments and their default values can be found [here](../../fms_mo/training_args.py).
```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
@@ -122,7 +122,7 @@ The end-to-end example utilizes the common set of interfaces provided by fms_mo
damp_percent=gptq_args.damp_percent)
```
-2. Load the pre_trained model with `auto_gptq` class/wrapper. (tokenizer is optional because we already tokenized the data in a previous step.)
+2. Load the pre-trained model with the `auto_gptq` class/wrapper. The tokenizer is optional because we already tokenized the data in a previous step.
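A minimal sketch of this load step, assuming `auto-gptq`'s documented `from_pretrained` interface; the model name and config values are placeholders, not taken from the example:

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,            # weight precision
    group_size=128,    # per-group quantization granularity
    desc_act=False,    # desc_act=True can improve quality at the cost of speed
)

# Load the pre-trained (non-quantized) model wrapped for GPTQ quantization; the
# tokenizer is not needed here because the calibration data is already tokenized.
model = AutoGPTQForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantize_config,
)
```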