# `Quantizing Transformer Model using Intel® Extension for Transformers (ITREX)` Sample

The `Quantizing Transformer Model using Intel® Extension for Transformers (ITREX)` sample illustrates the process of quantizing the `Intel/neural-chat-7b-v3-3` language model. This model, a fine-tuned iteration of *Mistral-7B*, undergoes quantization utilizing Weight Only Quantization (WOQ) techniques provided by Intel® Extension for Transformers (ITREX).

By leveraging WOQ techniques, developers can optimize the model's memory footprint and computational efficiency without sacrificing performance or accuracy. This sample serves as a practical demonstration of how ITREX empowers users to maximize the potential of transformer models in various applications, especially in resource-constrained environments.

| Area | Description
|:--- |:---
| What you will learn | How to quantize transformer models using Intel® Extension for Transformers (ITREX)
| Time to complete | 20 minutes
| Category | Concepts and Functionality

Intel® Extension for Transformers (ITREX) serves as a comprehensive toolkit tailored to enhance the performance of GenAI/LLM (Generative AI/Large Language Model) workloads across diverse Intel platforms. Among its key features is the capability to seamlessly quantize transformer models to 4-bit or 8-bit integer precision.

This quantization functionality not only facilitates a significant reduction in memory footprint but also offers developers the flexibility to fine-tune the quantization method. This customization empowers developers to mitigate accuracy loss, a crucial concern in low-precision inference scenarios. By striking a balance between memory efficiency and model accuracy, ITREX enables efficient deployment of transformer models in resource-constrained environments without compromising performance or quality.

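ITREX performs the quantization internally, but the underlying idea is simple. The toy sketch below (plain Python, not ITREX's actual kernels) quantizes a small weight tensor to INT8 with a symmetric per-tensor scale and dequantizes it back, showing that the round-trip error stays within half a quantization step:

```python
# Toy illustration of symmetric per-tensor INT8 weight quantization
# (for intuition only; ITREX's real WOQ schemes are more sophisticated).
def quantize_int8(weights):
    # One scale for the whole tensor, chosen so the largest weight maps to 127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.05, 0.9, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))

print(q)                             # -> [42, -127, 5, 90, -33]
print(max_err <= scale / 2 + 1e-9)   # round-trip error within half a step
```

Fine-tuning the scheme (per-channel or per-group scales, asymmetric zero points, INT4 ranges) is exactly the knob ITREX exposes to trade memory savings against accuracy loss.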
## Purpose

This sample demonstrates how to quantize a pre-trained language model, specifically the `Intel/neural-chat-7b-v3-3` model from Intel. Quantization enables more memory-efficient inference, significantly reducing the model's memory footprint.

Using the INT8 data format, which employs a quarter of the bit width of floating-point-32 (FP32), memory usage can be lowered by up to 75%. Additionally, execution time for arithmetic operations is reduced. The INT4 data type takes memory optimization even further, consuming 8 times less memory than FP32.

Quantization thus offers a compelling approach to deploying language models in resource-constrained environments, ensuring both efficient memory utilization and faster inference times.

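The savings quoted above follow directly from the bit widths. A quick back-of-the-envelope estimate for a model with roughly 7 billion parameters (weights only; activations and the KV cache are extra):

```python
params = 7_000_000_000             # ~7B weights, as in neural-chat-7b
bytes_fp32 = params * 4            # FP32: 4 bytes per weight
bytes_int8 = params * 1            # INT8: 1 byte per weight
bytes_int4 = params * 0.5          # INT4: half a byte per weight

print(f"FP32 weights: {bytes_fp32 / 2**30:.1f} GiB")        # ~26.1 GiB
print(f"INT8 saves {1 - bytes_int8 / bytes_fp32:.0%}")      # 75%
print(f"INT4 is {bytes_fp32 / bytes_int4:.0f}x smaller")    # 8x
```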
## Prerequisites

| Optimized for | Description
|:--- |:---
| OS | Ubuntu* 22.04.3 LTS (or newer)
| Hardware | Intel® Xeon® Scalable Processor family
| Software | Intel® Extension for Transformers (ITREX)

### For Local Development Environments

You will need to download and install the following toolkits, tools, and components to use the sample.

- **Intel® AI Analytics Toolkit (AI Kit)**

  You can get the AI Kit from [Intel® oneAPI Toolkits](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#analytics-kit). <br> See [*Get Started with the Intel® AI Analytics Toolkit for Linux*](https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux) for AI Kit installation information and post-installation steps and scripts.

- **Jupyter Notebook**

  Install using PIP: `pip install notebook`. <br> Alternatively, see [*Installing Jupyter*](https://jupyter.org/install) for detailed installation instructions.

- **Additional Packages**

  You will need to install the additional packages listed in `requirements.txt`:

  ```
  pip install -r requirements.txt
  ```

### For Intel® DevCloud

The necessary tools and components are already installed in the environment. You do not need to install additional components. See *[Intel® DevCloud for oneAPI](https://DevCloud.intel.com/oneapi/get_started/)* for information.

## Key Implementation Details

This code sample showcases the implementation of quantization for memory-efficient text generation utilizing Intel® Extension for Transformers (ITREX).

The sample includes both a Jupyter Notebook and a Python script. While the notebook serves as a tutorial for learning purposes, it's recommended to use the Python script in production setups for optimal performance.

### Jupyter Notebook

| Notebook | Description
|:--- |:---
|`quantize_transformer_models_with_itrex.ipynb` | This notebook provides detailed steps for performing INT4/INT8 quantization of transformer models using Intel® Extension for Transformers (ITREX). It's designed to aid understanding and experimentation.|

### Python Script

| Script | Description
|:--- |:---
|`quantize_transformer_models_with_itrex.py` | The Python script conducts INT4/INT8 quantization of transformer models leveraging Intel® Extension for Transformers (ITREX). It allows text generation based on an initial prompt, giving users the option to select either `INT4` or `INT8` quantization. |

These components offer flexibility for both learning and practical application, empowering users to harness the benefits of quantization for transformer models efficiently.

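For context, the script builds on ITREX's Transformers-compatible interface, where weight-only quantization is requested at model-load time. A minimal sketch of that pattern is shown below; the argument names follow ITREX's published examples and may differ between releases, so treat this as illustrative rather than the sample's exact code:

```python
# Sketch of WOQ loading via ITREX's drop-in AutoModelForCausalLM.
# Requires intel-extension-for-transformers; illustrative only.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-3"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# load_in_4bit applies INT4 weight-only quantization while loading;
# load_in_8bit=True selects INT8 instead.
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)

inputs = tokenizer("What is quantization?", return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```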
## Set Environment Variables

When working with the command-line interface (CLI), you should configure the oneAPI toolkits using environment variables. Set up your CLI environment by sourcing the `intelpython` environment's *activate* script every time you open a new terminal window. This practice ensures that your compiler, libraries, and tools are ready for development.

## Run the `Quantizing Transformer Model using Intel® Extension for Transformers (ITREX)` Sample

### On Linux*

> **Note**: If you have not already done so, set up your CLI
> environment by sourcing the `intelpython` environment's *activate* script in the root of your oneAPI installation.
>
> Linux*:
> - For POSIX shells, run: `source ${HOME}/intel/oneapi/intelpython/bin/activate`
> - For non-POSIX shells, like csh, use the following command: `bash -c 'source ${HOME}/intel/oneapi/intelpython/bin/activate ; exec csh'`
>
> For more information on configuring environment variables, see *[Use the setvars Script with Linux* or macOS*](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-development-environment-setup/use-the-setvars-script-with-linux-or-macos.html)*.

#### Activate Conda

1. Activate the Conda environment.
   ```
   conda activate pytorch
   ```
2. Activate the Conda environment without root access (Optional).

   By default, the AI Kit is installed in the `<home>/intel/oneapi/intelpython` folder and requires root privileges to manage it.

   You can choose to activate the Conda environment without root access. To bypass root access to manage your Conda environment, clone and activate it using commands similar to the following:

   ```
   conda create --name user_pytorch --clone pytorch
   conda activate user_pytorch
   ```

#### Installing Dependencies

1. Run the following command:
   ```bash
   pip install -r requirements.txt
   ```

   This command installs all the required dependencies.

#### Using Jupyter Notebook

1. Navigate to the sample directory in your terminal.
2. Launch Jupyter Notebook with the following command:
   ```
   jupyter notebook --ip=0.0.0.0 --port 8888 --allow-root
   ```
3. Follow the instructions provided in the terminal to open the URL with the token in your web browser.
4. Locate and select the notebook file named `quantize_transformer_models_with_itrex.ipynb`.
5. Ensure that you change the Jupyter Notebook kernel to the corresponding environment.
6. Run each cell in the notebook sequentially.

#### Running on the Command Line (for deployment)

1. Navigate to the sample directory in your terminal.
2. Execute the script using the following command:
   ```bash
   OMP_NUM_THREADS=<number of physical cores> numactl -m <node index> -C <CPU list> python quantize_transformer_models_with_itrex.py
   ```

**Note:** You can use the command `numactl -H` to identify the number of nodes, node indices, and CPU lists. Additionally, the `lscpu` command provides information about the number of physical cores available.

For example, if you have 8 physical cores, 1 node (index 0), and want to use CPUs 0-3, you would run:
```bash
OMP_NUM_THREADS=4 numactl -m 0 -C 0-3 python quantize_transformer_models_with_itrex.py
```

It is generally recommended to utilize all available physical cores and CPUs within a single node. Thus, you can simplify the command as follows:
```bash
OMP_NUM_THREADS=<number of physical cores> numactl -m 0 -C all python quantize_transformer_models_with_itrex.py
```

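The placeholders in the launch line map directly onto values reported by `lscpu` and `numactl -H`. As a small illustration of how the pieces compose (the helper function here is hypothetical, not part of the sample):

```python
# Hypothetical helper that assembles the launch command used in this
# sample from the system values described above.
def build_launch_cmd(physical_cores, node=0, cpu_list="all",
                     script="quantize_transformer_models_with_itrex.py"):
    return (f"OMP_NUM_THREADS={physical_cores} "
            f"numactl -m {node} -C {cpu_list} python {script}")

# Reproduces the 4-core / CPUs 0-3 example above:
print(build_launch_cmd(4, node=0, cpu_list="0-3"))
# -> OMP_NUM_THREADS=4 numactl -m 0 -C 0-3 python quantize_transformer_models_with_itrex.py
```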
Here are some examples demonstrating how to use the `quantize_transformer_models_with_itrex.py` script:

1. Quantize the model to INT8 and specify a maximum number of new tokens:
   ```bash
   OMP_NUM_THREADS=<number of physical cores> numactl -m 0 -C all \
   python quantize_transformer_models_with_itrex.py \
       --model_name "Intel/neural-chat-7b-v3-1" \
       --quantize "int8" \
       --max_new_tokens 100
   ```

2. Quantize the model to INT4, disable Neural Speed, and specify a custom prompt:
   ```bash
   OMP_NUM_THREADS=<number of physical cores> numactl -m 0 -C all \
   python quantize_transformer_models_with_itrex.py \
       --model_name "Intel/neural-chat-7b-v3-1" \
       --no_neural_speed \
       --quantize "int4" \
       --prompt "Custom prompt text goes here" \
       --max_new_tokens 50
   ```

3. Use a [Llama2-7B GGUF model](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF) from the Hugging Face model hub. When using a GGUF model, the `tokenizer_name` and `model_name` arguments are required, and Neural Speed is enabled by default. **Note**: You will need to request access to Llama 2 on Hugging Face to run the command below.
   ```bash
   OMP_NUM_THREADS=<number of physical cores> numactl -m 0 -C all \
   python quantize_transformer_models_with_itrex.py \
       --model_name "TheBloke/Llama-2-7B-Chat-GGUF" \
       --model_file "llama-2-7b-chat.Q4_0.gguf" \
       --tokenizer_name "meta-llama/Llama-2-7b-chat-hf" \
       --prompt "Custom prompt text goes here" \
       --max_new_tokens 50
   ```

### Additional Notes

- Ensure that you follow the provided instructions carefully to execute the project successfully.
- Make sure to adjust the command parameters based on your specific system configuration and requirements.

### Run the Sample on Intel® DevCloud (Optional)

1. If you do not already have an account, request an Intel® DevCloud account at [*Create an Intel® DevCloud Account*](https://intelsoftwaresites.secure.force.com/DevCloud/oneapi).
2. On a Linux* system, open a terminal.
3. SSH into Intel® DevCloud.
   ```
   ssh DevCloud
   ```
   > **Note**: You can find information about configuring your Linux system and connecting to Intel DevCloud at Intel® DevCloud for oneAPI [Get Started](https://DevCloud.intel.com/oneapi/get_started).
4. Follow the instructions to open the URL with the token in your browser.
5. Locate and select the Notebook.
   ```
   quantize_transformer_models_with_itrex.ipynb
   ```
6. Change your Jupyter Notebook kernel to the corresponding environment.
7. Run every cell in the Notebook in sequence.

### Troubleshooting

If you receive an error message, troubleshoot the problem using the **Diagnostics Utility for Intel® oneAPI Toolkits**. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the *[Diagnostics Utility for Intel® oneAPI Toolkits User Guide](https://www.intel.com/content/www/us/en/develop/documentation/diagnostic-utility-user-guide/top.html)* for more information on using the utility.

## Example Output

If successful, the sample displays `[CODE_SAMPLE_COMPLETED_SUCCESSFULLY]`. Additionally, the sample shows statistics for model size and memory consumption, before and after quantization.

## License

Code samples are licensed under the MIT license. See
[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.

Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
