Commit 09dd971

Update README.MD (#2337)
* Update README.MD * Update README.MD according to latest template * Update README.MD
1 parent 84cb34f commit 09dd971

1 file changed: +87 -43

AI-and-Analytics/Getting-Started-Samples/INC-Quantization-Sample-for-PyTorch/README.MD
@@ -2,7 +2,7 @@
 
 The sample is a getting started tutorial for the Intel® Neural Compressor (INC), and demonstrates how to perform INT8 quantization on a Hugging Face BERT model. This sample shows how to achieve performance boosts using Intel hardware.
 
-| Area | Description
+| Property | Description
 |:--- |:---
 | What you will learn | How to quantize a BERT model using Intel® Neural Compressor
 | Time to complete | 20 minutes
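The INT8 quantization this README introduces maps FP32 tensor values onto 8-bit integers with a scale and zero point. As a minimal sketch of that mapping (hypothetical ranges and values, not Intel® Neural Compressor's actual implementation, which derives its parameters from calibration data):

```python
# Affine INT8 quantization sketch: floats in [lo, hi] -> signed 8-bit ints.
# Illustrative only; INC picks scale/zero point per tensor from calibration.

def quantize_int8(values, lo, hi):
    """Map floats in [lo, hi] to INT8 using an affine scale and zero point."""
    scale = (hi - lo) / 255.0                 # one INT8 step in float units
    zero_point = round(-128 - lo / scale)     # integer offset so lo maps near -128
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the INT8 representation."""
    return [(qi - zero_point) * scale for qi in q]

q, s, z = quantize_int8([-1.0, 0.0, 0.5, 1.0], lo=-1.0, hi=1.0)
approx = dequantize(q, s, z)  # each value recovered to within one scale step
```

The speedup the sample demonstrates comes from running BERT's matrix multiplications on these 8-bit integers instead of 32-bit floats, at the cost of the small rounding error visible in `approx`.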
@@ -39,7 +39,7 @@ The sample contains one Jupyter Notebook and one Python script. It can be run us
 |:--- |:---
 |`dataset.py` | The script provides a PyTorch* Dataset class that tokenizes text data
 
-### Setup your environment for the offline installer
+### Environment Setup
 
 You will need to download and install the following toolkits, tools, and components to use the sample.
 

@@ -48,14 +48,41 @@ You will need to download and install the following toolkits, tools, and compone
 Required AI Tools: **Intel® Neural Compressor, Intel® Extension for PyTorch***.
 <br>If you have not already, select and install these Tools via [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on the AI Tools Offline Installer, so selecting the Offline Installer option in the AI Tools Selector is recommended.
 
-**2. Install dependencies**
+>**Note**: If the Docker option is chosen in the AI Tools Selector, refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker containers and samples.
+
+**2. (Offline Installer) Activate the AI Tools bundle base environment**
+
+If the default path was used during the installation of AI Tools:
 ```
-pip install -r requirements.txt
+source $HOME/intel/oneapi/intelpython/bin/activate
+```
+If a non-default path was used:
+```
+source <custom_path>/bin/activate
+```
+
+**3. (Offline Installer) Activate the relevant Conda environment**
+```
+conda activate pytorch
+```
+
+**4. Clone the GitHub repository**
+
+```
+git clone https://github.com/oneapi-src/oneAPI-samples.git
+cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/INC-Quantization-Sample-for-PyTorch
 ```
-**Install Jupyter Notebook** by running `pip install notebook`. Alternatively, see [Installing Jupyter](https://jupyter.org/install) for detailed installation instructions.
 
-## Run the `Getting Started with Intel® Neural Compressor for Quantization` Sample
+**5. Install dependencies**
 
+>**Note**: Before running the following commands, make sure your Conda/Python environment with AI Tools installed is activated.
+```
+pip install -r requirements.txt
+pip install notebook
+```
+For Jupyter Notebook, refer to [Installing Jupyter](https://jupyter.org/install) for detailed installation instructions.
+
+## Run the Sample
 >**Note**: Before running the sample, make sure [Environment Setup](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Getting-Started-Samples/INC-Quantization-Sample-for-PyTorch#environment-setup) is completed.
 
 Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
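After installing dependencies, it can help to confirm the activated environment actually resolves the sample's packages before launching the Notebook. A small probe using only the standard library (the package names below are assumptions based on the sample's description, not a verbatim copy of requirements.txt):

```python
# Report which of the sample's (assumed) dependencies are not resolvable in
# the active environment, without importing them.
from importlib.util import find_spec

def missing(packages):
    """Return the subset of package names that cannot be found on sys.path."""
    return [p for p in packages if find_spec(p) is None]

# Hypothetical dependency list for this sample; adjust to requirements.txt.
todo = missing(["torch", "transformers", "neural_compressor"])
if todo:
    print("Not installed:", ", ".join(todo))
```

If anything is reported missing, re-run `pip install -r requirements.txt` inside the activated environment.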
@@ -64,74 +91,91 @@ Go to the section which corresponds to the installation method chosen in [AI Too
 * [Docker](#docker)
 
 ### AI Tools Offline Installer (Validated)
-1. If you have not already done so, activate the AI Tools bundle base environment.
-If you used the default location to install AI Tools, open a terminal and type the following
-```
-source $HOME/intel/oneapi/intelpython/bin/activate
-```
-If you used a separate location, open a terminal and type the following
+
+**1. Register Conda kernel to Jupyter Notebook kernel**
+
+If the default path was used during the installation of AI Tools:
 ```
-source <custom_path>/bin/activate
+$HOME/intel/oneapi/intelpython/envs/pytorch/bin/python -m ipykernel install --user --name=pytorch
 ```
-2. Activate the Conda environment:
+If a non-default path was used:
 ```
-conda activate pytorch
-```
-3. Clone the GitHub repository:
-```
-git clone https://github.com/oneapi-src/oneAPI-samples.git
-cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples
+<custom_path>/bin/python -m ipykernel install --user --name=pytorch
 ```
-4. Launch Jupyter Notebook:
-> **Note**: You might need to register Conda kernel to Jupyter Notebook kernel,
-feel free to check [the instruction](https://github.com/IntelAI/models/tree/master/docs/notebooks/perf_analysis#option-1-conda-environment-creation)
+**2. Launch Jupyter Notebook**
 ```
 jupyter notebook --ip=0.0.0.0
 ```
-5. Follow the instructions to open the URL with the token in your browser.
-6. Select the Notebook:
+**3. Follow the instructions to open the URL with the token in your browser**
+
+**4. Select the Notebook**
 ```
-optimize_pytorch_models_with_ipex.ipynb
+quantize_with_inc.ipynb
 ```
-7. Change the kernel to `pytorch`
-8. Run every cell in the Notebook in sequence.
+**5. Change the kernel to `pytorch`**
+
+**6. Run every cell in the Notebook in sequence**
 
 ### Conda/PIP
-> **Note**: Make sure your Conda/Python environment with AI Tools installed is activated
-1. Clone the GitHub repository:
-```
-git clone https://github.com/oneapi-src/oneAPI-samples.git
-cd oneapi-samples/AI-and-Analytics/Getting-Started-Samples
+> **Note**: Before running the instructions below, make sure your Conda/Python environment with AI Tools installed is activated.
+
+**1. Register Conda/Python kernel to Jupyter Notebook kernel**
+
+For Conda:
+```
+<CONDA_PATH_TO_ENV>/bin/python -m ipykernel install --user --name=<your-env-name>
 ```
-2. Launch Jupyter Notebook:
-> **Note**: You might need to register Conda kernel to Jupyter Notebook kernel,
-feel free to check [the instruction](https://github.com/IntelAI/models/tree/master/docs/notebooks/perf_analysis#option-1-conda-environment-creation)
+To find `<CONDA_PATH_TO_ENV>`, run `conda env list` and locate your Conda environment path.
+
+For PIP:
+```
+python -m ipykernel install --user --name=<your-env-name>
+```
+**2. Launch Jupyter Notebook**
+
 ```
 jupyter notebook --ip=0.0.0.0
 ```
-4. Follow the instructions to open the URL with the token in your browser.
-5. Select the Notebook:
+**3. Follow the instructions to open the URL with the token in your browser**
+
+**4. Select the Notebook**
+
 ```
-optimize_pytorch_models_with_ipex.ipynb
+quantize_with_inc.ipynb
 ```
-6. Run every cell in the Notebook in sequence.
+**5. Change the kernel to `<your-env-name>`**
+
+**6. Run every cell in the Notebook in sequence**
 
 ### Docker
 AI Tools Docker images already have Get Started samples pre-installed. Refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker containers and samples.
-
+
 ## Example Output
 You should see an image showing the performance comparison and analysis between FP32 and INT8.
 >**Note**: The image shown below is an example of a general performance comparison for inference speedup obtained by quantization. (Your results might be different.)
 
 ![Performance Numbers](images/inc_speedup.png)
+
 ## Related Samples
+
 * [Fine-tuning Text Classification Model with Intel® Neural Compressor (INC)](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/INC_QuantizationAwareTraining_TextClassification)
 * [Optimize PyTorch* Models using Intel® Extension for PyTorch* (IPEX)](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/INC_QuantizationAwareTraining_TextClassification)
+
 ## License
 
 Code samples are licensed under the MIT license. See
-[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
+[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt)
+for details.
 
-Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
+Third party program Licenses can be found here:
+[third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)
 
 *Other names and brands may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html)
+
+
+
+
+
+
+
+
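The Example Output section in the diff above compares FP32 and INT8 inference; the speedup its chart illustrates is simply the ratio of mean latencies. A minimal sketch with made-up timings (the numbers below are illustrative placeholders, not measured data; your results will differ):

```python
# Compute the FP32-vs-INT8 speedup figure that the README's chart illustrates.

def mean_latency_ms(latencies):
    """Average inference latency in milliseconds."""
    return sum(latencies) / len(latencies)

fp32_ms = [52.0, 49.5, 50.5]   # hypothetical FP32 BERT inference times
int8_ms = [18.0, 17.5, 18.5]   # hypothetical INT8 times after quantization

speedup = mean_latency_ms(fp32_ms) / mean_latency_ms(int8_ms)
print(f"INT8 speedup over FP32: {speedup:.2f}x")
```

In practice a few warm-up iterations are discarded before timing, so the mean reflects steady-state throughput.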
