Commit 10c00ea

Update README.MD

1 parent 52b6e4a commit 10c00ea

File tree

1 file changed: +40 −84 lines changed
  • AI-and-Analytics/Getting-Started-Samples/INC-Quantization-Sample-for-PyTorch
AI-and-Analytics/Getting-Started-Samples/INC-Quantization-Sample-for-PyTorch/README.MD

Lines changed: 40 additions & 84 deletions
````diff
@@ -2,7 +2,7 @@
 
 The sample is a getting started tutorial for the Intel® Neural Compressor (INC), and demonstrates how to perform INT8 quantization on a Hugging Face BERT model. This sample shows how to achieve performance boosts using Intel hardware.
 
-| Property | Description
+| Area | Description
 |:--- |:---
 | What you will learn | How to quantize a BERT model using Intel® Neural Compressor
 | Time to complete | 20 minutes
````
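The INT8 quantization this sample performs rests on affine quantization: each FP32 tensor is mapped to 8-bit integers through a scale and zero point derived from its value range. A minimal pure-Python sketch of that arithmetic (independent of the Intel® Neural Compressor API; all function names here are illustrative, not the library's):

```python
def quant_params(xs, qmin=-128, qmax=127):
    """Derive scale and zero point from the observed FP32 range (forced to include 0)."""
    lo, hi = min(min(xs), 0.0), max(max(xs), 0.0)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against all-zero input
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, qmin=-128, qmax=127):
    # Map each FP32 value to a clamped INT8 value
    return [max(qmin, min(qmax, round(x / scale + zero_point))) for x in xs]

def dequantize(qs, scale, zero_point):
    # Approximate reconstruction; per-element error is bounded by about scale/2
    return [(q - zero_point) * scale for q in qs]

weights = [-1.0, 0.0, 0.5, 2.0]
s, zp = quant_params(weights)
recovered = dequantize(quantize(weights, s, zp), s, zp)
```

Real INT8 inference keeps the heavy matrix math in integer arithmetic and only dequantizes at layer boundaries, which is where the speedup shown later in this sample comes from.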
````diff
@@ -48,41 +48,14 @@ You will need to download and install the following toolkits, tools, and compone
 Required AI Tools: **Intel® Neural Compressor, Intel® Extension for PyTorch***.
 <br>If you have not already, select and install these Tools via [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html). AI and Analytics samples are validated on the AI Tools Offline Installer. It is recommended to select the Offline Installer option in the AI Tools Selector.
 
->**Note**: If the Docker option is chosen in the AI Tools Selector, refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker containers and samples.
-
-**2. (Offline Installer) Activate the AI Tools bundle base environment**
-
-If the default path is used during the installation of AI Tools:
-```
-source $HOME/intel/oneapi/intelpython/bin/activate
-```
-If a non-default path is used:
-```
-source <custom_path>/bin/activate
+**2. Install dependencies**
 ```
-
-**3. (Offline Installer) Activate relevant Conda environment**
-```
-conda activate pytorch
-```
-
-**4. Clone the GitHub repository**
-
-```
-git clone https://github.com/oneapi-src/oneAPI-samples.git
-cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples/INC-Quantization-Sample-for-PyTorch
+pip install -r requirements.txt
 ```
+**Install Jupyter Notebook** by running `pip install notebook`. Alternatively, see [Installing Jupyter](https://jupyter.org/install) for detailed installation instructions.
 
-**5. Install dependencies**
+## Run the `Getting Started with Intel® Neural Compressor for Quantization` Sample
 
->**Note**: Before running the following commands, make sure your Conda/Python environment with AI Tools installed is activated
-```
-pip install -r requirements.txt
-pip install notebook
-```
-For Jupyter Notebook, refer to [Installing Jupyter](https://jupyter.org/install) for detailed installation instructions.
-
-## Run the Sample
 >**Note**: Before running the sample, make sure [Environment Setup](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Getting-Started-Samples/INC-Quantization-Sample-for-PyTorch#environment-setup) is completed.
 
 Go to the section which corresponds to the installation method chosen in [AI Tools Selector](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-tools-selector.html) to see relevant instructions:
````
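After the `pip install -r requirements.txt` step introduced by this commit, a quick programmatic check can confirm the environment is complete before launching the notebook. A standard-library-only sketch (the package names passed in below are placeholders, not the sample's actual pinned requirements):

```python
from importlib.metadata import version, PackageNotFoundError

def check_installed(names):
    """Map each distribution name to its installed version, or None if missing."""
    report = {}
    for name in names:
        try:
            report[name] = version(name)   # installed: record its version
        except PackageNotFoundError:
            report[name] = None            # missing: flag for reinstall
    return report

# Placeholder names; the sample's real dependency list lives in requirements.txt.
report = check_installed(["pip", "surely-not-installed-anywhere"])
missing = [n for n, v in report.items() if v is None]
```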
````diff
@@ -91,91 +64,74 @@ Go to the section which corresponds to the installation method chosen in [AI Too
 * [Docker](#docker)
 
 ### AI Tools Offline Installer (Validated)
-
-**1. Register Conda kernel to Jupyter Notebook kernel**
-
-If the default path is used during the installation of AI Tools:
+1. If you have not already done so, activate the AI Tools bundle base environment.
+If you used the default location to install AI Tools, open a terminal and type the following:
 ```
-$HOME/intel/oneapi/intelpython/envs/pytorch/bin/python -m ipykernel install --user --name=pytorch
+source $HOME/intel/oneapi/intelpython/bin/activate
+```
+If you used a separate location, open a terminal and type the following:
+```
+source <custom_path>/bin/activate
 ```
-If a non-default path is used:
+2. Activate the Conda environment:
 ```
-<custom_path>/bin/python -m ipykernel install --user --name=pytorch
+conda activate pytorch
+```
+3. Clone the GitHub repository:
+```
+git clone https://github.com/oneapi-src/oneAPI-samples.git
+cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples
 ```
-**2. Launch Jupyter Notebook**
+4. Launch Jupyter Notebook:
+> **Note**: You might need to register the Conda kernel as a Jupyter Notebook kernel;
+see [the instructions](https://github.com/IntelAI/models/tree/master/docs/notebooks/perf_analysis#option-1-conda-environment-creation)
 ```
 jupyter notebook --ip=0.0.0.0
 ```
-**3. Follow the instructions to open the URL with the token in your browser**
-
-**4. Select the Notebook**
+5. Follow the instructions to open the URL with the token in your browser.
+6. Select the Notebook:
 ```
 quantize_with_inc.ipynb
 ```
-**5. Change the kernel to `pytorch`**
-
-**6. Run every cell in the Notebook in sequence**
+7. Change the kernel to `pytorch`.
+8. Run every cell in the Notebook in sequence.
 
````
````diff
 ### Conda/PIP
-> **Note**: Before running the instructions below, make sure your Conda/Python environment with AI Tools installed is activated
-
-**1. Register Conda/Python kernel to Jupyter Notebook kernel**
-
-For Conda:
-```
-<CONDA_PATH_TO_ENV>/bin/python -m ipykernel install --user --name=<your-env-name>
-```
-To find <CONDA_PATH_TO_ENV>, run `conda env list` and look up your Conda environment path.
-
-For PIP:
-```
-python -m ipykernel install --user --name=<your-env-name>
+> **Note**: Make sure your Conda/Python environment with AI Tools installed is activated.
+1. Clone the GitHub repository:
+```
+git clone https://github.com/oneapi-src/oneAPI-samples.git
+cd oneAPI-samples/AI-and-Analytics/Getting-Started-Samples
 ```
-**2. Launch Jupyter Notebook**
-
+2. Launch Jupyter Notebook:
+> **Note**: You might need to register the Conda kernel as a Jupyter Notebook kernel;
+see [the instructions](https://github.com/IntelAI/models/tree/master/docs/notebooks/perf_analysis#option-1-conda-environment-creation)
 ```
 jupyter notebook --ip=0.0.0.0
 ```
-**3. Follow the instructions to open the URL with the token in your browser**
-
-**4. Select the Notebook**
-
+3. Follow the instructions to open the URL with the token in your browser.
+4. Select the Notebook:
 ```
 quantize_with_inc.ipynb
 ```
-**5. Change the kernel to `<your-env-name>`**
-
-**6. Run every cell in the Notebook in sequence**
+5. Run every cell in the Notebook in sequence.
 
 ### Docker
 AI Tools Docker images already have Get Started samples pre-installed. Refer to [Working with Preset Containers](https://github.com/intel/ai-containers/tree/main/preset) to learn how to run the Docker containers and samples.
-
+
 ## Example Output
 You should see an image showing the performance comparison and analysis between FP32 and INT8.
 >**Note**: The image shown below is an example of a general performance comparison for inference speedup obtained by quantization. (Your results might be different.)
 
 ![Performance Numbers](images/inc_speedup.png)
-
 ## Related Samples
-
 * [Fine-tuning Text Classification Model with Intel® Neural Compressor (INC)](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/INC_QuantizationAwareTraining_TextClassification)
 * [Optimize PyTorch* Models using Intel® Extension for PyTorch* (IPEX)](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/INC_QuantizationAwareTraining_TextClassification)
-
 ## License
 
 Code samples are licensed under the MIT license. See
-[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt)
-for details.
+[License.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/License.txt) for details.
 
-Third party program Licenses can be found here:
-[third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt)
+Third party program Licenses can be found here: [third-party-programs.txt](https://github.com/oneapi-src/oneAPI-samples/blob/master/third-party-programs.txt).
 
 *Other names and brands may be claimed as the property of others. [Trademarks](https://www.intel.com/content/www/us/en/legal/trademarks.html)
-
-
-
-
-
-
-
-
````
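The FP32-vs-INT8 speedup chart referenced in the Example Output section is produced by timing inference in both precisions. A stand-in sketch of that measurement pattern using only the standard library (the two "models" below are dummy workloads, not BERT; the second does a quarter of the work to mimic a quantization speedup):

```python
import time

def bench(fn, arg, repeats=30):
    """Median wall-clock time of fn(arg) over several runs."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(arg)
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[len(times) // 2]  # median is robust to scheduling noise

# Dummy workloads standing in for FP32 and INT8 BERT inference.
def fp32_model(n):
    return sum(i * 1.0 for i in range(n))

def int8_model(n):
    return sum(i * 1.0 for i in range(n // 4))

speedup = bench(fp32_model, 200_000) / bench(int8_model, 200_000)
```

The notebook's real comparison follows the same shape: time the FP32 model, time the quantized model on identical inputs, and report the ratio.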
