
Commit 7585497

fix format issue in installation guide (#2141)

1 parent 7d85b0e commit 7585497

File tree: 4 files changed, +59 −35 lines changed

Lines changed: 21 additions & 7 deletions

````diff
@@ -1,10 +1,24 @@
 Blogs & Publications
 ====================
 
-* [What is new in Intel Extension for PyTorch (PyTorch Conference 2022 Breakout Session)](https://www.youtube.com/watch?v=SE56wFXdvP4&t=1s)
-* [Accelerating PyTorch with Intel® Extension for PyTorch\*](https://medium.com/pytorch/accelerating-pytorch-with-intel-extension-for-pytorch-3aef51ea3722)
-* [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost’s new BFloat16 capability](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-facebook-boost-bfloat16.html)
-* [Accelerate PyTorch with the extension and oneDNN using Intel BF16 Technology](https://medium.com/pytorch/accelerate-pytorch-with-ipex-and-onednn-using-intel-bf16-technology-dca5b8e6b58f)
-  * *Note*: APIs mentioned in it are deprecated.
-* [Scaling up BERT-like model Inference on modern CPU - Part 1 by the launcher of the extension](https://huggingface.co/blog/bert-cpu-scaling-part-1)
-* [KT Optimizes Performance for Personalized Text-to-Speech](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/KT-Optimizes-Performance-for-Personalized-Text-to-Speech/post/1337757)
+* [What is New in Intel Extension for PyTorch, PyTorch Conference, Dec 2022](https://www.youtube.com/watch?v=SE56wFXdvP4&t=1s)
+* [Accelerating PyG on Intel CPUs, Dec 2022](https://www.pyg.org/ns-newsarticle-accelerating-pyg-on-intel-cpus)
+* [PyTorch Stable Diffusion Using Hugging Face and Intel Arc, Nov 2022](https://towardsdatascience.com/pytorch-stable-diffusion-using-hugging-face-and-intel-arc-77010e9eead6)
+* [Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16, Aug 2022](https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/)
+* [Accelerating PyTorch Vision Models with Channels Last on CPU, Aug 2022](https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/)
+* [Accelerating PyTorch with Intel® Extension for PyTorch, May 2022](https://medium.com/pytorch/accelerating-pytorch-with-intel-extension-for-pytorch-3aef51ea3722)
+* [Grokking PyTorch Intel CPU performance from first principles, Apr 2022](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex.html)
+* [Grokking PyTorch Intel CPU performance from first principles, Apr 2022](https://medium.com/pytorch/grokking-pytorch-intel-cpu-performance-from-first-principles-7e39694412db)
+* [KT Optimizes Performance for Personalized Text-to-Speech, Nov 2021](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/KT-Optimizes-Performance-for-Personalized-Text-to-Speech/post/1337757)
+* [Scaling up BERT-like model Inference on modern CPU - Part 1, Apr 2021](https://huggingface.co/blog/bert-cpu-scaling-part-1)
+* [Accelerating PyTorch distributed fine-tuning with Intel technologies, Nov 2021](https://huggingface.co/blog/accelerating-pytorch)
+* [Intel® Extensions for PyTorch, Feb 2021](https://pytorch.org/tutorials/recipes/recipes/intel_extension_for_pytorch.html)
+* [Optimizing DLRM by using PyTorch with oneCCL Backend, Feb 2021](https://pytorch.medium.com/optimizing-dlrm-by-using-pytorch-with-oneccl-backend-9f85b8ef6929)
+* [Accelerate PyTorch with IPEX and oneDNN using Intel BF16 Technology, Feb 2021](https://medium.com/pytorch/accelerate-pytorch-with-ipex-and-onednn-using-intel-bf16-technology-dca5b8e6b58f)
+  * *Note*: APIs mentioned in it are deprecated.
+* [Scaling up BERT-like model Inference on modern CPU - Part 1](https://huggingface.co/blog/bert-cpu-scaling-part-1)
+* [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost’s new BFloat16 capability, Jun 2020](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659)
+* [OneAPI Dev Summit 2022](https://www.oneapi.io/event-sessions/accelerating-pytorch-deep-learning-models-on-intel-xpus-2-ai-hpc-2022/)
+* [Scaling Inference on CPUs with TorchServe, PyTorch Conference 2022](https://www.youtube.com/watch?v=066_Jd6cwZg)
+* [Grokking PyTorch Intel CPU Performance From First Principles, PyTorch Blog](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex.html?highlight=grokking)
+* [Grokking PyTorch Intel CPU Performance From First Principles (Part 2), PyTorch Blog](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex_2.html?highlight=grokking%20pytorch%20intel%20cpu%20performance%20from%20first%20principles%20part)
````

docs/tutorials/examples.md (1 addition & 1 deletion)
````diff
@@ -652,7 +652,7 @@ modelJit = convert_jit(modelJit, True)
 modelJit(data)
 ```
 
-### `torch.xpu.optimize`
+### torch.xpu.optimize
 
 `torch.xpu.optimize` is an alternative of `ipex.optimize` in Intel® Extension for PyTorch*, to provide identical usage for XPU device only. The motivation of adding this alias is to unify the coding style in user scripts base on torch.xpu modular. Refer to below example for usage.
 
````
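To illustrate the alias this hunk documents, here is a minimal usage sketch. It is not part of the patch: the `torch.nn.Linear` model and dtype are illustrative, and the import guard is added so the sketch degrades gracefully on machines without the XPU stack.

```python
# Sketch of the torch.xpu.optimize alias for ipex.optimize (XPU-only).
# The guard lets this run even where torch or the extension is absent.
try:
    import torch
    import intel_extension_for_pytorch  # noqa: F401  # registers torch.xpu
    xpu_ready = hasattr(torch, "xpu") and torch.xpu.is_available()
except ImportError:
    xpu_ready = False

if xpu_ready:
    model = torch.nn.Linear(8, 8).to("xpu").eval()
    # Same usage as ipex.optimize, scoped to the XPU device.
    model = torch.xpu.optimize(model, dtype=torch.float32)
else:
    print("XPU stack not available; skipping optimization")
```

On a machine with the extension and an XPU device, the guard passes and the optimized model is used for inference as in the surrounding example.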

docs/tutorials/features.rst (1 addition & 1 deletion)
````diff
@@ -53,7 +53,7 @@ Intel® Extension for PyTorch* provides built-in quantization recipes to deliver
 
 Check more detailed information for `INT8 Quantization [CPU] <features/int8_overview.md>`_ and `INT8 recipe tuning API guide (Experimental, *NEW feature in 1.13.0* on CPU) <features/int8_recipe_tuning_api.md>`_ on CPU side.
 
-On Intel® GPUs, quantization usages follows PyTorch default quantization APIs.
+On Intel® GPUs, quantization usages follow PyTorch default quantization APIs. Check sample codes at `Examples <./examples.html#int8>`_ page.
 
 .. toctree::
    :hidden:
````

docs/tutorials/installation.md (36 additions & 26 deletions)
````diff
@@ -8,36 +8,31 @@ Installation Guide
 Verified Hardware Platforms:
 - Intel® Data Center GPU Flex Series 170
 - Intel® Data Center GPU Max Series
-- Intel® Arc™ series GPUs (Experimental support)
+- Intel® Arc™ A-Series GPUs (Experimental support)
 
 ### Software Requirements
 
-- Ubuntu 22.04 (64-bit)
-- Intel GPU Drivers
-  - Intel® Data Center GPU Flex Series [Stable 540](https://dgpu-docs.intel.com/releases/stable_540_20221205.html)
-  - Intel® Data Center GPU Max Series [Stable 540](https://dgpu-docs.intel.com/releases/stable_540_20221205.html)
-  - Intel® Arc™ A-Series Graphics [Stable 540](https://dgpu-docs.intel.com/releases/stable_540_20221205.html)
+- OS & Intel GPU Drivers
+
+|Hardware|OS|Driver|
+|-|-|-|
+|Intel® Data Center GPU Flex Series|Ubuntu 22.04, Red Hat 8.6|[Stable 540](https://dgpu-docs.intel.com/releases/stable_540_20221205.html)|
+|Intel® Data Center GPU Max Series|Red Hat 8.6, Sles 15sp3/sp4|[Stable 540](https://dgpu-docs.intel.com/releases/stable_540_20221205.html)|
+|Intel® Arc™ A-Series Graphics|Ubuntu 22.04|[Stable 540](https://dgpu-docs.intel.com/releases/stable_540_20221205.html)|
+|Intel® Arc™ A-Series Graphics|Windows 11 or Windows 10 21H2 (via WSL2)|[for Windows 11 or Windows 10 21H2](https://www.intel.com/content/www/us/en/download/726609/intel-arc-graphics-windows-dch-driver.html)|
+
 - Intel® oneAPI Base Toolkit 2023.0
 - Python 3.7-3.10
 - Verified with GNU GCC 11
 
-## PyTorch-Intel® Extension for PyTorch\* Version Mapping
-
-Intel® Extension for PyTorch\* has to work with a corresponding version of PyTorch. Here are the PyTorch versions that we support and the mapping relationship:
-
-|PyTorch Version|Extension Version|
-|--|--|
-|[v1.13.\*](https://github.com/pytorch/pytorch/tree/v1.13.0) (patches needed)|[v1.13.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.10+xpu)|
-|[v1.10.\*](https://github.com/pytorch/pytorch/tree/v1.10.0) (patches needed)|[v1.10.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.10.200+gpu)|
-
 ## Preparations
 
 ### Install Intel GPU Driver
 
 |OS|Instructions for installing Intel GPU Driver|
-|-|-|-|
-|Ubuntu 22.04|Refer to the [Installation Guides](https://dgpu-docs.intel.com/installation-guides/ubuntu/ubuntu-jammy-arc.html) for the latest driver installation. When installing the verified [Stable 540](https://dgpu-docs.intel.com/releases/stable_540_20221205.html) driver, use a specific version for component package names, such as `sudo apt-get install intel-opencl-icd=22.43.24595.35`|
-|WSL2 Ubuntu 20.04 on Windows 11 or Windows 10 21H2|Please download drivers for Intel® Arc™ series [for Windows 11 or Windows 10 21H2](https://www.intel.com/content/www/us/en/download/726609/intel-arc-graphics-windows-dch-driver.html). Please note that you would have to follow the rest of the steps in WSL2, but the drivers should be installed on Windows|
+|-|-|
+|Linux\*|Refer to the [Installation Guides](https://dgpu-docs.intel.com/installation-guides/index.html) for the latest driver installation for individual Linux\* distributions. When installing the verified [Stable 540](https://dgpu-docs.intel.com/releases/stable_540_20221205.html) driver, use a specific version for component package names, such as `sudo apt-get install intel-opencl-icd=22.43.24595.35`|
+|Windows 11 or Windows 10 21H2 (via WSL2)|Please download drivers for Intel® Arc™ A-Series [for Windows 11 or Windows 10 21H2](https://www.intel.com/content/www/us/en/download/726609/intel-arc-graphics-windows-dch-driver.html). Please note that you would have to follow the rest of the steps in WSL2, but the drivers should be installed on Windows|
 
 ### Install oneAPI Base Toolkit
 
````
````diff
@@ -55,6 +50,15 @@ Default installation location *{ONEAPI_ROOT}* is `/opt/intel/oneapi` for root ac
 source {ONEAPI_ROOT}/setvars.sh
 ```
 
+## PyTorch-Intel® Extension for PyTorch\* Version Mapping
+
+Intel® Extension for PyTorch\* has to work with a corresponding version of PyTorch. Here are the PyTorch versions that we support and the mapping relationship:
+
+|PyTorch Version|Extension Version|
+|--|--|
+|[v1.13.\*](https://github.com/pytorch/pytorch/tree/v1.13.0) (patches needed)|[v1.13.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.10+xpu)|
+|[v1.10.\*](https://github.com/pytorch/pytorch/tree/v1.10.0) (patches needed)|[v1.10.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.10.200+gpu)|
+
 ## Install via wheel files
 
 Prebuilt wheel files availability matrix for Python versions:
````
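The version-mapping table this hunk moves can be expressed as a small lookup. `extension_for` below is a hypothetical helper mirroring the table; neither PyTorch nor the extension ships such a function.

```python
# Hypothetical helper: map a PyTorch version string to the matching
# Intel® Extension for PyTorch* build, per the table above.
SUPPORTED = {
    "1.13": "1.13.10+xpu",
    "1.10": "1.10.200+gpu",
}

def extension_for(torch_version: str) -> str:
    """Return the matching extension build, e.g. '1.13.10+xpu'."""
    # Strip any local version suffix, then keep major.minor.
    major_minor = ".".join(torch_version.split("+")[0].split(".")[:2])
    try:
        return SUPPORTED[major_minor]
    except KeyError:
        raise ValueError(f"No extension build maps to PyTorch {torch_version}") from None

print(extension_for("1.13.0a0"))  # 1.13.10+xpu
```

The point of the lookup is that the mapping is per major.minor series, not per patch release, which is why both v1.13.\* table rows collapse to one key.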
````diff
@@ -64,18 +68,24 @@ Prebuilt wheel files availability matrix for Python versions:
 | 1.13.10+xpu | | ✔️ | ✔️ | ✔️ | ✔️ |
 | 1.10.200+gpu | ✔️ | ✔️ | ✔️ | ✔️ | |
 
-### Install PyTorch and TorchVision for stock Python
+**Note:** Wheel files for Intel® Distribution for Python\* only supports Python 3.9.
+
+**Note:** Wheel files supporting Intel® Distribution for Python\* starts from 1.13.
+
+### Repositories for prebuilt wheel files
 
-```bash
-python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 -f https://developer.intel.com/ipex-whl-stable-xpu
 ```
+# Stock PyTorch
+REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu
 
-**Note:** Installation of TorchVision is optional.
+# Intel® Distribution for Python*
+REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu-idp
+```
 
-### Install PyTorch and TorchVision for Intel® Distribution for Python\*
+### Install PyTorch and TorchVision
 
 ```bash
-python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 -f https://developer.intel.com/ipex-whl-stable-xpu-idp
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 -f <REPO_URL>
 ```
 
 **Note:** Installation of TorchVision is optional.
````
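The `<REPO_URL>` placeholder this hunk introduces is substituted by hand with one of the two repository URLs. A minimal sketch of that substitution; the `echo` only prints the resulting command rather than running pip:

```shell
# Pick the repo URL matching your Python distribution (from the hunk above).
REPO_URL="https://developer.intel.com/ipex-whl-stable-xpu"        # stock Python
# REPO_URL="https://developer.intel.com/ipex-whl-stable-xpu-idp"  # Intel® Distribution for Python*

# Show the install command with <REPO_URL> filled in; run it to install.
echo "python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 -f ${REPO_URL}"
```

The `-f`/`--find-links` flag tells pip to look for wheel files at that URL instead of PyPI, which is how the prebuilt XPU wheels are distributed.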
````diff
@@ -89,7 +99,7 @@ Intel® Extension for PyTorch\* doesn't depend on torchaudio. If you need TorchA
 ### Install Intel® Extension for PyTorch\*
 
 ```bash
-python -m pip install intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install intel_extension_for_pytorch==1.13.10+xpu -f <REPO_URL>
 ```
 
 ## Install via compiling from source
````
````diff
@@ -140,5 +150,5 @@ $ pip install dist/*.whl
 
 |Issue|Explanation|
 |-|-|
-|Building from source for Intel® Arc™ series GPUs failed on WSL2 without any error thrown|Your system probably does not have enough RAM, so Linux kernel's Out-of-memory killer got invoked. You can verify it by running `dmesg` on bash (WSL2 terminal). If the OOM killer had indeed killed the build process, then you can try increasing the swap-size of WSL2, and/or decreasing the number of parallel build jobs with the environment variable `MAX_JOBS` (by default, it's equal to the number of logical CPU cores. So, setting `MAX_JOBS` to 1 is a very conservative approach, which would slow things down a lot).|
+|Building from source for Intel® Arc™ A-Series GPUs failed on WSL2 without any error thrown|Your system probably does not have enough RAM, so Linux kernel's Out-of-memory killer got invoked. You can verify it by running `dmesg` on bash (WSL2 terminal). If the OOM killer had indeed killed the build process, then you can try increasing the swap-size of WSL2, and/or decreasing the number of parallel build jobs with the environment variable `MAX_JOBS` (by default, it's equal to the number of logical CPU cores. So, setting `MAX_JOBS` to 1 is a very conservative approach, which would slow things down a lot).|
 |On WSL2, some workloads terminate with an error `CL_DEVICE_NOT_FOUND` after some time | This is due to the [TDR feature](https://learn.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys#tdrdelay) in Windows. You can try increasing TDRDelay in your Windows Registry to a large value, such as 20 (it is 2 seconds, by default), and reboot.|
````
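The `MAX_JOBS` mitigation in the troubleshooting table above can be applied as follows. This is a sketch, not part of the patch; the commented build command stands in for the source-build step from the guide.

```shell
# Conservative source build for memory-constrained WSL2: cap the number of
# parallel compile jobs so the kernel's OOM killer is not triggered.
export MAX_JOBS=1   # default is the number of logical CPU cores
echo "building with MAX_JOBS=${MAX_JOBS}"
# python setup.py bdist_wheel   # build step from the source-build section
```

Intermediate values (e.g. half the core count) trade build time against peak memory; 1 is the most conservative setting, as the table notes.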
