Commit 8bf15a2

update README to include XPU support. add numpy as installation dependency. (#1271)

fine tune

1 parent 7f89975

File tree

2 files changed: +47 -8 lines changed
README.md

Lines changed: 46 additions & 8 deletions
# Intel® Extension for PyTorch\*

Intel® Extension for PyTorch\* extends PyTorch with up-to-date feature optimizations for an extra performance boost on Intel hardware. Example optimizations use AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX). Over time, most of these optimizations will be included directly in stock PyTorch releases. More importantly, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel® discrete graphics cards with PyTorch\*.

Intel® Extension for PyTorch\* provides optimizations for both eager mode and graph mode. However, compared to eager mode, graph mode in PyTorch normally yields better performance through optimization techniques such as operation fusion, and Intel® Extension for PyTorch\* amplifies them with more comprehensive graph optimizations. Therefore, we recommend taking advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) whenever your workload supports it. You can run with either the `torch.jit.trace()` function or the `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads, so we recommend it as your first choice. On Intel® graphics cards, feature implementations are registered into PyTorch\* as `torch.xpu`, so PyTorch\* scripts work on Intel® discrete graphics cards.

The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts, users can enable it dynamically by importing `intel_extension_for_pytorch`.

More detailed tutorials are available at the **Intel® Extension for PyTorch\* online document website**. Both the [CPU version](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/) and the [XPU/GPU version](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) are available.
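The TorchScript recommendation above can be sketched with stock PyTorch\* alone; `TinyNet` below is a hypothetical stand-in for any traceable model:

```python
import torch
import torch.nn as nn

# Hypothetical tiny model; any nn.Module with tensor inputs traces the same way.
class TinyNet(nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

model = TinyNet().eval()
data = torch.rand(1, 3, 224, 224)

# torch.jit.trace() records the operations executed on the sample input and
# produces a TorchScript graph that graph-level optimizations (such as
# operation fusion) can then work on.
with torch.no_grad():
    traced = torch.jit.trace(model, data)

out = traced(data)
print(out.shape)  # torch.Size([1, 3, 224, 224])
```

`torch.jit.script()` would compile the same module from its Python source instead of from a recorded trace.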
## Installation

### CPU version

You can use either of the following two commands to install the Intel® Extension for PyTorch\* CPU version.

```bash
python -m pip install intel_extension_for_pytorch
```

```bash
python -m pip install intel_extension_for_pytorch -f https://software.intel.com/ipex-whl-stable-cpu
```

**Note:** Intel® Extension for PyTorch\* has a PyTorch version requirement. Please check more detailed information via the URL below.

More installation methods can be found in the [CPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).

### XPU/GPU version

You can install Intel® Extension for PyTorch\* for XPU/GPU via the commands below.

```bash
python -m pip install torch==1.10.0a0 -f https://developer.intel.com/ipex-whl-stable-xpu
python -m pip install intel_extension_for_pytorch==1.10.200+gpu -f https://software.intel.com/ipex-whl-stable-xpu
```

**Note:** The patched PyTorch 1.10.0a0 is required to work with Intel® Extension for PyTorch\* on Intel® graphics cards for now.

More installation methods can be found in the [XPU/GPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html).

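After installing either version, a quick stdlib-only check (assuming nothing beyond the package name) confirms whether the extension is importable and which version is present:

```python
import importlib.util

# find_spec() locates the package without importing it, so this script also
# runs cleanly on machines where the extension is not installed.
spec = importlib.util.find_spec("intel_extension_for_pytorch")
if spec is None:
    print("intel_extension_for_pytorch is not installed")
else:
    import intel_extension_for_pytorch as ipex
    print(ipex.__version__)
```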
## Getting Started

Minor code changes are required for users to get started with Intel® Extension for PyTorch\*. Both PyTorch imperative mode and TorchScript mode are supported. You just need to import the Intel® Extension for PyTorch\* package and apply its optimize function to the model object. If it is a training workload, the optimize function also needs to be applied to the optimizer object.

The following code snippet shows inference code with the FP32 data type. More examples on CPU, including training and C++ examples, are available on the [CPU Example page](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html). More examples on XPU/GPU are available on the [XPU/GPU Example page](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/examples.html).

### Inference on CPU

```python
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)

import intel_extension_for_pytorch as ipex
model = ipex.optimize(model)

with torch.no_grad():
    model(data)
```
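For the training case mentioned above, the optimize function is applied to the model and the optimizer together. Below is a minimal sketch with a hypothetical toy model, assuming the `optimizer` keyword form that returns the pair, and falling back to stock PyTorch when the extension is absent:

```python
import torch
import torch.nn as nn

# Hypothetical toy model and optimizer, just to illustrate the call pattern.
model = nn.Linear(4, 2)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model.train()

try:
    import intel_extension_for_pytorch as ipex
    # For training workloads, optimize() takes both model and optimizer and
    # returns the pair to use from then on (assumed keyword form).
    model, optimizer = ipex.optimize(model, optimizer=optimizer)
except ImportError:
    pass  # extension unavailable: continue with stock PyTorch

# One ordinary training step on random data.
x, y = torch.rand(8, 4), torch.rand(8, 2)
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```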
### Inference on GPU

```python
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)

import intel_extension_for_pytorch as ipex
model = model.to('xpu')
data = data.to('xpu')
model = ipex.optimize(model)

with torch.no_grad():
    model(data)
```

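Scripts meant to run both on machines with and without an Intel® GPU can choose the device defensively. A hedged sketch: the `torch.xpu` backend reports itself unavailable on stock setups, so this falls back to CPU there.

```python
import torch

# Use the XPU device only when the backend is registered and a device exists;
# otherwise run the exact same code on CPU.
if hasattr(torch, "xpu") and torch.xpu.is_available():
    device = "xpu"
else:
    device = "cpu"

x = torch.rand(2, 2).to(device)
print(device, x.shape)
```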
## Model Zoo

Use cases that have already been optimized by Intel engineers are available at the [Model Zoo for Intel® Architecture](https://github.com/IntelAI/models/tree/pytorch-r1.13-models). A number of PyTorch use cases for benchmarking are also available on the [GitHub page](https://github.com/IntelAI/models/tree/pytorch-r1.13-models/benchmarks#pytorch-use-cases). You can get performance benefits out of the box by simply running scripts in the Model Zoo.

## License

setup.py

Lines changed: 1 addition & 0 deletions

The commit adds `numpy` to the installation dependencies built in `_build_installation_dependency()`:

```python
def _build_installation_dependency():
    install_requires = []
    install_requires.append('psutil')
    install_requires.append('numpy')
    return install_requires

# Disable PyTorch wheel binding temporarily
```
