Commit 3ab86c2

2.1.30 docs update (#2849)
1 parent 819b0cb commit 3ab86c2

49 files changed: +101, -178 lines

xpu/2.1.30+xpu/_sources/tutorials/contribution.md.txt

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ Once you implement and test your feature or bug-fix, submit a Pull Request to ht
 
 ## Developing Intel® Extension for PyTorch\* on XPU
 
-A full set of instructions on installing Intel® Extension for PyTorch\* from source is in the [Installation document](../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu).
+A full set of instructions on installing Intel® Extension for PyTorch\* from source is in the [Installation document](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu).
 
 To develop on your machine, here are some tips:
 

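A recurring pattern in this commit is rewriting relative links into absolute ones whose `version` query value percent-encodes the `+` of `v2.1.30+xpu` (the old links use `%2B`, the new ones `%2b`; hex in percent-encoding is case-insensitive, so both resolve identically). As a hedged illustration using only the Python standard library (not part of the commit itself):

```python
from urllib.parse import quote, unquote

version = "v2.1.30+xpu"

# '+' is not safe in a URL query value: many servers decode a literal '+'
# as a space, so it must be percent-encoded.
encoded = quote(version)  # quote() emits uppercase hex digits by default
assert encoded == "v2.1.30%2Bxpu"

# Percent-encoding is case-insensitive: '%2B' and '%2b' decode to the same
# string, which is why both spellings appear across these docs without
# breaking the links.
assert unquote("v2.1.30%2Bxpu") == unquote("v2.1.30%2bxpu") == version
```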
xpu/2.1.30+xpu/_sources/tutorials/features/DDP.md.txt

Lines changed: 15 additions & 58 deletions
@@ -3,7 +3,7 @@ DistributedDataParallel (DDP)
 
 ## Introduction
 
-`DistributedDataParallel (DDP)` is a PyTorch\* module that implements multi-process data parallelism across multiple GPUs and machines. With DDP, the model is replicated on every process, and each model replica is fed a different set of input data samples. DDP enables overlapping between gradient communication and gradient computations to speed up training. Please refer to [DDP Tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for an introduction to DDP.
+`DistributedDataParallel (DDP)` is a PyTorch\* module that implements multi-process data parallelism across multiple GPUs and machines. With DDP, the model is replicated on every process, and each model replica is fed a different set of input data samples. Please refer to [DDP Tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for an introduction to DDP.
 
 The PyTorch `Collective Communication (c10d)` library supports communication across processes. To run DDP on GPU, we use Intel® oneCCL Bindings for Pytorch\* (formerly known as torch-ccl) to implement the PyTorch c10d ProcessGroup API (https://github.com/intel/torch-ccl). It holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library\* (oneCCL), a library for efficient distributed deep learning training implementing such collectives as `allreduce`, `allgather`, and `alltoall`. Refer to [oneCCL Github page](https://github.com/oneapi-src/oneCCL) for more information about oneCCL.
 
@@ -14,63 +14,25 @@ To use PyTorch DDP on GPU, install Intel® oneCCL Bindings for Pytorch\* as desc
 ### Install PyTorch and Intel® Extension for PyTorch\*
 
 Make sure you have installed PyTorch and Intel® Extension for PyTorch\* successfully.
-For more detailed information, check [installation guide](../../../../index.html#installation).
+For more detailed information, check [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu).
 
 ### Install Intel® oneCCL Bindings for Pytorch\*
 
-#### Install from source:
+#### [Recommended] Install from prebuilt wheels
 
-Installation for CPU:
+1. Install oneCCL package:
 
-```bash
-git clone https://github.com/intel/torch-ccl.git -b v2.1.0+cpu
-cd torch-ccl
-git submodule sync
-git submodule update --init --recursive
-python setup.py install
-```
-
-Installation for GPU:
-
-- Clone the `oneccl_bindings_for_pytorch`
-
-```bash
-git clone https://github.com/intel/torch-ccl.git -b v2.1.300+xpu
-cd torch-ccl
-git submodule sync
-git submodule update --init --recursive
-```
-
-- Install `oneccl_bindings_for_pytorch`
-
-Option 1: build with oneCCL from third party
-
-```bash
-COMPUTE_BACKEND=dpcpp python setup.py install
-```
-
-Option 2: build without oneCCL and use oneCCL in system (Recommend)
-
-We recommend to use apt/yum/dnf to install the oneCCL package. Refer to [Base Toolkit Installation](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) for adding the APT/YUM/DNF key and sources for first-time users.
+We recommend using apt/yum/dnf to install the oneCCL package. Refer to [Base Toolkit Installation](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) for adding the APT/YUM/DNF key and sources for first-time users.
 
 Reference commands:
 
 ```bash
-sudo apt install intel-oneapi-ccl-devel=2021.11.1-6
-sudo yum install intel-oneapi-ccl-devel=2021.11.1-6
-sudo dnf install intel-oneapi-ccl-devel=2021.11.1-6
+sudo apt install intel-oneapi-ccl-devel=2021.12.0-309
+sudo yum install intel-oneapi-ccl-devel=2021.12.0-309
+sudo dnf install intel-oneapi-ccl-devel=2021.12.0-309
 ```
 
-Compile with commands below.
-
-```bash
-export INTELONEAPIROOT=/opt/intel/oneapi
-USE_SYSTEM_ONECCL=ON COMPUTE_BACKEND=dpcpp python setup.py install
-```
-
-#### Install from prebuilt wheel:
-
-Prebuilt wheel files for CPU, GPU with generic Python\* and GPU with Intel® Distribution for Python\* are released in separate repositories.
+2. Install `oneccl_bindings_for_pytorch`
 
 ```
 # Generic Python* for CPU
@@ -85,25 +47,19 @@ Installation from either repository shares the command below. Replace the place
 python -m pip install oneccl_bind_pt --extra-index-url <REPO_URL>
 ```
 
-### Runtime Dynamic Linking
 
-- If torch-ccl is built with oneCCL from third party or installed from prebuilt wheel:
-Dynamic link oneCCL and Intel MPI libraries:
+#### Install from source
 
-```bash
-source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh
-```
+Refer to [Installation Guide](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.300+xpu?tab=readme-ov-file#install-from-source) to install Intel® oneCCL Bindings for Pytorch\* from source.
 
-Dynamic link oneCCL only (not including Intel MPI):
+### Runtime Dynamic Linking
 
-```bash
-source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/vars.sh
-```
 
-- If torch-ccl is built without oneCCL and use oneCCL in system, dynamic link oneCCl from oneAPI basekit:
+- dynamic link oneCCl from oneAPI basekit:
 
 ```bash
 source <ONEAPI_ROOT>/ccl/latest/env/vars.sh
+source <ONEAPI_ROOT>/mpi/latest/env/vars.sh
 ```
 
 Note: Make sure you have installed [basekit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#base-kit) when using Intel® oneCCL Bindings for Pytorch\* on Intel® GPUs. If the basekit is installed with a package manager, <ONEAPI_ROOT> is `/opt/intel/oneapi`.
@@ -148,6 +104,7 @@ Dynamic link oneCCL and Intel MPI libraries:
 source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh
 # Or
 source <ONEAPI_ROOT>/ccl/latest/env/vars.sh
+source <ONEAPI_ROOT>/mpi/latest/env/vars.sh
 ```
 
 `Example_DDP.py`

xpu/2.1.30+xpu/_sources/tutorials/features/ipex_log.md.txt

Lines changed: 5 additions & 5 deletions
@@ -37,8 +37,8 @@ All the usage are defined in `utils/LogUtils.h`. Currently Intel® Extension for
 You can use `IPEX_XXX_LOG`, XXX represents the log level as mentioned above. There are four parameters defined for simple log:
 - Log component, representing which part of Intel® Extension for PyTorch\* does this log belong to.
 - Log sub component, input an empty string("") for general usages. For `SYNGRAPH` you can add any log sub componment.
-- Log message template format string.
-- Log name.
+- Log message template format string, same as fmt_string in lib fmt, `{}` is used as a place holder for format args .
+- Log args for template format string, args numbers should be aligned with size of `{}`s.
 
 Below is an example for using simple log inside abs kernel:
 
@@ -48,14 +48,14 @@ IPEX_INFO_LOG("OPS", "", "Add a log for inside ops {}", "abs");
 
 ```
 ### Event Log
-Event log is used for recording a whole event, such as an operator calculation. The whole event is identified by an unique `event_id`. You can also mark each step by using `step_id`. Use `IPEX_XXX_EVENT_END()` to complete the logging of the whole event.
+Event log is used for recording a whole event, such as an operator calculation. The whole event is identified by an unique `event_id`. You can also mark each step by using `step_id`. Use `IPEX_XXX_EVENT_END()` to complete the logging of the whole event. `XXX` represents the log level mentioned above. It will be used as the log level for all logs within one single log event.
 
 Below is an example for using event log:
 
 ```c++
-IPEX_EVENT_END("OPS", "", "record_avg_pool", "start", "Here record the time start with arg:{}", arg);
+IPEX_EVENT_LOG("OPS", "", "record_avg_pool", "start", "Here record the time start with arg:{}", arg);
 prepare_data();
-IPEX_EVENT_END("OPS", "", "record_avg_pool", "data_prepare_finish", "Here record the data_prepare_finish with arg:{}", arg);
+IPEX_EVENT_LOG("OPS", "", "record_avg_pool", "data_prepare_finish", "Here record the data_prepare_finish with arg:{}", arg);
 avg_pool();
 IPEX_INFO_EVENT_END("OPS", "", "record_avg_pool", "finish conv", "Here record the end");
 ```
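The updated parameter list above describes the log message as a lib-fmt style template string in which each `{}` is a positional placeholder, and requires the number of trailing args to match the number of `{}`s. Python's `str.format` uses the same empty-`{}` placeholder convention, so a small sketch (an analogy only, not the IPEX C++ API) can illustrate the arity rule:

```python
# fmt-style template: each empty {} consumes one positional argument in order.
template = "Add a log for inside ops {}"
assert template.format("abs") == "Add a log for inside ops abs"

# The argument count must line up with the number of {} placeholders;
# supplying too few arguments is an error (an IndexError in str.format,
# a compile- or run-time error in lib fmt).
mismatch_detected = False
try:
    "Here record the time start with arg:{}".format()
except IndexError:
    mismatch_detected = True
assert mismatch_detected
```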

xpu/2.1.30+xpu/_sources/tutorials/features/torch_compile_gpu.md.txt

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ Intel® Extension for PyTorch\* now empowers users to seamlessly harness graph c
 - `intel_extension_for_pytorch` : > v2.1.10
 - `triton` : [v2.1.0](https://github.com/intel/intel-xpu-backend-for-triton/releases/tag/v2.1.0) with Intel® XPU Backend for Triton* backend enabled.
 
-Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/xpu/2.1.30+xpu/tutorials/installation.html) to install `torch` and `intel_extension_for_pytorch` firstly.
+Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu) to install `torch` and `intel_extension_for_pytorch` firstly.
 
 Then install [Intel® XPU Backend for Triton\* backend](https://github.com/intel/intel-xpu-backend-for-triton) for `triton` package. You may install it via prebuilt wheel package or build it from the source. We recommend installing via prebuilt package:
 

xpu/2.1.30+xpu/_sources/tutorials/getting_started.md.txt

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Quick Start
 
-The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu).
+The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu).
 
 To start using the Intel® Extension for PyTorch\* in your code, you need to make the following changes:
 

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 Installation
 ============
 
-Select your preferences and follow the installation instructions provided on the `Installation page <../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu>`_.
+Select your preferences and follow the installation instructions provided on the `Installation page <https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu>`_.
 
 After successful installation, refer to the `Quick Start <getting_started.md>`_ and `Examples <examples.md>`_ sections to start using the extension in your code.
 

xpu/2.1.30+xpu/_sources/tutorials/introduction.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ For the detailed list of supported features and usage instructions, refer to `Fe
 
 Get Started
 -----------
-- `Installation <../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu>`_
+- `Installation <https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu>`_
 - `Quick Start <getting_started.md>`_
 - `Examples <examples.md>`_
 

xpu/2.1.30+xpu/genindex.html

Lines changed: 1 addition & 1 deletion
@@ -350,7 +350,7 @@ <h2 id="X">X</h2>
 Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
 <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
 provided by <a href="https://readthedocs.org">Read the Docs</a>.
-<jinja2.runtime.BlockReference object at 0x7f5a22a1afa0>
+<jinja2.runtime.BlockReference object at 0x7f644cc8d2e0>
 <p></p><div><a href='https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html' data-cookie-notice='true'>Cookies</a> <a href='https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html'>| Privacy</a> <a href="/#" data-wap_ref="dns" id="wap_dns"><small>| Your Privacy Choices</small></a> <a href=https://www.intel.com/content/www/us/en/privacy/privacy-residents-certain-states.html data-wap_ref="nac" id="wap_nac"><small>| Notice at Collection</small></a> </div> <p></p> <div>&copy; Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with the sole exception that code included in this document is licensed subject to the Zero-Clause BSD open source license (OBSD), <a href='http://opensource.org/licenses/0BSD'>http://opensource.org/licenses/0BSD</a>. </div>
 
 

xpu/2.1.30+xpu/index.html

Lines changed: 1 addition & 1 deletion
@@ -175,7 +175,7 @@ <h2>Support<a class="headerlink" href="#support" title="Permalink to this headin
 Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
 <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
 provided by <a href="https://readthedocs.org">Read the Docs</a>.
-<jinja2.runtime.BlockReference object at 0x7f5a207313d0>
+<jinja2.runtime.BlockReference object at 0x7f644f1f47c0>
 <p></p><div><a href='https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html' data-cookie-notice='true'>Cookies</a> <a href='https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html'>| Privacy</a> <a href="/#" data-wap_ref="dns" id="wap_dns"><small>| Your Privacy Choices</small></a> <a href=https://www.intel.com/content/www/us/en/privacy/privacy-residents-certain-states.html data-wap_ref="nac" id="wap_nac"><small>| Notice at Collection</small></a> </div> <p></p> <div>&copy; Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with the sole exception that code included in this document is licensed subject to the Zero-Clause BSD open source license (OBSD), <a href='http://opensource.org/licenses/0BSD'>http://opensource.org/licenses/0BSD</a>. </div>
 
 

xpu/2.1.30+xpu/search.html

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@
 Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
 <a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
 provided by <a href="https://readthedocs.org">Read the Docs</a>.
-<jinja2.runtime.BlockReference object at 0x7f5a24fe2d60>
+<jinja2.runtime.BlockReference object at 0x7f644cd3f6a0>
 <p></p><div><a href='https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html' data-cookie-notice='true'>Cookies</a> <a href='https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html'>| Privacy</a> <a href="/#" data-wap_ref="dns" id="wap_dns"><small>| Your Privacy Choices</small></a> <a href=https://www.intel.com/content/www/us/en/privacy/privacy-residents-certain-states.html data-wap_ref="nac" id="wap_nac"><small>| Notice at Collection</small></a> </div> <p></p> <div>&copy; Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with the sole exception that code included in this document is licensed subject to the Zero-Clause BSD open source license (OBSD), <a href='http://opensource.org/licenses/0BSD'>http://opensource.org/licenses/0BSD</a>. </div>
 
 
