**xpu/2.1.30+xpu/_sources/tutorials/contribution.md.txt** (1 addition, 1 deletion)
```diff
@@ -16,7 +16,7 @@ Once you implement and test your feature or bug-fix, submit a Pull Request to ht
 ## Developing Intel® Extension for PyTorch\* on XPU
 
-A full set of instructions on installing Intel® Extension for PyTorch\* from source is in the [Installation document](../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu).
+A full set of instructions on installing Intel® Extension for PyTorch\* from source is in the [Installation document](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu).
```
**xpu/2.1.30+xpu/_sources/tutorials/features/DDP.md.txt** (15 additions, 58 deletions)
```diff
@@ -3,7 +3,7 @@ DistributedDataParallel (DDP)
 ## Introduction
 
-`DistributedDataParallel (DDP)` is a PyTorch\* module that implements multi-process data parallelism across multiple GPUs and machines. With DDP, the model is replicated on every process, and each model replica is fed a different set of input data samples. DDP enables overlapping between gradient communication and gradient computations to speed up training. Please refer to [DDP Tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for an introduction to DDP.
+`DistributedDataParallel (DDP)` is a PyTorch\* module that implements multi-process data parallelism across multiple GPUs and machines. With DDP, the model is replicated on every process, and each model replica is fed a different set of input data samples. Please refer to [DDP Tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for an introduction to DDP.
 
 The PyTorch `Collective Communication (c10d)` library supports communication across processes. To run DDP on GPU, we use Intel® oneCCL Bindings for Pytorch\* (formerly known as torch-ccl) to implement the PyTorch c10d ProcessGroup API (https://github.com/intel/torch-ccl). It holds PyTorch bindings maintained by Intel for the Intel® oneAPI Collective Communications Library\* (oneCCL), a library for efficient distributed deep learning training implementing such collectives as `allreduce`, `allgather`, and `alltoall`. Refer to the [oneCCL GitHub page](https://github.com/oneapi-src/oneCCL) for more information about oneCCL.
```
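As an aside, the `allreduce` collective mentioned above reduces (typically sums) each rank's buffer elementwise and leaves the identical result on every rank. A toy single-process sketch of that semantics in plain Python (illustrative only; real DDP training goes through oneCCL and `torch.distributed`, and `toy_allreduce` is a hypothetical name invented for this sketch):

```python
def toy_allreduce(rank_buffers):
    # Elementwise sum across all ranks; every rank receives the full result,
    # mimicking what an allreduce does over real processes.
    reduced = [sum(vals) for vals in zip(*rank_buffers)]
    return [list(reduced) for _ in rank_buffers]

# Three "ranks", each holding local gradients for the same two parameters.
buffers = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(toy_allreduce(buffers))  # every rank ends up with [9.0, 12.0]
```

This is why DDP replicas stay in sync: after each backward pass, every replica applies the same averaged (here, summed) gradients.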
````diff
@@ -14,63 +14,25 @@ To use PyTorch DDP on GPU, install Intel® oneCCL Bindings for Pytorch\* as desc
 ### Install PyTorch and Intel® Extension for PyTorch\*
 
 Make sure you have installed PyTorch and Intel® Extension for PyTorch\* successfully.
-For more detailed information, check [installation guide](../../../../index.html#installation).
+For more detailed information, check [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu).
 
 Option 2: build without oneCCL and use oneCCL in system (Recommend)
 
-We recommend to use apt/yum/dnf to install the oneCCL package. Refer to [Base Toolkit Installation](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) for adding the APT/YUM/DNF key and sources for first-time users.
+We recommend using apt/yum/dnf to install the oneCCL package. Refer to [Base Toolkit Installation](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) for adding the APT/YUM/DNF key and sources for first-time users.
-- If torch-ccl is built with oneCCL from third party or installed from prebuilt wheel:
-Dynamic link oneCCL and Intel MPI libraries:
+#### Install from source
-```bash
-source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh
-```
+Refer to [Installation Guide](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.300+xpu?tab=readme-ov-file#install-from-source) to install Intel® oneCCL Bindings for Pytorch\* from source.
-Dynamic link oneCCL only (not including Intel MPI):
+### Runtime Dynamic Linking
-```bash
-source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/vars.sh
-```
-- If torch-ccl is built without oneCCL and use oneCCL in system, dynamic link oneCCl from oneAPI basekit:
+- dynamic link oneCCl from oneAPI basekit:
 ```bash
 source <ONEAPI_ROOT>/ccl/latest/env/vars.sh
+source <ONEAPI_ROOT>/mpi/latest/env/vars.sh
 ```
 
 Note: Make sure you have installed [basekit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#base-kit) when using Intel® oneCCL Bindings for Pytorch\* on Intel® GPUs. If the basekit is installed with a package manager, <ONEAPI_ROOT> is `/opt/intel/oneapi`.
@@ -148,6 +104,7 @@ Dynamic link oneCCL and Intel MPI libraries:
 source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh
````
**xpu/2.1.30+xpu/_sources/tutorials/features/ipex_log.md.txt** (5 additions, 5 deletions)
````diff
@@ -37,8 +37,8 @@ All the usage are defined in `utils/LogUtils.h`. Currently Intel® Extension for
 You can use `IPEX_XXX_LOG`, XXX represents the log level as mentioned above. There are four parameters defined for simple log:
 - Log component, representing which part of Intel® Extension for PyTorch\* does this log belong to.
 - Log sub component, input an empty string("") for general usages. For `SYNGRAPH` you can add any log sub componment.
-- Log message template format string.
-- Log name.
+- Log message template format string, same as fmt_string in lib fmt, `{}` is used as a place holder for format args.
+- Log args for template format string, args numbers should be aligned with size of `{}`s.
 
 Below is an example for using simple log inside abs kernel:
 
@@ -48,14 +48,14 @@ IPEX_INFO_LOG("OPS", "", "Add a log for inside ops {}", "abs");
 
 ```
 ### Event Log
-Event log is used for recording a whole event, such as an operator calculation. The whole event is identified by an unique `event_id`. You can also mark each step by using `step_id`. Use `IPEX_XXX_EVENT_END()` to complete the logging of the whole event.
+Event log is used for recording a whole event, such as an operator calculation. The whole event is identified by an unique `event_id`. You can also mark each step by using `step_id`. Use `IPEX_XXX_EVENT_END()` to complete the logging of the whole event. `XXX` represents the log level mentioned above. It will be used as the log level for all logs within one single log event.
 
 Below is an example for using event log:
 
 ```c++
-IPEX_EVENT_END("OPS", "", "record_avg_pool", "start", "Here record the time start with arg:{}", arg);
+IPEX_EVENT_LOG("OPS", "", "record_avg_pool", "start", "Here record the time start with arg:{}", arg);
 prepare_data();
-IPEX_EVENT_END("OPS", "", "record_avg_pool", "data_prepare_finish", "Here record the data_prepare_finish with arg:{}", arg);
+IPEX_EVENT_LOG("OPS", "", "record_avg_pool", "data_prepare_finish", "Here record the data_prepare_finish with arg:{}", arg);
 avg_pool();
 IPEX_INFO_EVENT_END("OPS", "", "record_avg_pool", "finish conv", "Here record the end");
````
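The `{}` placeholder rules added in this diff follow lib fmt conventions: each `{}` consumes one argument, in order, and the argument count must match the placeholder count. A rough illustration using Python's `str.format`, which shares the same placeholder syntax (Python here is only a stand-in for lib fmt, not how IPEX itself formats logs):

```python
# Each {} consumes one argument in order, as in lib fmt.
template = "Add a log for inside ops {}"
print(template.format("abs"))  # Add a log for inside ops abs

# A mismatch between placeholders and arguments is an error.
try:
    "{} and {}".format("only_one")
except IndexError:
    print("argument count must match the number of {} placeholders")
```

The same alignment rule is what "args numbers should be aligned with size of `{}`s" means for `IPEX_XXX_LOG` calls.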
**xpu/2.1.30+xpu/_sources/tutorials/features/torch_compile_gpu.md.txt** (1 addition, 1 deletion)
```diff
@@ -14,7 +14,7 @@ Intel® Extension for PyTorch\* now empowers users to seamlessly harness graph c
 - `intel_extension_for_pytorch` : > v2.1.10
 - `triton` : [v2.1.0](https://github.com/intel/intel-xpu-backend-for-triton/releases/tag/v2.1.0) with Intel® XPU Backend for Triton* backend enabled.
 
-Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/xpu/2.1.30+xpu/tutorials/installation.html) to install `torch` and `intel_extension_for_pytorch` firstly.
+Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu) to install `torch` and `intel_extension_for_pytorch` firstly.
 
 Then install [Intel® XPU Backend for Triton\* backend](https://github.com/intel/intel-xpu-backend-for-triton) for `triton` package. You may install it via prebuilt wheel package or build it from the source. We recommend installing via prebuilt package:
```
**xpu/2.1.30+xpu/_sources/tutorials/getting_started.md.txt** (1 addition, 1 deletion)
```diff
@@ -1,6 +1,6 @@
 # Quick Start
 
-The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu).
+The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu).
 
 To start using the Intel® Extension for PyTorch\* in your code, you need to make the following changes:
```
```diff
-Select your preferences and follow the installation instructions provided on the `Installation page <../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu>`_.
+Select your preferences and follow the installation instructions provided on the `Installation page <https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.30%2bxpu>`_.
 
 After successful installation, refer to the `Quick Start <getting_started.md>`_ and `Examples <examples.md>`_ sections to start using the extension in your code.
```