This is the development repository of Intel® XPU Backend for Triton\*, a new [Triton](https://github.com/triton-lang/triton/) backend for Intel GPUs. Intel® XPU Backend for Triton\* is an out-of-tree backend module for [Triton](https://github.com/triton-lang/triton/) that provides best-in-class performance and productivity on any Intel GPU, both for [PyTorch](https://github.com/pytorch/pytorch) and for standalone usage.
# Compatibility
* Operating systems:
* Latest [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html)
Note that Intel® XPU Backend for Triton\* is not compatible with Intel® Extension for PyTorch\* and Intel® oneAPI Base Toolkit\*.
Intel® XPU Backend for Triton\* builds on Triton, a language and compiler for writing highly efficient custom deep-learning primitives. The aim of Triton is to provide an open-source environment to write fast code at higher productivity than CUDA, but also with higher flexibility than other existing DSLs.
The foundations of this project are described in the following MAPL2019 publication: [Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations](http://www.eecs.harvard.edu/~htk/publication/2019-mapl-tillet-kung-cox.pdf). Please consider citing this work if you use Triton!
The [official documentation](https://triton-lang.org) contains installation instructions and tutorials. See also these third-party [Triton puzzles](https://github.com/srush/Triton-Puzzles), which can all be run using the Triton interpreter -- no GPU required.
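
For a flavor of the programming model before the installation steps below, here is a minimal vector-add sketch in the style of the official Triton tutorials. The `xpu` device string assumes a PyTorch build with Intel GPU support; the block size and tensor shape are purely illustrative.

```python
# Minimal vector-add kernel sketch (assumes Triton plus a PyTorch build
# with XPU support; device name "xpu" and sizes are illustrative).
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


x = torch.rand(1024, device="xpu")
y = torch.rand(1024, device="xpu")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```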
# Quick Installation
## Prerequisites
1. Latest [Rolling Release](https://dgpu-docs.intel.com/driver/installation-rolling.html) or [Long Term Support Release](https://dgpu-docs.intel.com/driver/installation.html) of GPU driver
2. Latest release of [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html)
3. Latest release of [Profiling Tools Interfaces for Intel GPU (PTI for GPU)](https://github.com/intel/pti-gpu)
Extract the archive and, in the extracted directory, execute:
```shell
pip install torch-*.whl triton-*.whl
```
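
A quick way to confirm that both wheels installed cleanly is to import the packages and print their versions; the `__version__` attributes are standard, and the exact output depends on the release you downloaded.

```python
# Sanity check: both wheels import and report a version.
import torch
import triton

print("torch:", torch.__version__)
print("triton:", triton.__version__)
```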
Before using Intel® XPU Backend for Triton\*, you need to initialize the toolchain.
The default location is `/opt/intel/oneapi` (if installed as a `root` user) or `~/intel/oneapi` (if installed as a regular user).
```shell
# replace /opt/intel/oneapi with the actual location of PyTorch Prerequisites for Intel GPUs
source /opt/intel/oneapi/setvars.sh
```
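
Once the environment is initialized, you can check that PyTorch sees the Intel GPU. This sketch assumes the `torch.xpu` API available in recent PyTorch releases with XPU support.

```python
# Check that the Intel GPU is visible after initializing the toolchain.
# Assumes the torch.xpu API (recent PyTorch with XPU support).
import torch

print(torch.xpu.is_available())        # expect: True
print(torch.xpu.device_count())        # number of Intel GPUs
print(torch.xpu.get_device_name(0))    # name of the first device
```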