
Commit 9e495a2

Resolve merge conflicts from 7275ff7 (#2659)

Signed-off-by: Whitney Tsang <[email protected]>

1 parent 4382295
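The resolution recorded in this commit drops the upstream side of each conflicted hunk (everything between `=======` and `>>>>>>> d6739d3c…`) and keeps the HEAD side. A minimal sketch of that workflow on a throwaway repository (branch names, file contents, and the committer identity are hypothetical, not taken from this project):

```shell
# Sketch: reproduce a README conflict, keep the HEAD side, sign off the commit.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo && cd repo
git config user.name "Dev" && git config user.email "dev@example.com"

printf '# Base\n' > README.md
git add README.md && git commit -qm "base"
git branch -m head-branch                      # our line of development (HEAD)

git checkout -qb upstream-sync                 # simulated upstream branch
printf '# Triton\n' > README.md
git commit -qam "upstream change"

git checkout -q head-branch
printf '# Intel XPU Backend for Triton\n' > README.md
git commit -qam "head change"

git merge upstream-sync || true                # leaves <<<<<<< conflict markers
git checkout --ours README.md                  # resolve: keep the HEAD side
git add README.md
git commit -qm "Resolve merge conflicts" -s    # -s appends a Signed-off-by trailer
```

`git checkout --ours` restores our version of the conflicted file wholesale; editing the markers by hand achieves the same result when only parts of each side should survive.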

File tree

1 file changed: +0 −60 lines


README.md

Lines changed: 0 additions & 60 deletions
@@ -6,7 +6,6 @@
 
 This is the development repository of Intel® XPU Backend for Triton\*, a new [Triton](https://github.com/triton-lang/triton/) backend for Intel GPUs. Intel® XPU Backend for Triton\* is an out-of-tree backend module for [Triton](https://github.com/triton-lang/triton/blob/main/CONTRIBUTING.md) used to provide best-in-class performance and productivity on any Intel GPU for [PyTorch](https://github.com/triton-lang/triton/blob/main/CONTRIBUTING.md) and standalone usage.
 
-<<<<<<< HEAD
 # Compatibility
 
 * Operating systems:
@@ -22,25 +21,11 @@ This is the development repository of Intel® XPU Backend for Triton\*, a new [T
 * Latest [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html)
 
 Note that Intel® XPU Backend for Triton\* is not compatible with Intel® Extension for PyTorch\* and Intel® oneAPI Base Toolkit\*.
-=======
-| **`Documentation`** | **`Nightly Wheels`** |
-|-------------------- | -------------------- |
-| [![Documentation](https://github.com/triton-lang/triton/actions/workflows/documentation.yml/badge.svg)](https://triton-lang.org/) | [![Wheels](https://github.com/triton-lang/triton/actions/workflows/wheels.yml/badge.svg?branch=release/2.0.x)](https://github.com/triton-lang/triton/actions/workflows/wheels.yml) |
-
-# Triton
-
-This is the development repository of Triton, a language and compiler for writing highly efficient custom Deep-Learning primitives. The aim of Triton is to provide an open-source environment to write fast code at higher productivity than CUDA, but also with higher flexibility than other existing DSLs.
-
-The foundations of this project are described in the following MAPL2019 publication: [Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations](http://www.eecs.harvard.edu/~htk/publication/2019-mapl-tillet-kung-cox.pdf). Please consider citing this work if you use Triton!
-
-The [official documentation](https://triton-lang.org) contains installation instructions and tutorials. See also these third-party [Triton puzzles](https://github.com/srush/Triton-Puzzles), which can all be run using the Triton interpreter -- no GPU required.
->>>>>>> d6739d3c33dee481f2d4dee4f6ecd4123f671597
 
 # Quick Installation
 
 ## Prerequisites
 
-<<<<<<< HEAD
 1. Latest [Rolling Release](https://dgpu-docs.intel.com/driver/installation-rolling.html) or [Long Term Support Release](https://dgpu-docs.intel.com/driver/installation.html) of GPU driver
 2. Latest release of [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html)
 3. Latest release of [Profiling Tools Interfaces for Intel GPU (PTI for GPU)](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html)
@@ -55,35 +40,18 @@ Extract the archive and in the extracted directory execute:
 ```shell
 pip install torch-*.whl triton-*.whl
 ```
-=======
-```shell
-pip install triton
-```
-
-Binary wheels are available for CPython 3.8-3.12 and PyPy 3.8-3.9.
->>>>>>> d6739d3c33dee481f2d4dee4f6ecd4123f671597
 
 Before using Intel® XPU Backend for Triton\* you need to initialize the toolchain.
 The default location is `/opt/intel/oneapi` (if installed as a `root` user) or `~/intel/oneapi` (if installed as a regular user).
 
 ```shell
-<<<<<<< HEAD
 # replace /opt/intel/oneapi with the actual location of PyTorch Prerequisites for Intel GPUs
 source /opt/intel/oneapi/setvars.sh
-=======
-pip install -U --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/Triton-Nightly/pypi/simple/ triton-nightly
->>>>>>> d6739d3c33dee481f2d4dee4f6ecd4123f671597
 ```
 
 # Install from source
 
-<<<<<<< HEAD
 ## Prerequisites
-=======
-```shell
-git clone https://github.com/triton-lang/triton.git;
-cd triton;
->>>>>>> d6739d3c33dee481f2d4dee4f6ecd4123f671597
 
 1. Latest [Rolling Release](https://dgpu-docs.intel.com/driver/installation-rolling.html) or [Long Term Support Release](https://dgpu-docs.intel.com/driver/installation.html) of GPU driver
 2. Latest release of [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html)
@@ -104,14 +72,9 @@ source /opt/intel/oneapi/setvars.sh
 Clone this repository:
 
 ```shell
-<<<<<<< HEAD
 git clone https://github.com/intel/intel-xpu-backend-for-triton.git
 cd intel-xpu-backend-for-triton
 ```
-=======
-git clone https://github.com/triton-lang/triton.git;
-cd triton;
->>>>>>> d6739d3c33dee481f2d4dee4f6ecd4123f671597
 
 To avoid potential conflicts with installed packages it is recommended to create and activate a new Python virtual environment:
 
@@ -242,7 +205,6 @@ For detailed instructions on how to debug Triton's frontend, please refer to thi
 
 # Usage Guide
 
-<<<<<<< HEAD
 ## Code Modifications
 Intel® XPU Backend for Triton\* requires a special version of PyTorch that can be built from sources or installed from nightly wheels.
 
@@ -346,14 +308,6 @@ Note that the user needs to explicitly set `TRITON_XPU_PROFILE=1` when the user
 ```Bash
 export TRITON_XPU_PROFILE=1
 ```
-=======
-Version 2.0 is out! New features include:
-
-- Many, many bug fixes
-- Performance improvements
-- Backend rewritten to use MLIR
-- Support for kernels that contain back-to-back matmuls (e.g., flash attention)
->>>>>>> d6739d3c33dee481f2d4dee4f6ecd4123f671597
 
 # Contributing
 
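The kept side above enables profiling through the `TRITON_XPU_PROFILE` environment variable. A hedged sketch of gating behavior on that variable from Python (the helper name is hypothetical; the backend's actual handling of the flag is internal to intel-xpu-backend-for-triton):

```python
import os

def xpu_profiling_enabled() -> bool:
    """True when TRITON_XPU_PROFILE is set to "1" (assumed convention from the README)."""
    return os.environ.get("TRITON_XPU_PROFILE") == "1"

# Equivalent of `export TRITON_XPU_PROFILE=1` for the current process only.
os.environ["TRITON_XPU_PROFILE"] = "1"
print(xpu_profiling_enabled())  # → True
```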
@@ -363,24 +317,10 @@ Community contributions are more than welcome, whether it be to fix bugs or to a
 
 _MIT License_. As found in [LICENSE](https://github.com/intel/intel-xpu-backend-for-triton/blob/main/LICENSE) file.
 
-<<<<<<< HEAD
 
 ## Security
 
 See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html)
 for information on how to report a potential security issue or vulnerability.
 
 See also: [Security Policy](security.md)
-=======
-# Compatibility
-
-Supported Platforms:
-
-- Linux
-
-Supported Hardware:
-
-- NVIDIA GPUs (Compute Capability 8.0+)
-- AMD GPUs (ROCm 5.2+)
-- Under development: CPUs
->>>>>>> d6739d3c33dee481f2d4dee4f6ecd4123f671597
