
Commit ea16cdc

Update readme 2 (#2303)
Fixes #1910.
1 parent 2c57711 commit ea16cdc

File tree

1 file changed: +11 −52 lines


README.md

Lines changed: 11 additions & 52 deletions
````diff
@@ -103,34 +103,17 @@ downloads a prebuilt LLVM, but you can also build LLVM from source and use that.
 LLVM does not have a stable API, so the Triton build will not work at an
 arbitrary LLVM version.
 
-1. Find the version of LLVM that Triton builds against. Check
-   `cmake/llvm-hash.txt` to see the current version. For example, if it says:
-       49af6502c6dcb4a7f7520178bd14df396f78240c
+1. Find the version of LLVM that Triton builds against.
+   Check `cmake/llvm-hash.txt` to see the current version.
 
-   This means that the version of Triton you have builds against
-   [LLVM](https://github.com/llvm/llvm-project) 49af6502.
+2. Checkout LLVM at this revision to the directory `llvm`,
+   which must be in the same directory as `intel-xpu-backend-for-triton`:
 
-2. `git checkout` LLVM at this revision. Optionally, make additional
-   modifications to LLVM.
+3. In the directory `intel-xpu-backend-for-triton`, build Triton with custom LLVM:
 
-3. [Build LLVM](https://llvm.org/docs/CMake.html). For example, you might run
-
-       $ cd $HOME/llvm-project  # your clone of LLVM.
-       $ mkdir build
-       $ cd build
-       $ cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=ON ../llvm -DLLVM_ENABLE_PROJECTS="mlir;llvm" -DLLVM_TARGETS_TO_BUILD="host;NVPTX;AMDGPU"
-       $ ninja
-
-4. Build Triton as above, but set the following environment variables.
-
-       # Modify as appropriate to point to your LLVM build.
-       $ export LLVM_BUILD_DIR=$HOME/llvm-project/build
-
-       $ cd <triton install>
-       $ LLVM_INCLUDE_DIRS=$LLVM_BUILD_DIR/include \
-         LLVM_LIBRARY_DIR=$LLVM_BUILD_DIR/lib \
-         LLVM_SYSPATH=$LLVM_BUILD_DIR \
-         pip install -e python
+   ```shell
+   ./scripts/compile-triton.sh --llvm --triton
+   ```
 
 # Tips for building
 
````
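The three new steps in the hunk above can be sketched end to end. This is a simulation in a throwaway directory so it runs anywhere: the parent directory, the hash value, and the commented-out commands are stand-ins for your real checkout and for whatever `cmake/llvm-hash.txt` actually pins.

```shell
# Simulated layout for the new build flow; nothing here touches a real checkout.
WORK=$(mktemp -d)
mkdir -p "$WORK/intel-xpu-backend-for-triton/cmake"
mkdir -p "$WORK/llvm"    # `llvm` must be a sibling of intel-xpu-backend-for-triton

# Stand-in for the pinned revision your checkout records.
echo "49af6502c6dcb4a7f7520178bd14df396f78240c" \
  > "$WORK/intel-xpu-backend-for-triton/cmake/llvm-hash.txt"

# Step 1: read the pinned LLVM revision.
LLVM_REV=$(cat "$WORK/intel-xpu-backend-for-triton/cmake/llvm-hash.txt")
echo "pinned LLVM revision: $LLVM_REV"

# Steps 2-3, shown but not run in this sketch:
# git -C "$WORK/llvm" checkout "$LLVM_REV"
# cd "$WORK/intel-xpu-backend-for-triton" && ./scripts/compile-triton.sh --llvm --triton
```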
````diff
@@ -223,11 +206,13 @@ For detailed instructions on how to debug Triton's frontend, please refer to this
 # Usage Guide
 
 ## Code Modifications
-Intel® XPU Backend for Triton\* doesn't require any modifications and will work with PyTorch 2.4 release out of the box.
+Intel® XPU Backend for Triton\* requires a special version of PyTorch that can be built from sources or installed from nightly wheels.
 
 1. Add `import torch` for xpu support.
 2. Put the tensor and models to XPU by calling `to('xpu')`.
 
+This repository contains modified [tutorials](python/tutorials) that must be used with Intel® XPU Backend for Triton\*.
+
 The following examples show modifications for the user code.
 
 ### Example 1 : Triton Kernel
````
````diff
@@ -285,11 +270,9 @@ print(
 )
 ```
 
-
 ### Example 2 : End-to-End Model
 Triton is transparent for end-to-end models. One can simply use `torch.compile` with `inductor` as the default backend; it will automatically generate Triton kernels and benefit from them.
 
-
 ```Python
 import torch
 from torch._dynamo.testing import rand_strided
````
````diff
@@ -314,10 +297,6 @@ optimized_mod = torch.compile(xpu_model)
 graph_result = optimized_mod(x)
 ```
 
-## More Examples on Tests
-If you wish to take a look at more examples, please refer to the [Unit Tests](docs/test_docs/unit_tests.md) and [End-to-End Benchmark Tests](docs/test_docs/end_to_end_tests.md).
-
-
 ## Performance Analysis Guide
 
 There are several ways of doing performance analysis. We recommend using `torch.profiler` for end-to-end performance analysis and using Intel® VTune™ Profiler for more detailed kernel analysis. We provide a comprehensive guide for those two:
````
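Beyond the two profilers, the README notes that Triton's own XPU kernel profiling is gated on an environment variable that must be set explicitly. A minimal sketch of that switch (the workload line is a placeholder, not a real script in this repository):

```shell
# TRITON_XPU_PROFILE must be set explicitly; it is not enabled by default.
export TRITON_XPU_PROFILE=1

if [ "$TRITON_XPU_PROFILE" = "1" ]; then
  echo "XPU kernel profiling enabled"
fi

# python my_workload.py    # placeholder: run your model under the profiler
```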
````diff
@@ -330,30 +309,10 @@ Note that the user needs to explicitly set `TRITON_XPU_PROFILE=1` when the user
 export TRITON_XPU_PROFILE=1
 ```
 
-# Changelog
-
-Version 2.2 is out! New features include:
-- Many, many bug fixes
-- Performance improvements for Intel GPU Max series
-- Support for kernels that contain back-to-back matmuls (e.g., flash attention)
-
 # Contributing
 
 Community contributions are more than welcome, whether it be to fix bugs or to add new features at [github](https://github.com/intel/intel-xpu-backend-for-triton). For more detailed instructions, please visit our [contributor's guide](CONTRIBUTING.md).
 
-
-# Compatibility
-
-Supported Platforms:
-* Linux
-* WSL2
-
-Supported Hardware:
-* NVIDIA GPUs (Compute Capability 7.0+)
-* AMD GPUs (ROCm 5.2+)
-* Intel GPU Max 1100/1550, Intel Flex, Intel Arc A770
-* Under development: CPUs
-
 ## License
 
 _MIT License_. As found in [LICENSE](https://github.com/intel/intel-xpu-backend-for-triton/blob/main/LICENSE) file.
````
