Triton installation is a bit tricky: it requires either the NVIDIA or the AMD backend to be installed on the system, and the required backend can differ from version to version and depends on the hardware. For that reason this commit does not install Triton automatically. Instead, the `PYTORCH_INDEX` environment variable is introduced to replace the default CPU PyTorch nightly wheels with whatever index a user needs.

Set up the backend (Torch, MLIR, etc.). Note: unless `PYTORCH_INDEX` is set, the script installs CPU wheels:
```bash
source setup_backend.sh
```
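For example, a CUDA user might select PyTorch's CUDA nightly index before sourcing the script. The exact index URL depends on your hardware and driver stack; `cu121` below is only an illustration and is not prescribed by this repo:

```shell
# Illustrative only: pick CUDA 12.1 nightly wheels instead of the default CPU ones.
# Adjust the index URL to your hardware (e.g. a rocm index for AMD).
export PYTORCH_INDEX="https://download.pytorch.org/whl/nightly/cu121"
```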
If you already have a working venv for Torch-MLIR, you can just install FastAPI:

```bash
pip install fastapi uvicorn pytest httpx
```
To use custom builds of `torch-mlir-opt`, `mlir-opt`, etc. without placing them in your `$PATH`, configure the following environment variables:

`TORCH_MLIR_OPT_PATH`

`LLVM_BIN_PATH`

`TRITON_OPT_PATH`
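A minimal sketch of setting these, with entirely hypothetical build locations. Whether each variable points at a single binary or a `bin` directory is an assumption here; verify against the scripts in this repo:

```shell
# Hypothetical paths: replace with the locations of your own builds.
export TORCH_MLIR_OPT_PATH="$HOME/torch-mlir/build/bin/torch-mlir-opt"
export LLVM_BIN_PATH="$HOME/llvm-project/build/bin"
export TRITON_OPT_PATH="$HOME/triton/build/bin/triton-opt"
```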
### Run the application
If you used the `setup_backend.sh` script, activate the environment with:
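A sketch of what activation typically looks like, assuming the script created a standard virtualenv named `.venv` in the repository root; the actual name and location may differ, so check `setup_backend.sh`:

```shell
# Assumption: setup_backend.sh created a virtualenv at ./.venv.
source .venv/bin/activate
```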
For more details about IR lowering, please see [PyTorch Lowerings](docs/pytorch_
## Integration with your frontend or backend
Refer to the [Integration Guide](docs/integration_guide.md) for details on the API contracts and communication between the frontend and backend used in this project.