…tracing models through various IR stages and transformations.
## Known issues and limitations
- (PyTorch) The model and input tensor must be initialized in the provided code. If multiple models are defined, it is recommended to explicitly pair each model and its input tensor using the internal `__explore__(model, input)` function.
- (PyTorch) The current version does not recognize or capture user attempts to dump IR inside the input PyTorch module. It is planned that, in the future, if the user manually calls `fx.export_and_import()` (or similar IR-producing APIs), the app will use that IR as the base and apply the user-defined custom toolchain.
- (Triton) The current implementation runs Triton kernels and retrieves IR dumps from the Triton cache directory. The execution timeout is set to 20 seconds.
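The explicit model/input pairing described above can be illustrated with a small, dependency-free sketch. The `__explore__` hook is normally provided by the tool when it executes the user's code; a stand-in is defined here so the snippet runs on its own, and plain callables stand in for `torch.nn.Module` instances.

```python
# Hypothetical sketch of pairing each model with its input via the internal
# __explore__ hook. The hook definition below is a stand-in: in the real app
# it is injected into the user code's namespace, not defined by the user.
pairs = []  # (model, example_input) pairs recorded for tracing

def __explore__(model, example_input):
    # Stand-in for the tool-provided hook: record the explicit pairing.
    pairs.append((model, example_input))

model_a = lambda x: [v * 2 for v in x]   # placeholder "model" A
model_b = lambda x: [v + 1 for v in x]   # placeholder "model" B

# With several models defined, pair each one with its own input explicitly:
__explore__(model_a, [1.0, 2.0])
__explore__(model_b, [3.0])
```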
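The Triton flow above (run the kernel, then harvest IR dumps from the cache) could be sketched roughly as follows. This is an illustrative assumption, not the app's actual implementation: the cache location, the IR file extensions, and the `collect_ir_dumps` helper name are all hypothetical.

```python
# Hypothetical sketch: run the user's Triton kernel script with a 20 s
# timeout, then walk the Triton cache directory for IR artifacts.
# The cache path, file extensions, and function name are assumptions.
import os
import subprocess
import sys

DEFAULT_CACHE_DIR = os.path.expanduser("~/.triton/cache")  # common default

def collect_ir_dumps(user_script, cache_dir=DEFAULT_CACHE_DIR, timeout_s=20):
    """Run the script, then gather IR files Triton wrote into its cache."""
    subprocess.run([sys.executable, user_script], check=True, timeout=timeout_s)
    dumps = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            if name.endswith((".ttir", ".ttgir", ".llir", ".ptx")):
                dumps.append(os.path.join(root, name))
    return sorted(dumps)
```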
The current version is tested on Ubuntu 22.04 under the Windows Subsystem for Linux (WSL) using LLVM 21 dev.
### Install dependencies
If any prerequisites are missing, here are some scripts to help set them up.
Once you have a virtual environment suitable for `torch-mlir` work, install `fastapi`, `uvicorn`, and the other Python dependencies in it like this:
```bash
pip install fastapi uvicorn pytest httpx
```
Otherwise, here is the script to set up `torch`, `llvm`, and the rest:
```bash
source setup_backend.sh
```
If you want to use your own builds of tools such as `torch-mlir-opt` and `mlir-opt` without placing them in `PATH`, set the `TORCH_MLIR_OPT_PATH` and `LLVM_BIN_PATH` environment variables.
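For example, the variables might be exported like this. The exact semantics (binary path versus directory) come from the app itself; the paths below are illustrative assumptions only.

```bash
# Illustrative only: adjust the paths to your own build trees.
# Assumption: TORCH_MLIR_OPT_PATH points at the torch-mlir-opt binary and
# LLVM_BIN_PATH at the LLVM build's bin/ directory.
export TORCH_MLIR_OPT_PATH="$HOME/torch-mlir/build/bin/torch-mlir-opt"
export LLVM_BIN_PATH="$HOME/llvm-project/build/bin"
```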