An interactive web-based tool for exploring intermediate representations (IRs) of PyTorch and Triton models. Designed to help developers, researchers, and students visualize and understand compilation pipelines by tracing models through various IR stages and transformations.
- Live editing of PyTorch models, Triton kernels, and raw IR input
- Pre-defined lowering IR support:
  - TorchScript Graph IR
  - Torch MLIR (and TOSA, Linalg, StableHLO dialects)
  - LLVM MLIR and LLVM IR
  - Triton IRs (TTIR, TTGIR, LLVM IR, NVPTX)
- Customizable compiler pipelines with toolchain steps like:
  `torch-mlir-opt`, `mlir-opt`, `mlir-translate`, `opt`, `llc`, or any external tool via `$PATH`
- Visual pipeline builder to control and inspect transformation flow
- IR viewer with syntax highlighting
- Side-by-side IR windows
- "Print after all opts" toggle to inspect intermediate outputs
- (PyTorch) The model and input tensor must be initialized in the provided code. If multiple models are defined, it is recommended to explicitly pair each model with its input tensor using the internal `__explore__(model, input)` function (see the sketch after this list).
- (Triton) The current implementation runs Triton kernels and retrieves IR dumps from the Triton cache directory. The timeout is set to 20 seconds.
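For example, a minimal PyTorch snippet for the editor might look like the sketch below. The model class and shapes are illustrative, and `__explore__` is assumed to be injected by PytorchExplorer at runtime, so the snippet is not meant to run standalone:

```python
import torch
from torch import nn

# Illustrative model; any nn.Module with a forward method works
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)

model = TinyNet()
example_input = torch.randn(1, 4)

# Explicitly pair the model with its input (useful when several models
# are defined); __explore__ is provided by the tool, not by PyTorch
__explore__(model, example_input)
```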
- Python 3.11+
- Node.js + npm
- PyTorch
- Torch-MLIR
- LLVM with mlir-opt
- Triton
- graphviz - needed if you want PytorchExplorer to render the CFG extracted from LLVM IR as a PDF.
To set up PyTorch and Torch-MLIR, it is a good idea to visit the https://github.com/llvm/torch-mlir repository and follow the instructions there.
The current version of the application has been tested on Ubuntu 22.04 under the Windows Subsystem for Linux, using an LLVM 22 development build.
Triton requires PyTorch compiled with CUDA or ROCm support. When installing PyTorch, pick the desired accelerator build. For example, to install a CUDA 12.8 wheel you can run the following (note: this is not included in the scripts and Dockerfiles; it is known to work with a Blackwell GPU):
```
pip install --pre torch torchvision --extra-index-url https://download.pytorch.org/whl/cu128
```
Clone the repository:
```
git clone https://github.com/MrSidims/PytorchExplorer.git
cd PytorchExplorer
```
To use custom builds of torch-mlir-opt, mlir-opt, etc. without placing them in your `$PATH`, configure the following environment variables:
- `TORCH_MLIR_OPT_PATH`
- `LLVM_BIN_PATH`
- `TRITON_OPT_PATH`
- `PYTORCH_INDEX` – index URL for installing PyTorch. Defaults to nightly CPU wheels.
For example, to install CUDA-enabled nightly wheels (CUDA 12.8):
```
PYTORCH_INDEX=https://download.pytorch.org/whl/nightly/cu128 \
source setup_backend.sh
```
Install frontend dependencies:
```
source setup_frontend.sh
```
Set up the backend (Torch, MLIR, etc.; note that unless PYTORCH_INDEX is set, the script will install CPU wheels):
```
source setup_backend.sh
```
If you already have a working venv for Torch-MLIR, you can just install FastAPI and the testing dependencies:
```
pip install fastapi uvicorn pytest httpx PyPDF2
```
If you used the setup_backend.sh script, activate the environment with:
```
source mlir_venv/bin/activate
```
Run in development mode:
```
npm run dev:all
```
Or build and start in production mode:
```
npm run build
npm run start:all
```
Then open http://localhost:3000/ in your browser and enjoy!
Start the backend on the machine that has all compiler tools installed:
```
npm run start:api   # or npm run dev:api for development
```
On the machine running the UI, point the frontend to that backend via the `NEXT_PUBLIC_BACKEND_URL` environment variable and start only the UI part:
```
export NEXT_PUBLIC_BACKEND_URL=http://<backend-host>:8000
npm run dev:ui   # or npm run start:ui after `npm run build`
```
Build the single image (change APP_ENV between development/production; the default is production):
```
docker build -t pytorch_explorer --build-arg APP_ENV=development .
```
Alternatively, build dedicated images for the UI and API:
```
docker build -f Dockerfile.backend -t pytorch_explorer_backend .
docker build -f Dockerfile.frontend -t pytorch_explorer_frontend .
```
Run the container in production mode:
```
docker run -p 3000:3000 -p 8000:8000 pytorch_explorer
```
To run in development mode:
```
docker run -it --rm \
  -e NODE_ENV=development \
  -p 3000:3000 -p 8000:8000 \
  pytorch_explorer
```
To run the UI and API in separate containers using docker compose:
```
docker compose build
docker compose up
```
Secure run (for cases when you don't trust the samples being tested):
```
podman run --rm -it \
  --read-only \
  --cap-drop=ALL \
  --security-opt=no-new-privileges \
  --tmpfs /app/.next:rw,size=256m \
  -v stored_sessions:/app/StoredSessions:rw \
  -p 8000:8000 -p 3000:3000 \
  -e NODE_ENV=production \
  pytorch_explorer
```
With the backend running, you can execute the Python tests. Point them at the backend via the optional API_URL environment variable if it isn't on localhost:8000:
```
API_URL=http://<backend-host>:8000 pytest tests -v
```
The interface features a code editor on the left and one or more IR windows on the right.
- Choose PyTorch, Triton or Raw IR from the language selector above the editor and enter your code. Use Add Source to work with multiple snippets at once; an example Triton kernel is sketched after this list.
- Each window on the right picks a target IR from its drop‑down. Create extra windows via Add IR Window and switch between vertical or horizontal layout using the layout selector.
- Click Add Pass inside a window to build a custom pipeline with tools such as `torch-mlir-opt`, `mlir-opt`, `mlir-translate`, `opt`, `llc`, or a user-specified tool. Toggle Print IR after opts to see intermediate IR, and use the magnifying glass button to inspect a single stage.
- Press Generate IR on All Windows to compile the active source and fill each window with the resulting IR. Windows can be collapsed or closed individually.
- Hit Store Session to save your work. The backend returns a short ID which can be appended to the URL (e.g. /abc123) to reload the same session later.
- It is possible to render the CFG of LLVM IR as a PDF: just call the standard LLVM `opt --passes=dot-cfg`, and the CFG will be rendered in the output window.
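For example, when Triton is selected, a minimal kernel like the sketch below could be pasted into the editor. This is a hypothetical vector-add, assuming a CUDA-capable device; since the tool runs the kernel to populate the Triton cache, the snippet includes a launch:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

# Launch once so the IR dumps (TTIR, TTGIR, ...) land in the Triton cache directory
n = 1024
x = torch.randn(n, device="cuda")
y = torch.randn(n, device="cuda")
out = torch.empty_like(x)
add_kernel[(triton.cdiv(n, 256),)](x, y, out, n, BLOCK=256)
```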
The app uses fx.export_and_import under the hood to inspect IR output for PyTorch; therefore, for the pre-defined lowering paths, a module must define a forward method.
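As a sketch of what that entry point expects (illustrative only; the import location of OutputType varies across torch-mlir versions):

```python
import torch
from torch import nn
from torch_mlir import fx
from torch_mlir.compiler_utils import OutputType  # location may differ by torch-mlir version

class MulAdd(nn.Module):
    # A forward method is required for the pre-defined lowering paths
    def forward(self, x):
        return x * 2.0 + 1.0

module = fx.export_and_import(MulAdd(), torch.randn(4),
                              output_type=OutputType.LINALG_ON_TENSORS)
print(module)  # linalg-on-tensors MLIR, the input to the mlir-opt pipeline below
```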
Lowering to LLVM IR goes through:
```
module = fx.export_and_import(model, example_input, output_type=OutputType.LINALG_ON_TENSORS)
mlir-opt --one-shot-bufferize="bufferize-function-boundaries" \
         -convert-linalg-to-loops \
         -convert-scf-to-cf \
         -convert-cf-to-llvm \
         -lower-affine \
         -finalize-memref-to-llvm \
         -convert-math-to-llvm \
         -convert-func-to-llvm \
         -reconcile-unrealized-casts \
         str(module) -o output.mlir
mlir-translate --mlir-to-llvmir output.mlir
```
For more details about IR lowering, please see PyTorch Lowerings.
Refer to the Integration Guide for details on the API contracts and communication between the frontend and backend used in this project.