01. Installation
This guide covers all installation scenarios for TritonParse, from basic usage to full development setup.
- Option 1: Basic Installation - for users who only need to generate traces and use the web interface
- Option 2: Website Development - for contributors working on the web interface
- Option 3: Python Development - for core contributors working on Python code
- Python >= 3.10
- Operating System: Linux, macOS, or Windows (WSL recommended)
- CUDA (for GPU tracing): Compatible NVIDIA GPU with CUDA support
- Node.js >= 18.0.0 (for website development only)
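Before installing, it can be worth confirming the Python prerequisite programmatically. A minimal sketch using only the standard library (the helper name is illustrative):

```python
import sys

# TritonParse requires Python >= 3.10 (see the prerequisites above).
def meets_python_requirement(version_info=sys.version_info, required=(3, 10)):
    """Return True if the running interpreter satisfies the requirement."""
    return tuple(version_info[:2]) >= required

if meets_python_requirement():
    print("Python prerequisite satisfied")
else:
    print("Python too old: need >= 3.10")
```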
Important: You need Triton > 3.3.1 or compiled from source.
# Install Triton from source (required)
git clone https://github.com/triton-lang/triton.git
cd triton
pip install -e .
For detailed Triton installation instructions, see the official Triton documentation.
Option 1: Basic Installation
Perfect for users who want to generate traces and use the web interface.
git clone https://github.com/pytorch-labs/tritonparse.git
cd tritonparse
# Install in development mode
pip install -e .
# Or install with test dependencies
pip install -e ".[test]"
# Test with the included example
cd tests
TORCHINDUCTOR_FX_GRAPH_CACHE=0 python test_add.py
Expected output:
Triton kernel executed successfully
Torch compiled function executed successfully
tritonparse log file list: /tmp/tmp1gan7zky/log_file_list.json
INFO:tritonparse:Copying parsed logs from /tmp/tmp1gan7zky to /scratch/findhao/tritonparse/tests/parsed_output
================================================================================
📁 TRITONPARSE PARSING RESULTS
================================================================================
📂 Parsed files directory: /scratch/findhao/tritonparse/tests/parsed_output
📊 Total files generated: 2
📄 Generated files:
--------------------------------------------------
1. 📝 dedicated_log_triton_trace_findhao__mapped.ndjson.gz (7.2KB)
2. 📝 log_file_list.json (181B)
================================================================================
✅ Parsing completed successfully!
================================================================================
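Each `*_mapped.ndjson.gz` trace is gzip-compressed newline-delimited JSON, so it can also be inspected directly from Python. A minimal sketch that round-trips the format (the field names below are illustrative, not the actual trace schema):

```python
import gzip
import json
import os
import tempfile

# Illustrative records in the same NDJSON.gz layout as a trace file.
sample = [
    {"event": "compilation", "kernel": "add_kernel"},
    {"event": "launch", "kernel": "add_kernel"},
]

path = os.path.join(tempfile.mkdtemp(), "demo_trace.ndjson.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    for record in sample:
        f.write(json.dumps(record) + "\n")

# Reading: one JSON object per line, gzip-decompressed on the fly.
with gzip.open(path, "rt", encoding="utf-8") as f:
    events = [json.loads(line) for line in f if line.strip()]

print(f"{len(events)} events, first: {events[0]['event']}")
```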
- Generate trace files using the Python API
- Visit https://pytorch-labs.github.io/tritonparse/
- Load your trace files (.ndjson or .gz format)
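The trace-generation step above can be sketched as a small wrapper. This is a sketch, not the definitive API: `init` and `unified_parse` are assumed entry points based on the modules this guide imports elsewhere, so check the API of your installed version:

```python
def trace_and_parse(workload, log_dir="./logs", out_dir="./parsed_output"):
    """Sketch of the TritonParse tracing workflow.

    NOTE: `init` and `unified_parse` are assumed entry points; verify
    against the installed tritonparse version before relying on them.
    """
    import tritonparse.structured_logging
    import tritonparse.utils

    # Start structured logging before any Triton kernels compile/run.
    tritonparse.structured_logging.init(log_dir)
    workload()  # e.g. a torch.compile'd function or a raw Triton kernel
    # Convert raw logs into the .ndjson.gz files the web interface loads.
    tritonparse.utils.unified_parse(source=log_dir, out=out_dir)

print(callable(trace_and_parse))  # prints True
```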
Option 2: Website Development
For contributors working on the React-based web interface.
- Node.js >= 18.0.0
- npm (comes with Node.js)
Follow Option 1 first.
cd website
npm install
npm run dev
Access the development server at http://localhost:5173
# Development server
npm run dev
# Production build
npm run build
# Standalone HTML build (single file)
npm run build:single
# Linting
npm run lint
# Preview production build
npm run preview
Option 3: Python Development
For core contributors working on Python code, including formatting and testing.
Follow Option 1 first.
# Install all development dependencies including formatting tools
make install-dev
This installs:
- black - Code formatting
- usort - Import sorting
- ruff - Linting
# Check code formatting
make format-check
# Run linting
make lint-check
# Run tests
python -m unittest tests.test_tritonparse -v
cd website
npm install
npm run dev
# Format code
make format
# Check formatting
make format-check
# Run linting
make lint-check
# Run tests
python -m unittest tests.test_tritonparse -v
# Run specific test
python -m unittest tests.test_tritonparse.TestTritonparseCUDA.test_whole_workflow -v
cd website
# Development server
npm run dev
# Build for production
npm run build
# Lint frontend code
npm run lint
# Error: "No module named 'triton'"
# Solution: Install Triton from source
git clone https://github.com/triton-lang/triton.git
cd triton
pip install -e .
# Error: "CUDA not available"
# Check CUDA installation
python -c "import torch; print(torch.cuda.is_available())"
# If False, install CUDA-enabled PyTorch
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
# Error: Permission denied
# Use virtual environment
python -m venv tritonparse-env
source tritonparse-env/bin/activate # Linux/Mac
# or
tritonparse-env\Scripts\activate # Windows
# Error: "black not found"
# Reinstall development dependencies
make install-dev
# Or install manually
pip install black usort ruff
# Error: "Node.js version too old"
# Update Node.js to >= 18.0.0
nvm install 18
nvm use 18
# Clear cache and reinstall
rm -rf node_modules package-lock.json
npm install
Set these for development:
# Enable debug logging
export TRITONPARSE_DEBUG=1
# Enable NDJSON output (default)
export TRITONPARSE_NDJSON=1
# Enable gzip compression
export TRITON_TRACE_GZIP=1
# Custom trace directory
export TRITON_TRACE=/path/to/traces
# Disable FX graph cache (for testing)
export TORCHINDUCTOR_FX_GRAPH_CACHE=0
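For example, a debug run of the included test from Option 1 might combine several of these variables (a sketch; paths are relative to the repository root):

```shell
# Enable debug logging and NDJSON output, then run the example test.
export TRITONPARSE_DEBUG=1
export TRITONPARSE_NDJSON=1
cd tests
TORCHINDUCTOR_FX_GRAPH_CACHE=0 python test_add.py
```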
If you encounter issues:
- Check the Troubleshooting Guide for common solutions
- Review the FAQ for frequently asked questions
- Search GitHub Issues for existing solutions
- Open a new issue with:
  - Your system information (python --version, pip list)
  - Complete error messages
  - Steps to reproduce the issue
After installation, verify everything works:
import tritonparse.structured_logging
import tritonparse.utils
# Should not raise any errors
print("TritonParse installed successfully!")
cd tests
TORCHINDUCTOR_FX_GRAPH_CACHE=0 python test_add.py
- Load the example trace file from tests/example_output/
- Visit https://pytorch-labs.github.io/tritonparse/
- Upload and visualize the trace
# Should all pass without errors
make format-check
make lint-check
python -m unittest tests.test_tritonparse.TestTritonparseCPU -v
After successful installation:
- Read the Usage Guide to learn how to generate traces
- Explore the Web Interface Guide to master the visualization
- Check out Basic Examples for practical usage scenarios
- Join the GitHub Discussions for community support