Generic, queue-based, multi-GPU PyTorch inference pipeline.
- Block/patch-based volume processing with optional overlap
- Multi-threaded preparation & writing stages
- Multi-GPU inference workers
- Flexible blending/trim seam handling
- Queue & system monitoring hooks
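The staged, queue-based design above can be sketched as follows. This is a minimal illustration, not the package's actual implementation: it uses plain Python threads, unbounded-order FIFO queues, and a dummy callable standing in for GPU inference; all names (`prepare`, `infer`, `write`, `run_pipeline`) are hypothetical.

```python
import queue
import threading

SENTINEL = object()  # marks end-of-stream for downstream stages

def prepare(blocks, q_in):
    # Preparation stage: enqueue blocks (real code would crop/normalize here).
    for block in blocks:
        q_in.put(block)
    q_in.put(SENTINEL)

def infer(q_in, q_out, model):
    # Inference stage: pull prepared blocks, run the model, pass results on.
    # In the real pipeline this runs once per GPU worker.
    while True:
        item = q_in.get()
        if item is SENTINEL:
            q_out.put(SENTINEL)
            break
        q_out.put(model(item))

def write(q_out, results):
    # Writing stage: collect results (real code would blend/stitch blocks
    # back into the output volume, handling overlap seams).
    while True:
        item = q_out.get()
        if item is SENTINEL:
            break
        results.append(item)

def run_pipeline(blocks, model):
    # Bounded queues provide backpressure between stages.
    q_in, q_out = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
    results = []
    threads = [
        threading.Thread(target=prepare, args=(blocks, q_in)),
        threading.Thread(target=infer, args=(q_in, q_out, model)),
        threading.Thread(target=write, args=(q_out, results)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# A doubling lambda stands in for GPU inference on each block.
print(run_pipeline([1, 2, 3], lambda x: x * 2))  # → [2, 4, 6]
```

With a single inference worker and FIFO queues, block order is preserved; with multiple GPU workers, results arrive out of order and the writer must reassemble them by block index.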
To use the software, run in the root directory:

```shell
pip install -e .
```

To develop the code, run:

```shell
pip install -e . --group dev
```

Note: the `--group` flag is only available in pip versions >= 25.1.
Alternatively, if using uv, run:

```shell
uv sync
```