DDTracking is a deep learning-based pipeline for diffusion MRI tractography. It covers the complete workflow—from dataset preparation and model training to fiber tracking using pre-trained models.
Download and install Miniconda for Python 3 (choose 64-bit or 32-bit based on your system):
sh Miniconda3-latest-Linux-x86_64.sh -b # Automatically accepts license agreement
Activate the base environment:
source ~/miniconda3/bin/activate
You should see `(base)` appear in your terminal prompt.
conda env create -f requirements.yaml
conda activate DDTracking
Download the model weights from the GitHub release page:
cd DDTracking
wget https://github.com/yishengpoxiao/DDTracking/releases/download/v0.1/weights.tar.gz
tar -xzvf weights.tar.gz
This step prepares your diffusion data for subsequent analysis.
- Extract the desired b-shells. Use only single-shell data. Refer to the `extract_bshell` function in `utilize/extract_dwi_bshell.py`.
- Rigid registration to MNI space. The extracted diffusion data needs to be rigidly registered to MNI space; you can use `utilize/register_volume_to_MNI.py` for this step. This process requires ANTs and FSL. Registration is performed between the FA image computed from your DWI and the MNI FA template. A `transform` folder will be created in the DWI directory to store the transformation files. Example command:

  python register_volume_to_MNI.py \
      -input_image <dwi_path> \
      -brain_mask <brain_mask_path> \
      -template <MNI_FA_template_path> \
      -wm_mask <wm_mask_path>
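The repository's `extract_bshell` helper handles the b-shell extraction step above. As a rough illustration of the idea only (this is not the repo's code, and the function name `select_shell_indices` is hypothetical), single-shell selection amounts to keeping the b=0 reference volumes plus all volumes whose b-value lies near the target shell:

```python
# Hedged sketch: select single-shell volumes from a multi-shell DWI by b-value.
# The repo's actual helper is `extract_bshell` in utilize/extract_dwi_bshell.py;
# this standalone NumPy version only illustrates the selection logic.
import numpy as np

def select_shell_indices(bvals, target_b, tol=50):
    """Return indices of b=0 volumes plus volumes within `tol` of `target_b`."""
    bvals = np.asarray(bvals, dtype=float)
    keep_b0 = bvals < tol                        # keep b=0 reference volumes
    keep_shell = np.abs(bvals - target_b) < tol  # keep the chosen shell
    return np.where(keep_b0 | keep_shell)[0]

# Example: a multi-shell acquisition with b = 0, 1000, 2000 s/mm^2
bvals = [0, 1000, 1000, 2000, 2000, 0, 1000]
idx = select_shell_indices(bvals, target_b=1000)
print(idx.tolist())  # -> [0, 1, 2, 5, 6]
```

The selected indices would then be used to subset the 4D DWI volume and the corresponding bvec columns.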
To perform tractography using a pre-trained model:
python tractography.py --config track_config.yaml --track
Before running, make sure to modify track_config.yaml to match your input paths and desired parameters.
To train a model using your own dataset, structure your data as follows:
dataset/
├── sub-001/
│ ├── dwi/
│ │ ├── sub-001_dwi.nii.gz
│ │ ├── sub-001_dwi.bval
│ │ └── sub-001_dwi.bvec
│ ├── mask/
│ │ ├── sub-001_brain-mask.nii.gz
│ │ └── sub-001_wm_mask.nii.gz
│ └── merged_tract/
│ └── tract_whole_tract.trk
├── sub-002/
│ └── ...
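Before launching training, it can be worth checking that every subject folder matches this layout. The following is a minimal sketch (not part of DDTracking) whose required file names simply follow the example tree above; adjust the patterns to your own naming:

```python
# Hedged sketch: sanity-check subject folders against the dataset layout above.
# File name patterns are taken from the example tree; adapt them to your data.
from pathlib import Path

REQUIRED = [
    "dwi/{sub}_dwi.nii.gz",
    "dwi/{sub}_dwi.bval",
    "dwi/{sub}_dwi.bvec",
    "mask/{sub}_brain-mask.nii.gz",
    "mask/{sub}_wm_mask.nii.gz",
    "merged_tract/tract_whole_tract.trk",
]

def missing_files(dataset_dir):
    """Return {subject: [missing relative paths]} for incomplete subjects."""
    problems = {}
    for sub_dir in sorted(Path(dataset_dir).glob("sub-*")):
        missing = [rel.format(sub=sub_dir.name)
                   for rel in REQUIRED
                   if not (sub_dir / rel.format(sub=sub_dir.name)).exists()]
        if missing:
            problems[sub_dir.name] = missing
    return problems
```

An empty result means all subjects are complete; otherwise the dict lists which files each subject is missing.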
Then run distributed training:
CUDA_VISIBLE_DEVICES=0,1,... torchrun --nproc_per_node=<NUM_GPUS> tractography.py --config train_config.yaml --train
Replace <NUM_GPUS> with the number of available GPUs.
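For context, `torchrun` spawns one process per GPU and exports rank information through environment variables; DDTracking's `tractography.py` presumably reads these internally, but the sketch below (not the repo's code) shows how any torchrun-launched script typically discovers its own rank:

```python
# Hedged sketch: how a torchrun-launched process typically identifies itself.
# LOCAL_RANK and WORLD_SIZE are set by torchrun for each spawned process.
import os

def local_rank_and_world_size():
    """Read the per-process rank info that torchrun exports."""
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    return local_rank, world_size

rank, world = local_rank_and_world_size()
print(f"process {rank} of {world}")
```

Each process would then bind to GPU `local_rank` (e.g. via `torch.cuda.set_device`), which is why `CUDA_VISIBLE_DEVICES` must list at least `<NUM_GPUS>` devices.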
This work is supported by:
- National Key R&D Program of China (No. 2023YFE0118600)
- National Natural Science Foundation of China (No. 62371107)
If you find DDTracking useful in your research, please cite our paper:
DDTracking: A diffusion model-based deep generative framework with local-global spatiotemporal modeling for diffusion MRI tractography. Medical Image Analysis, 2026.
@article{LI2026103967,
title = {DDTracking: A diffusion model-based deep generative framework with local-global spatiotemporal modeling for diffusion MRI tractography},
journal = {Medical Image Analysis},
volume = {110},
pages = {103967},
year = {2026},
issn = {1361-8415},
doi = {10.1016/j.media.2026.103967},
url = {https://www.sciencedirect.com/science/article/pii/S1361841526000368},
author = {Yijie Li and Wei Zhang and Xi Zhu and Ye Wu and Yogesh Rathi and Lauren J. O'Donnell and Fan Zhang},
keywords = {Tractography, Deep learning, Diffusion model, Diffusion MRI}
}

For questions, suggestions, or contributions, please create an issue or pull request on GitHub.