Code for [CDMRI 2025] "Streamline Signature Net(SSN): Efficient White Matter Pathway Recognition for Bundles Parcellation Using Path Signature", Computational Diffusion MRI https://link.springer.com/chapter/10.1007/978-3-032-12837-9_4
All files have been uploaded, but comments and instructions are still under development.
This repository contains code under two licenses:
- The file `src/new/lars.py` is licensed under Apache-2.0, see `LICENSE.Apache-2.0`
- Files in `src/` except `src/new/lars.py` are licensed under BSD-3-Clause, see `LICENSE.BSD-3-Clause`
```shell
conda create --name SSN python=3.12
conda activate SSN
pip install git+https://github.com/SlicerDMRI/whitematteranalysis.git
pip install torch torchvision torchaudio
pip install argparse signatory h5py matplotlib scikit-learn==1.5.2
conda install -c conda-forge libstdcxx-ng=13
git clone https://github.com/RenchZhao/Streamline_Signature_Net.git
cd Streamline_Signature_Net
```
MRtrix3 (used below for tractography conversion) goes in a separate environment:

```shell
conda create -n pnlpipe3 python=3.6
conda install -c mrtrix3 mrtrix3
```
When using MRtrix3's tckconvert to generate VTK files from TCK tractography, the output coordinates are in LPS (Left-Posterior-Superior) coordinate system. However, the whitematteranalysis (wma) library reads these coordinates and labels them as RAS (Right-Anterior-Superior) without actual transformation.
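For reference, a minimal numpy sketch of what the missing LPS→RAS conversion does to a single point (the coordinate values here are purely illustrative; streamline arrays in this codebase have shape `(n_streamlines, n_points, 3)`):

```python
import numpy as np

# One streamline with one point, stored in LPS as written by tckconvert
lps = np.array([[[10.0, 20.0, 30.0]]])

# LPS -> RAS: negate the first two axes (L -> R, P -> A); the S axis is shared
ras = lps.copy()
ras[:, :, :2] *= -1

print(ras)  # [[[-10. -20.  30.]]]
```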
The original inference.py contains a "bug-that-works-by-accident":
```python
def RAS2LPS_transform(points):
    points[:,:,0] = -points[:,:,0]  # In-place modification!
    points[:,:,1] = -points[:,:,1]  # In-place modification!
    return points
```

This in-place modification accidentally converts LPS→RAS by the time `data.numpy()` is saved, making the output VTK files loadable by dipy. However, this is incorrect design because:
- It relies on unintended side effects (numpy view sharing)
- It modifies data during inference instead of preprocessing
- It breaks when using global index extraction (like in StreamlineBased migration)
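The side-effect point can be demonstrated with numpy alone: the function returns the same array object it received, so the caller's data is silently mutated (a minimal sketch, independent of the repository's inference code):

```python
import numpy as np

def RAS2LPS_transform(points):
    # In-place negation: mutates the caller's array, not just the return value
    points[:, :, 0] = -points[:, :, 0]
    points[:, :, 1] = -points[:, :, 1]
    return points

batch = np.ones((2, 4, 3), dtype=np.float32)
out = RAS2LPS_transform(batch)

print(out is batch)          # True: no copy was made
print(batch[0, 0].tolist())  # [-1.0, -1.0, 1.0]: original data silently changed
```

Because `torch.from_numpy` and `.numpy()` share the same memory buffer, the same mutation propagates between tensors and arrays, which is exactly how the "bug that works by accident" produced correctly flipped VTK output.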
Modify `InferVtkDataset` to apply the coordinate transformation during initialization:

```python
class InferVtkDataset(data.Dataset):
    def __init__(self, vtk_dataset, script_name='<inference>', normalization=False,
                 data_augmentation=False, transform=None):
        pd_tract = wma.io.read_polydata(vtk_dataset)
        # Apply transform during feature generation (LPS→RAS conversion)
        if transform:
            self.features = np.asarray(transform(gen_features(pd_tract)))
        else:
            self.features = np.asarray(gen_features(pd_tract))
        # ... rest of initialization
```

Then in inference calls:
- Pass the coordinate transform to `InferVtkDataset`
- Set `data_transform=None` in `model_inference()` to avoid in-place modification during inference
```python
# Define LPS→RAS transform
def LPS_to_RAS_transform(points):
    feat = points.copy()
    feat[:,:,0] = -points[:,:,0]  # L → R
    feat[:,:,1] = -points[:,:,1]  # P → A
    return feat

# Create dataset with transform
test_dataset = InferVtkDataset(input_vtk, transform=LPS_to_RAS_transform)

# Run inference without transform
test_predicted_lst, pred_time, streamline_di = model_inference(
    model, test_data_loader, streamline_di, label_names,
    script_name, logger, device, thresh,
    data_transform=None,  # No transform during inference!
    result_transform=result_transform
)
```

This approach:
- ✅ Properly separates preprocessing from inference
- ✅ Avoids in-place modification side effects
- ✅ Makes the coordinate system explicit and traceable
- ✅ Works correctly with global index extraction
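Two properties of the out-of-place transform are easy to verify: it leaves its input untouched, and applying it twice is the identity (so an accidental double application is detectable). A self-contained sanity check, redefining `LPS_to_RAS_transform` locally:

```python
import numpy as np

def LPS_to_RAS_transform(points):
    feat = points.copy()
    feat[:, :, 0] = -points[:, :, 0]  # L -> R
    feat[:, :, 1] = -points[:, :, 1]  # P -> A
    return feat

pts = np.random.default_rng(0).normal(size=(3, 5, 3))
original = pts.copy()

once = LPS_to_RAS_transform(pts)
twice = LPS_to_RAS_transform(once)

print(np.array_equal(pts, original))  # True: the input was not mutated
print(np.allclose(twice, pts))        # True: the flip is its own inverse
```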
If you see:

```
ImportError: /home/user/anaconda3/envs/SSN/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /home/user/anaconda3/envs/SSN/lib/python3.12/site-packages/signatory/_impl.cpython-312-x86_64-linux-gnu.so)
```

run:

```shell
conda install -c conda-forge libstdcxx-ng=13
```
The "cross_val_txt" variable in `new/gen_train_h5.py` contains the subjects of the 5-fold cross validation of the HCP dataset: https://zenodo.org/records/1285152

You can copy the contents below, without the quotation marks, into your own "5_fold.txt":

```
fold1 = ['992774', '991267', '987983', '984472', '983773', '979984', '978578', '965771', '965367', '959574', '958976', '957974', '951457', '932554', '930449', '922854', '917255', '912447', '910241', '907656', '904044']
fold2 = ['901442', '901139', '901038', '899885', '898176', '896879', '896778', '894673', '889579', '887373', '877269', '877168', '872764', '872158', '871964', '871762', '865363', '861456', '859671', '857263', '856766']
fold3 = ['849971', '845458', '837964', '837560', '833249', '833148', '826454', '826353', '816653', '814649', '802844', '792766', '792564', '789373', '786569', '784565', '782561', '779370', '771354', '770352', '765056']
fold4 = ['761957', '759869', '756055', '753251', '751348', '749361', '748662', '748258', '742549', '734045', '732243', '729557', '729254', '715647', '715041', '709551', '705341', '704238', '702133', '695768', '690152']
fold5 = ['687163', '685058', '683256', '680957', '679568', '677968', '673455', '672756', '665254', '654754', '645551', '644044', '638049', '627549', '623844', '622236', '620434', '613538', '601127', '599671', '599469']
```
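Since the file uses one Python-literal assignment per line, it can be parsed without `exec`. A hypothetical loader sketch (`load_folds` is not part of the repository; it assumes each `foldN = [...]` fits on a single line, as shown above):

```python
import ast
import os
import tempfile

def load_folds(path):
    """Parse lines of the form `foldN = ['subject', ...]` into a dict."""
    folds = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                name, _, value = line.partition("=")
                folds[name.strip()] = ast.literal_eval(value.strip())
    return folds

# Demo on a tiny file in the same format
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("fold1 = ['992774', '991267']\nfold2 = ['901442']\n")
    path = tmp.name

folds = load_folds(path)
print(folds["fold1"])  # ['992774', '991267']
os.remove(path)
```

`ast.literal_eval` only evaluates literals, so a malformed or malicious file raises an error instead of executing code.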
`new/get_label_dict.py` is used to generate the "label_di" variable in `new/gen_train_h5.py`.
```bibtex
@inproceedings{zhao_streamline_2026,
  address    = {Cham},
  title      = {Streamline {Signature} {Net} ({SSN}): {Efficient} {White} {Matter} {Pathway} {Recognition} for {Bundles} {Parcellation} {Using} {Path} {Signature}},
  isbn       = {978-3-032-12837-9},
  shorttitle = {Streamline {Signature} {Net} ({SSN})},
  doi        = {10.1007/978-3-032-12837-9_4},
  language   = {en},
  booktitle  = {Computational {Diffusion} {MRI}},
  publisher  = {Springer Nature Switzerland},
  author     = {Zhao, Renzhi and Zhang, Xin and Tan, Zihao and Xu, Jiakun and Yang, Zhenyu and Wu, Ye and Xu, Xiangmin},
  editor     = {Chamberland, Maxime and Chen, Yuqian and Filipiak, Patryk and Hendriks, Tom and Lv, Jinglei and Shailja, S. and Thompson, Elinor},
  year       = {2026},
  keywords   = {Diffusion MRI, Path Signature, Bundles Parcellation, Tractogram},
  pages      = {30--43},
}
```