Official Code Implementation for the paper "Adaptive Local Neighborhood-based Neural Networks for MR Image Reconstruction from Undersampled Data" (IEEE TCI 2024).
This repository contains the code for testing and reproducing results for the LONDN-MRI project.
The method focuses on Adaptive Local Neighborhood-based Neural Networks for MRI reconstruction. The codebase supports several experimental setups, including single-coil and multi-coil studies, with both global and local modeling strategies.
This implementation is based on and extends the MODL and BLIPS (Blind Primed Supervised Learning) frameworks.
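At a high level, LONDN adapts the reconstruction network to each test scan by training it on a small neighborhood of similar scans drawn from the training set, rather than relying only on a single globally trained model. The sketch below illustrates the neighborhood-selection idea under simplifying assumptions (Euclidean distance between initial reconstructions; `model`, `train_on`, and `train_subset` are hypothetical placeholders); see the paper and the multi_coil_LONDN/ code for the actual procedure.

```python
import torch

def select_local_neighborhood(test_recon, train_recons, k=30):
    """Pick the k training images most similar to the test scan's initial reconstruction.

    test_recon:   (C, H, W) tensor, e.g. a zero-filled or early reconstruction of the test scan
    train_recons: (N, C, H, W) tensor of reconstructions of the N training scans
    Returns the indices of the k nearest training scans (Euclidean distance here).
    """
    diffs = train_recons - test_recon.unsqueeze(0)        # (N, C, H, W) differences
    dists = diffs.flatten(1).norm(dim=1)                  # (N,) distances to the test scan
    return torch.topk(dists, k, largest=False).indices    # indices of the k smallest distances

# Hypothetical usage: adapt the network on the selected neighborhood only,
# then reconstruct the test scan with the locally trained weights.
# nbr_idx = select_local_neighborhood(test_recon, train_recons, k=30)
# local_model = train_on(model, train_subset(nbr_idx))   # train_on / train_subset: placeholders
# recon = local_model(test_measurements)
```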
The code is organized into the following study components (for benchmark comparison, we refer to the original implementation of MODL):
- Multi-Coil Global MODL
- Multi-Coil LONDN
The core implementation is located under the multi_coil_LONDN/ directory.
📦 LONDN_MRI
┣ 📂 data # Data directory
┣ 📂 models-new # Neural network architectures
┃ ┣ 📜 networks.py # Common network utilities
┃ ┗ 📜 Unet_model_fast_mri.py # Standard UNet for FastMRI
┗ 📂 multi_coil_LONDN-new # Main Local LONDN implementation
┣ 📂 prepare_trained_model # Saved model checkpoints (e.g., index101)
┣ 📂 generated_dataset # Generated image-space datasets and masks
┃ ┣ 📂 4acceleration_mask_random3 # Random masks for training (4x accel)
┃ ┣ 📂 4acceleration_mask_test2 # Fixed masks for testing (4x accel)
┃ ┣ 📂 four_fold_image_shape # Processed 2-channel complex images
┃ ┗ 📂 test_four_fold # Test set specific data
┣ 📜 prepare_data_from_kspace.py # Script to generate image-space dataset and masks from k-space
┣ 📜 local_network_dataset.py # Data loader for noisy local neighborhood case
┗ 📜 train_local_unet.py # Training/Testing script using UNet for local reconstruction
- Python 3.8+
- PyTorch > 1.7.0
- BART: Used for generating the initial dataset.
We provide an environment.yml file to easily configure the Conda environment with all dependencies (including PyTorch, Visdom, Dominate, etc.).
# 1. Create the environment
conda env create -f environment.yml
# 2. Activate the environment
conda activate londn-mri

To reproduce the results, please download the specific k-space datasets used in our experiments.
Dataset:
- fastMRI Dataset website (the download will take some time)
- Stanford 2D FSE website (or download our copy via Google Drive)

Setup:
We recommend downloading the fastMRI dataset first, as it is the primary dataset used to generate the results in our_notebook.ipynb.

Download Instructions:
- For fastMRI: Please visit the official website to obtain the license/agreement and then download the data.
- For Stanford 2D FSE: The full dataset is available on the official website. We also provide a partial dataset (subset) via Google Drive for quick testing.
⚠️ Once downloaded, unzip the files and place them into the project directory (e.g., inside a folder named data, or as specified in the notebook).

Step 1: Configure Path
Unzip the downloaded data to your local storage (e.g., /mnt/DataA/NEW_KSPACE). Open multi_coil_LONDN/prepare_data_from_kspace.py and update the Kspace_data_name variable:

# Inside prepare_data_from_kspace.py
Kspace_data_name = '/mnt/DataA/NEW_KSPACE'  # <--- Change this to your path
Step 2: Generate Dataset
Run the script to create the image-space data from the k-space inputs:

cd multi_coil_LONDN
python prepare_data_from_kspace.py

The results will be saved into multi_coil_LONDN/generated_dataset/.
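For orientation, the core operation in this step is standard multi-coil preprocessing: apply the undersampling mask to the k-space, take an inverse FFT per coil, combine the coils, and store the result as a 2-channel (real/imaginary) image. The snippet below is only a minimal sketch of that pipeline; the function name, the sens_maps input (e.g., sensitivity maps estimated with BART), and the array shapes are assumptions, and the actual script additionally handles mask generation and file I/O.

```python
import numpy as np

def kspace_to_two_channel(kspace, mask, sens_maps):
    """Minimal sketch: masked multi-coil k-space -> 2-channel (real/imag) image.

    kspace:    complex array of shape (num_coils, H, W)
    mask:      binary array of shape (H, W) marking sampled k-space locations
    sens_maps: complex coil sensitivity maps of shape (num_coils, H, W), e.g. from BART
    """
    masked = kspace * mask[None]                                     # undersample k-space
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(masked, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1))                                               # per-coil images
    combined = np.sum(np.conj(sens_maps) * coil_imgs, axis=0)        # coil combination
    return np.stack([combined.real, combined.imag], axis=0)          # 2-channel output
```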
To train the local model and test reconstruction from undersampled multi-coil k-space measurements:
python train_local_unet.py

We provide pre-trained model weights in the checkpoints/ directory (e.g., best_net_for_index101, last_net_for_index101, etc.). These checkpoints can be used to:
- Reproduce results reported in the paper.
- Fine-tune the model on your own datasets.
The results will be saved into multi_coil_LONDN/checkpoints/.
(Note: Ensure you are in the multi_coil_LONDN directory when running these scripts.)
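As a rough illustration of how one of the provided checkpoints could be loaded for evaluation or fine-tuning (the UNet class name, its constructor arguments, the import path, and the checkpoint format are all assumptions here; see models-new/Unet_model_fast_mri.py and train_local_unet.py for the actual interface):

```python
import torch

# Hypothetical import; see models-new/Unet_model_fast_mri.py for the real class name.
from Unet_model_fast_mri import UnetModel

# Constructor arguments follow the standard fastMRI UNet signature (an assumption).
model = UnetModel(in_chans=2, out_chans=2, chans=64, num_pool_layers=4, drop_prob=0.0)

# Load a pre-trained checkpoint (assumes the file stores a plain state_dict).
state = torch.load('checkpoints/best_net_for_index101', map_location='cpu')
model.load_state_dict(state)
model.eval()  # evaluation mode for testing; omit when fine-tuning
```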
If you find this code useful, please cite our paper:
@ARTICLE{10510040,
author={Liang, Shijun and Lahiri, Anish and Ravishankar, Saiprasad},
journal={IEEE Transactions on Computational Imaging},
title={Adaptive Local Neighborhood-Based Neural Networks for MR Image Reconstruction From Undersampled Data},
year={2024},
volume={10},
number={},
pages={1235-1249},
keywords={Image reconstruction;Magnetic resonance imaging;Training;Deep learning;Adaptation models;Time measurement;Optimization;Compressed sensing;deep learning;machine learning;magnetic resonance imaging;unrolling},
doi={10.1109/TCI.2024.3394770}}

For questions regarding the paper or code, please contact:
- Shijun Liang: liangs16@msu.edu
- Haijie Yuan: yuanhai1@msu.edu
- Prof. Saiprasad Ravishankar: ravisha3@msu.edu