# CineMA: A Foundation Model for Cine Cardiac MRI 🎥🫀

<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="logo_dark.svg">
<source media="(prefers-color-scheme: light)" srcset="logo_light.svg">
<img alt="CineMA logo" src="logo_light.svg" height="256">
</picture>

![python](https://img.shields.io/badge/Python-3.11-3776AB.svg?style=flat&logo=python&logoColor=white)
![pytorch](https://img.shields.io/badge/PyTorch-EE4C2C?style=flat&logo=pytorch&logoColor=white)
![Pre-commit](https://github.com/mathpluscode/CineMA/actions/workflows/pre-commit.yml/badge.svg)
![Pytest](https://github.com/mathpluscode/CineMA/actions/workflows/pytest.yml/badge.svg)
[![codecov](https://codecov.io/gh/mathpluscode/CineMA/graph/badge.svg?token=MZVAOAWUPV)](https://codecov.io/gh/mathpluscode/CineMA)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

</div>

## 📝 Overview

**CineMA** is a vision foundation model for **Cine** cardiac magnetic resonance (CMR) imaging, built on a
**M**asked **A**utoencoder. Pre-trained on the extensive UK Biobank dataset, CineMA has been fine-tuned for various
clinically relevant tasks:

- 🫀 Ventricle and myocardium segmentation
- 📊 Ejection fraction (EF) regression
- 🏥 Cardiovascular disease (CVD) detection and classification
- 📍 Mid-valve plane and apical landmark localization

The model has demonstrated improved or comparable performance against convolutional neural network baselines (UNet,
ResNet) across multiple datasets, including [ACDC](https://www.creatis.insa-lyon.fr/Challenge/acdc/),
[M&Ms](https://www.ub.edu/mnms/), [M&Ms2](https://www.ub.edu/mnms-2/),
[Kaggle](https://www.kaggle.com/c/second-annual-data-science-bowl/data),
[Rescan](https://www.ahajournals.org/doi/full/10.1161/CIRCIMAGING.119.009214), and
[Landmark](https://pubs.rsna.org/doi/10.1148/ryai.2021200197).

👉 Check out our [interactive demos](https://huggingface.co/spaces/mathpluscode/CineMA) on Hugging Face to see CineMA
in action!

## 🚀 Getting Started

### Installation

#### Option 1: Quick Install with pip

```bash
pip install git+https://github.com/mathpluscode/CineMA
```

> Note: This method does not install dependencies automatically.

#### Option 2: Full Installation with Dependencies

```bash
git clone https://github.com/mathpluscode/CineMA.git
cd CineMA
# create the conda environment from the environment file provided in the repository, then activate it
conda activate cinema
pip install -e .
```

> ⚠️ **Important**: Install [PyTorch](https://pytorch.org/get-started/locally/) separately following the official
> instructions.

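Once PyTorch is installed, a quick sanity check (standard PyTorch calls, nothing CineMA-specific) confirms the
environment is ready:

```python
# Verify the PyTorch installation and whether a GPU is visible.
import torch

print(torch.__version__)
print(torch.cuda.is_available())
```
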
### 🎯 Using Fine-tuned Models

All fine-tuned models are available on [HuggingFace](https://huggingface.co/mathpluscode/CineMA). Try them out using
our example inference scripts:

```bash
# Segmentation
python examples/inference/segmentation_sax.py
python examples/inference/segmentation_lax_4c.py

# Classification
python examples/inference/classification_cvd.py
python examples/inference/classification_sex.py
python examples/inference/classification_vendor.py

# Regression
python examples/inference/regression_ef.py
python examples/inference/regression_bmi.py
python examples/inference/regression_age.py

# Landmark Detection
python examples/inference/landmark_heatmap.py
python examples/inference/landmark_coordinate.py
```

Available tasks and models are listed below.

| Task                                            | Input View       | Input Timeframes | Inference Script                                                            |
| ----------------------------------------------- | ---------------- | ---------------- | --------------------------------------------------------------------------- |
| Ventricle and myocardium segmentation           | SAX              | 1                | [segmentation_sax.py](cinema/examples/inference/segmentation_sax.py)        |
| Ventricle and myocardium segmentation           | LAX 4C           | 1                | [segmentation_lax_4c.py](cinema/examples/inference/segmentation_lax_4c.py)  |
| Landmark localization by heatmap regression     | LAX 2C or LAX 4C | 1                | [landmark_heatmap.py](cinema/examples/inference/landmark_heatmap.py)        |
| Landmark localization by coordinates regression | LAX 2C or LAX 4C | 1                | [landmark_coordinate.py](cinema/examples/inference/landmark_coordinate.py)  |

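If you prefer to load checkpoints yourself, they can be fetched directly from the Hugging Face Hub. A minimal sketch
using `huggingface_hub` and `torch` is below; the checkpoint filename is hypothetical, so browse the
[model repository](https://huggingface.co/mathpluscode/CineMA) for the actual file names.

```python
# Hypothetical sketch: download a fine-tuned checkpoint and inspect its weights.
import torch
from huggingface_hub import hf_hub_download

# The filename below is illustrative only; check the Hub repository for real paths.
ckpt_path = hf_hub_download(repo_id="mathpluscode/CineMA", filename="finetuned/segmentation_sax.pt")
state_dict = torch.load(ckpt_path, map_location="cpu")
print(list(state_dict)[:5])  # peek at the first few parameter names
```
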
### 🔧 Using Pre-trained Models

The pre-trained CineMA backbone is available on [HuggingFace](https://huggingface.co/mathpluscode/CineMA). Fine-tune it
using our example scripts and the [preprocessed ACDC dataset](https://huggingface.co/datasets/mathpluscode/ACDC) for the
following tasks:

| Task                                  | Fine-tuning Script                                            |
| ------------------------------------- | ------------------------------------------------------------- |
| Ventricle and myocardium segmentation | [segmentation.py](cinema/examples/train/segmentation.py)      |
| Cardiovascular disease classification | [classification.py](cinema/examples/train/classification.py)  |
| Ejection fraction regression          | [regression.py](cinema/examples/train/regression.py)          |

The corresponding commands are:

```bash
# Fine-tuning Scripts
python examples/train/segmentation.py
python examples/train/classification.py
python examples/train/regression.py
```

You can also explore the MAE's reconstruction behaviour or extract features using the following example scripts.

```bash
# MAE Examples
python examples/inference/mae.py
python examples/inference/mae_feature_extraction.py
```
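
As a toy illustration of what feature extraction produces (plain PyTorch, not the CineMA API), patch-token features
from an encoder can be mean-pooled into a single embedding per image:

```python
# Toy illustration: pool patch tokens into one feature vector per image.
import torch

tokens = torch.randn(2, 196, 768)  # (batch, num_patches, embed_dim) from an encoder
embedding = tokens.mean(dim=1)     # one 768-dimensional embedding per image
print(embedding.shape)             # torch.Size([2, 768])
```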

### 📚 Dataset Support

CineMA supports multiple datasets. For optimal integration, store data under `~/.cache/cinema_datasets` (the path used
by the integration tests) or set the `CINEMA_DATA_DIR` environment variable.

| Dataset   | Documentation                                |
| --------- | -------------------------------------------- |
| ACDC      | [README.md](cinema/data/acdc/README.md)      |
| M&Ms      | [README.md](cinema/data/mnms/README.md)      |
| M&Ms2     | [README.md](cinema/data/mnms2/README.md)     |
| Kaggle    | [README.md](cinema/data/kaggle/README.md)    |
| Rescan    | [README.md](cinema/data/rescan/README.md)    |
| Landmark  | [README.md](cinema/data/landmark/README.md)  |
| EMIDEC    | [README.md](cinema/data/emidec/README.md)    |
| Myops2020 | [README.md](cinema/data/myops2020/README.md) |
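
The integration tests resolve dataset locations from the default cache directory unless `CINEMA_DATA_DIR` is set. A
minimal sketch of that lookup, using the preprocessed M&Ms data (stored under `~/.cache/cinema_datasets/mnms/processed`
by default) as an example:

```python
# Sketch of the expected dataset layout, assuming the default cache location.
import os
from pathlib import Path

root = Path(os.environ.get("CINEMA_DATA_DIR", str(Path.home() / ".cache" / "cinema_datasets")))
mnms_dir = root / "mnms" / "processed"  # preprocessed M&Ms data
print(mnms_dir)
```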

The code for training and evaluating models on these datasets is also available:

| Task                                            | Dataset  | Documentation                                                                     |
| ----------------------------------------------- | -------- | --------------------------------------------------------------------------------- |
| Landmark localization by heatmap regression     | Landmark | [cinema/segmentation/landmark/README.md](cinema/segmentation/landmark/README.md) |
| Landmark localization by coordinates regression | Landmark | [cinema/regression/landmark/README.md](cinema/regression/landmark/README.md)     |

### 🏗️ Training Your Own Foundation Model

Start with our simplified pretraining script:

```bash
python examples/train/pretrain.py
```

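At the heart of MAE pretraining is random masking of patch tokens: the encoder sees only a small visible subset, and
the decoder reconstructs the rest. A toy illustration in plain PyTorch (not the CineMA API), using the typical 75%
mask ratio:

```python
# Toy illustration of MAE-style random masking on patch tokens.
import torch

tokens = torch.randn(2, 196, 768)                 # (batch, num_patches, embed_dim)
mask_ratio = 0.75                                 # typical MAE setting
n_keep = int(tokens.shape[1] * (1 - mask_ratio))  # tokens left visible to the encoder
noise = torch.rand(tokens.shape[0], tokens.shape[1])
ids_keep = noise.argsort(dim=1)[:, :n_keep]       # a random token subset per sample
visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
print(visible.shape)                              # torch.Size([2, 49, 768])
```
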
For distributed (DDP) training support, check [cinema/mae/pretrain.py](cinema/mae/pretrain.py). For UK Biobank data
preprocessing, see [examples/dicom_to_nifti.py](cinema/examples/dicom_to_nifti.py).

## 📖 References

CineMA builds upon these open-source projects:

- [UK Biobank Cardiac Preprocessing](https://github.com/baiwenjia/ukbb_cardiac)
- [Masked Autoencoders](https://github.com/facebookresearch/mae)
- [PyTorch Vision](https://github.com/pytorch/vision)
- [PyTorch Image Models](https://github.com/huggingface/pytorch-image-models)

## 🤝 Contributing

We welcome contributions! Please [create an issue](https://github.com/mathpluscode/CineMA/issues/new) for questions or
suggestions.

## 📧 Contact

For collaborations, reach out to Yunguan Fu ([email protected]).

## 📄 Citation

[Citation information to be added]