<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="logo_dark.svg">
<source media="(prefers-color-scheme: light)" srcset="logo_light.svg">
<img alt="CineMA logo" src="logo_light.svg" height="256">
</picture>

![python](https://img.shields.io/badge/Python-3.11-3776AB.svg?style=flat&logo=python&logoColor=white)
![pytorch](https://img.shields.io/badge/PyTorch-EE4C2C?style=flat&logo=pytorch&logoColor=white)
[![codecov](https://codecov.io/gh/mathpluscode/CineMA/graph/badge.svg?token=MZVAOAWUPV)](https://codecov.io/gh/mathpluscode/CineMA)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

</div>

# CineMA: A Foundation Model for Cine Cardiac MRI 🎥🫀

Check out our [interactive demos](https://huggingface.co/spaces/mathpluscode/CineMA) on Hugging Face to see CineMA in
action!

## 📝 Overview

**CineMA** is a vision foundation model for **Cine** cardiac magnetic resonance (CMR) imaging, built on a
**M**asked **A**utoencoder (MAE). Pre-trained on the extensive UK Biobank dataset, CineMA has been fine-tuned for
various clinically relevant tasks:

- 🫀 Ventricle and myocardium segmentation
- 📊 Ejection fraction (EF) regression
- 🏥 Cardiovascular disease (CVD) detection and classification
- 📍 Mid-valve plane and apical landmark localization

The model has demonstrated improved or comparable performance against convolutional neural network baselines (UNet,
ResNet) across multiple datasets, including [ACDC](https://www.creatis.insa-lyon.fr/Challenge/acdc/),
[M&Ms](https://www.ub.edu/mnms/), [M&Ms2](https://www.ub.edu/mnms-2/),
[Kaggle](https://www.kaggle.com/c/second-annual-data-science-bowl/data),
[Rescan](https://www.ahajournals.org/doi/full/10.1161/CIRCIMAGING.119.009214), and
[Landmark](https://pubs.rsna.org/doi/10.1148/ryai.2021200197).

## 🚀 Getting Started

### Installation

#### Option 1: Quick Install with pip

```bash
pip install git+https://github.com/mathpluscode/CineMA
```

> Note: This method does not install dependencies automatically.

#### Option 2: Full Installation with Dependencies

```bash
git clone https://github.com/mathpluscode/CineMA.git
cd CineMA
# create and activate a conda environment, e.g.
conda create -n cinema python=3.11
conda activate cinema
pip install -e .
```

> ⚠️ **Important**: Install [PyTorch](https://pytorch.org/get-started/locally/) separately following the official
> instructions.
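
A quick way to sanity-check the PyTorch installation afterwards (a generic check, not specific to CineMA):

```python
import torch

# Print the installed PyTorch version and whether a CUDA GPU is visible.
print(torch.__version__)
print(torch.cuda.is_available())
```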

### 🎯 Using Fine-tuned Models

All fine-tuned models are available on [HuggingFace](https://huggingface.co/mathpluscode/CineMA). Try them out using our
example inference scripts:

```bash
# Segmentation
python examples/inference/segmentation_sax.py
python examples/inference/segmentation_lax_4c.py

# Classification
python examples/inference/classification_cvd.py
python examples/inference/classification_sex.py
python examples/inference/classification_vendor.py

# Regression
python examples/inference/regression_ef.py
python examples/inference/regression_bmi.py
python examples/inference/regression_age.py

# Landmark Detection
python examples/inference/landmark_heatmap.py
python examples/inference/landmark_coordinate.py
```

Available tasks and models are listed below.

| Task                                            | Input View       | Input Timeframes | Inference Script                                                            |
| ----------------------------------------------- | ---------------- | ---------------- | --------------------------------------------------------------------------- |
| Ventricle and myocardium segmentation           | SAX              | 1                | [segmentation_sax.py](cinema/examples/inference/segmentation_sax.py)       |
| Ventricle and myocardium segmentation           | LAX 4C           | 1                | [segmentation_lax_4c.py](cinema/examples/inference/segmentation_lax_4c.py) |
| Landmark localization by heatmap regression     | LAX 2C or LAX 4C | 1                | [landmark_heatmap.py](cinema/examples/inference/landmark_heatmap.py)       |
| Landmark localization by coordinates regression | LAX 2C or LAX 4C | 1                | [landmark_coordinate.py](cinema/examples/inference/landmark_coordinate.py) |
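
To fetch all checkpoints ahead of time instead of letting each script download on demand, you can mirror the model
repository locally with `huggingface_hub` (a minimal sketch; the repository id is the one linked above):

```python
from huggingface_hub import snapshot_download

# Download the CineMA repository (backbone and fine-tuned weights) into the
# local Hugging Face cache and return the local path.
local_dir = snapshot_download(repo_id="mathpluscode/CineMA")
print(f"Model files are under: {local_dir}")
```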

### 🔧 Using Pre-trained Models

The pre-trained CineMA backbone is available on [HuggingFace](https://huggingface.co/mathpluscode/CineMA). Fine-tune it
using our example scripts and the [preprocessed ACDC dataset](https://huggingface.co/datasets/mathpluscode/ACDC) for the
following tasks:

| Task                                  | Fine-tuning Script                                            |
| ------------------------------------- | ------------------------------------------------------------- |
| Ventricle and myocardium segmentation | [segmentation.py](cinema/examples/train/segmentation.py)     |
| Cardiovascular disease classification | [classification.py](cinema/examples/train/classification.py) |
| Ejection fraction regression          | [regression.py](cinema/examples/train/regression.py)         |

The commands are:

```bash
# Fine-tuning Scripts
python examples/train/segmentation.py
python examples/train/classification.py
python examples/train/regression.py
```

You can also explore MAE's masked-image reconstruction and feature extraction using the following example scripts.

```bash
# MAE Examples
python examples/inference/mae.py
python examples/inference/mae_feature_extraction.py
```
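
To see what the MAE masking step does conceptually, here is a self-contained sketch of MAE-style random patch masking
(the core idea of masked autoencoders; illustrative only, not CineMA's actual implementation):

```python
import torch


def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """MAE-style masking: keep a random subset of patch tokens per sample.

    tokens: (batch, num_patches, embed_dim) patch embeddings.
    Returns visible tokens and a binary mask (1 = masked) over all patches.
    """
    batch, num_patches, embed_dim = tokens.shape
    num_keep = int(num_patches * (1 - mask_ratio))

    # Random scores decide which patches stay visible for each sample.
    noise = torch.rand(batch, num_patches)
    ids_keep = torch.argsort(noise, dim=1)[:, :num_keep]

    # The encoder only sees the visible tokens.
    visible = torch.gather(
        tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, embed_dim)
    )

    # The decoder reconstructs the masked patches (mask value 1).
    mask = torch.ones(batch, num_patches)
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask


# Example: 4 samples, 196 patches, 768-dim embeddings, 75% masked.
visible, mask = random_masking(torch.randn(4, 196, 768))
print(visible.shape)    # torch.Size([4, 49, 768])
print(mask.sum(dim=1))  # 147 masked patches per sample
```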

### 📚 Dataset Support

For fine-tuning CineMA on other datasets, preprocessing can be performed using the provided scripts, following the
documentation below. It is recommended to download the data under `~/.cache/cinema_datasets`, as the integration tests
use this path. For instance, the preprocessed M&Ms data would be `~/.cache/cinema_datasets/mnms/processed`. Otherwise,
define the path using the environment variable `CINEMA_DATA_DIR`.
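
For example, a script can resolve the expected dataset root following this convention (a hedged sketch of the path
logic described above; the package's own loaders may differ in details):

```python
import os
from pathlib import Path

# Use CINEMA_DATA_DIR if set, otherwise fall back to the default cache path.
data_dir = Path(
    os.environ.get("CINEMA_DATA_DIR", str(Path.home() / ".cache" / "cinema_datasets"))
)

# e.g. the preprocessed M&Ms data is expected under <data_dir>/mnms/processed.
mnms_dir = data_dir / "mnms" / "processed"
print(mnms_dir, mnms_dir.exists())
```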

| Dataset   | Documentation                                |
| --------- | -------------------------------------------- |
| ACDC      | [README.md](cinema/data/acdc/README.md)      |
| M&Ms      | [README.md](cinema/data/mnms/README.md)      |
| M&Ms2     | [README.md](cinema/data/mnms2/README.md)     |
| Kaggle    | [README.md](cinema/data/kaggle/README.md)    |
| Rescan    | [README.md](cinema/data/rescan/README.md)    |
| Landmark  | [README.md](cinema/data/landmark/README.md)  |
| EMIDEC    | [README.md](cinema/data/emidec/README.md)    |
| Myops2020 | [README.md](cinema/data/myops2020/README.md) |

The code for training and evaluating models on these datasets is available.

| Task                                            | Dataset  | Documentation                                                                     |
| ----------------------------------------------- | -------- | ---------------------------------------------------------------------------------- |
| Landmark localization by heatmap regression     | Landmark | [cinema/segmentation/landmark/README.md](cinema/segmentation/landmark/README.md)   |
| Landmark localization by coordinates regression | Landmark | [cinema/regression/landmark/README.md](cinema/regression/landmark/README.md)       |

### 🏗️ Training Your Own Foundation Model

Start with our simplified pretraining script:

```bash
python examples/train/pretrain.py
```

For distributed (DDP) training support, check [cinema/mae/pretrain.py](cinema/mae/pretrain.py), and see
[examples/dicom_to_nifti.py](cinema/examples/dicom_to_nifti.py) for UK Biobank data preprocessing.
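
The distributed script follows PyTorch's standard DistributedDataParallel (DDP) pattern; a minimal sketch of that
pattern (illustrative only, not CineMA's actual training code):

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A toy model stands in for the MAE; DDP synchronizes gradients across GPUs.
    model = torch.nn.Linear(16, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # ... training loop here ...

    dist.destroy_process_group()


if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> <script>.py
```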

## 📖 References

CineMA builds upon these open-source projects:

- [UK Biobank Cardiac Preprocessing](https://github.com/baiwenjia/ukbb_cardiac)
- [Masked Autoencoders](https://github.com/facebookresearch/mae)
- [PyTorch Vision](https://github.com/pytorch/vision)
- [PyTorch Image Models](https://github.com/huggingface/pytorch-image-models)

## 🤝 Contributing

We welcome contributions! Please [create an issue](https://github.com/mathpluscode/CineMA/issues/new) for questions or
suggestions.

## 📧 Contact

For collaborations, reach out to Yunguan Fu ([email protected]).

## 📄 Citation

[Citation information to be added]