MoriLabNU/FMI-ViT


📄 Paper

This repository contains the official implementation of our paper: Domain-Specific Pretraining and Fine-Tuning with Contrastive Learning for Fluorescence Microscopic Image Segmentation

Results

🚀 Highlights

  • Domain-specific pretraining: Vision Transformer pretrained on fluorescence microscopy images.
  • Cross-image foreground-background contrastive learning: Improves semantic boundary recognition and cross-dataset generalization.
  • State-of-the-art performance: Significant IoU and Dice gains over baselines, including on unseen biomarkers.

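To make the second highlight concrete, here is a minimal sketch of what a cross-image foreground-background contrastive objective can look like: foreground embeddings from two different images are pulled together while background embeddings serve as negatives, in an InfoNCE-style loss. This is an illustrative sketch only; the function name and the exact formulation (sampling, temperature, symmetrization) are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def fg_bg_contrastive_loss(fg_a, fg_b, bg, temperature=0.1):
    """Illustrative InfoNCE-style loss: pull foreground embeddings from two
    images together, push them away from background embeddings.

    fg_a, fg_b: (N, D) foreground embeddings from two different images
    bg:         (M, D) background embeddings used as negatives
    """
    fg_a = F.normalize(fg_a, dim=1)
    fg_b = F.normalize(fg_b, dim=1)
    bg = F.normalize(bg, dim=1)
    # Positive similarity: matching foreground pairs across the two images.
    pos = (fg_a * fg_b).sum(dim=1, keepdim=True) / temperature  # (N, 1)
    # Negative similarities: each foreground vs. every background embedding.
    neg = fg_a @ bg.t() / temperature                            # (N, M)
    logits = torch.cat([pos, neg], dim=1)                        # (N, 1 + M)
    # The positive logit sits at index 0 of every row.
    labels = torch.zeros(fg_a.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```

Separating foreground from background this way is what encourages sharper semantic boundaries across datasets, as claimed in the highlight above.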
📂 Repository Structure

FMI-ViT/
├── pretrain/           # Code for self-supervised pretraining
├── fine-tuning/        # Code for fine-tuning the model
└── README.md           # Project description and usage instructions

📊 Dataset Preparation

Prepare fluorescence microscopy datasets as described in the paper.

  • FMI-ViT Pretrain Data and VO Data: Public access authorization is in progress.
  • Cell Tracking Challenge: Download Link

💻 Training

1. Pretraining

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 main_dino.py \
  --arch vit_small \
  --batch_size_per_gpu 400 \
  --data_path /path/to/dataset \
  --output_dir /path/to/save_model_dir

2. Fine-tuning

bash tools/train4.sh configs/our/small_upernet_our1.py \
  --work-dir /path/to/save_dir

3. Evaluation

bash tools/test.sh configs/our/small_upernet_test.py \
  /path/to/checkpoint.pth \
  --show-dir /path/to/output_visualization \
  --work-dir /path/to/output_results \
  --out /path/to/output_predictions
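The paper reports IoU and Dice. If you want to sanity-check predictions independently of the evaluation script, the two metrics can be computed directly from binary masks; the helper below is a generic sketch, not part of this repository's code.

```python
import numpy as np

def iou_and_dice(pred, gt):
    """Compute IoU and Dice for a pair of binary masks (arrays of 0/1)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Both metrics are defined as 1.0 when prediction and ground truth are empty.
    iou = inter / union if union > 0 else 1.0
    dice = 2 * inter / total if total > 0 else 1.0
    return iou, dice
```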

📥 Pretrained Weights

You can download either the pretrained teacher backbone weights alone, for use in downstream tasks, or the full checkpoint, which contains the backbone and projection head weights for both the teacher and student networks. We also provide the pretrained teacher backbone weights in MMSegmentation format. Pretrained weights and fine-tuned models can be downloaded here:

arch      params   download
ViT-S/16  21M      full ckpt | teacher backbone only | teacher backbone only (mmseg)
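Since pretraining uses main_dino.py, the full checkpoint presumably follows the DINO format, where the teacher state dict lives under a "teacher" key and its backbone parameters carry a "backbone." prefix. Under that assumption, extracting a backbone-only state dict yourself can be sketched as follows (function name and key layout are assumptions):

```python
import torch

def teacher_backbone_state_dict(full_ckpt):
    """Extract a backbone-only state dict from a DINO-style full checkpoint.

    Assumes the teacher is stored under the 'teacher' key and its backbone
    parameters are prefixed with 'backbone.' (as in DINO's main_dino.py).
    """
    teacher = full_ckpt["teacher"]
    prefix = "backbone."
    return {
        k[len(prefix):]: v
        for k, v in teacher.items()
        if k.startswith(prefix)
    }
```

Usage: load the full checkpoint with `torch.load(path, map_location="cpu")`, pass it through this helper, and save the result with `torch.save` for downstream fine-tuning.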

More pretrained weights for additional architectures will be released over time.

📜 Citation

If you use this repository or our pretrained weights, please cite:

@inproceedings{yourbibkey2025,
  title={Domain-Specific Pretraining and Fine-Tuning with Contrastive Learning for Fluorescence Microscopic Image Segmentation},
  author={Wu, Yunheng and others},
  booktitle={Proceedings of ...},
  year={2025}
}

πŸ™ Acknowledgements

This repository is built upon the excellent works of DINO and MMSegmentation.

We sincerely thank the authors for releasing their code and making this research possible.
