Commit f19bb36

Update README.md (#114)

Fix napari-hub shield in README
Update News and Citation with eLife data
Update README.md

1 parent 6de4b86

File tree: 1 file changed (+31, -19 lines)


README.md

Lines changed: 31 additions & 19 deletions
@@ -1,5 +1,5 @@
 # CellSeg3D: self-supervised (and supervised) 3D cell segmentation, primarily for mesoSPIM data!
-[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/napari_cellseg3d)](https://www.napari-hub.org/plugins/napari_cellseg3d)
+[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/napari-cellseg3d)](https://www.napari-hub.org/plugins/napari_cellseg3d)
 [![PyPI](https://img.shields.io/pypi/v/napari-cellseg3d.svg?color=green)](https://pypi.org/project/napari-cellseg3d)
 [![Downloads](https://static.pepy.tech/badge/napari-cellseg3d)](https://pepy.tech/project/napari-cellseg3d)
 [![Downloads](https://static.pepy.tech/badge/napari-cellseg3d/month)](https://pepy.tech/project/napari-cellseg3d)
@@ -22,9 +22,7 @@
 
 ## Documentation
 
-📚 Documentation is available at [https://AdaptiveMotorControlLab.github.io/CellSeg3D
-](https://adaptivemotorcontrollab.github.io/CellSeg3D/welcome.html)
-
+📚 Documentation is available at [https://AdaptiveMotorControlLab.github.io/CellSeg3D](https://adaptivemotorcontrollab.github.io/CellSeg3D/welcome.html)
 
 📚 For additional examples and how to reproduce our paper figures, see: [https://github.com/C-Achard/cellseg3d-figures](https://github.com/C-Achard/cellseg3d-figures)
 
@@ -38,7 +36,7 @@ To use the plugin, please run:
 ```
 napari
 ```
-Then go into `Plugins > napari_cellseg3d`, and choose which tool to use.
+Then go into `Plugins > napari_cellseg3d`, and choose which tool to use.
 
 - **Review (label)**: This module allows you to review your labels, from predictions or manual labeling, and correct them if needed. It then saves the status of each file in a csv, for easier monitoring.
 - **Inference**: This module allows you to use pre-trained segmentation algorithms on volumes to automatically label cells and compute statistics.
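The Review module's per-file CSV status tracking mentioned in the hunk above can be sketched with Python's standard library. This is purely illustrative: the file names, column names, and status values are assumptions, not the plugin's actual schema.

```python
import csv
import io

# Hypothetical per-file review statuses (illustrative; not CellSeg3D's real schema).
statuses = [
    {"file": "volume_001.tif", "status": "checked"},
    {"file": "volume_002.tif", "status": "needs_correction"},
]

# Write the monitoring CSV to an in-memory buffer (a real tool would use a file on disk).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["file", "status"])
writer.writeheader()
writer.writerows(statuses)

report = buf.getvalue()
print(report)
```

Re-running such a script after each review session would let you grep the CSV for files still marked `needs_correction`.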
@@ -64,7 +62,11 @@ F1-score is computed from the Intersection over Union (IoU) with ground truth la
 
 ## News
 
-**New version: v0.2.2**
+### **CellSeg3D now published at eLife**
+
+Read the [article here !](https://elifesciences.org/articles/99848)
+
+### **New version: v0.2.2**
 
 - v0.2.2:
   - Updated the Colab Notebooks for training and inference
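The hunk header above notes that the README derives F1-score from Intersection over Union (IoU) against ground truth labels. As a generic illustration (not the plugin's actual code), for binary masks the two scores are related by F1 = 2·IoU / (1 + IoU):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

def f1_from_iou(j: float) -> float:
    """Dice/F1 score derived from IoU: F1 = 2J / (1 + J)."""
    return 2 * j / (1 + j)

# Toy 2D masks; the same identity holds voxel-wise for 3D volumes.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)

j = iou(pred, gt)    # intersection 2, union 4 -> 0.5
f1 = f1_from_iou(j)  # 2 * 0.5 / 1.5 = 0.666...
```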
@@ -96,21 +98,22 @@ Previous additions:
 - Many small improvements and many bug fixes
 
 
-
-
 ## Requirements
 
 **Compatible with Python 3.8 to 3.10.**
 Requires **[napari]**, **[PyTorch]** and **[MONAI]**.
 Compatible with Windows, MacOS and Linux.
-Installation should not take more than 30 minutes, depending on your internet connection.
+Installation of the plugin itself should not take more than 30 minutes, depending on your internet connection,
+and whether you already have Python and a package manager installed.
 
 For PyTorch, please see [the PyTorch website for installation instructions].
 
 A CUDA-capable GPU is not needed but very strongly recommended, especially for training.
 
 If you get errors from MONAI regarding missing readers, please see [MONAI's optional dependencies] page for instructions on getting the readers required by your images.
 
+Please reach out if you have any issues with the installation, we will be happy to help!
+
 ### Install note for ARM64 (Silicon) Mac users
 
 To avoid issues when installing on the ARM64 architecture, please follow these steps.
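The requirements in the hunk above translate into a short install sequence. This is a sketch assuming a conda-based setup (the environment name is arbitrary); PyTorch must still be installed per the selector on the PyTorch website for your platform:

```shell
# Create an isolated environment on a supported Python version (3.8-3.10).
conda create -n cellseg3d python=3.10 -y
conda activate cellseg3d

# Install napari with its Qt backend, then the plugin from PyPI.
pip install "napari[all]"
pip install napari-cellseg3d

# Install PyTorch separately, following pytorch.org
# (the exact command varies by OS and CUDA version).
```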
@@ -187,18 +190,27 @@ Distributed under the terms of the [MIT] license.
 ## Citation
 
 ```
-@article {Achard2024,
-author = {Achard, Cyril and Kousi, Timokleia and Frey, Markus and Vidal, Maxime and Paychere, Yves and Hofmann, Colin and Iqbal, Asim and Hausmann, Sebastien B. and Pages, Stephane and Mathis, Mackenzie W.},
-title = {CellSeg3D: self-supervised 3D cell segmentation for microscopy},
-elocation-id = {2024.05.17.594691},
-year = {2024},
-doi = {10.1101/2024.05.17.594691},
-publisher = {Cold Spring Harbor Laboratory},
-URL = {https://www.biorxiv.org/content/early/2024/05/17/2024.05.17.594691},
-eprint = {https://www.biorxiv.org/content/early/2024/05/17/2024.05.17.594691.full.pdf},
-journal = {bioRxiv}
+@article {10.7554/eLife.99848,
+article_type = {journal},
+title = {CellSeg3D, Self-supervised 3D cell segmentation for fluorescence microscopy},
+author = {Achard, Cyril and Kousi, Timokleia and Frey, Markus and Vidal, Maxime and Paychere, Yves and Hofmann, Colin and Iqbal, Asim and Hausmann, Sebastien B and Pagès, Stéphane and Mathis, Mackenzie Weygandt},
+editor = {Cardona, Albert},
+volume = 13,
+year = 2025,
+month = {jun},
+pub_date = {2025-06-24},
+pages = {RP99848},
+citation = {eLife 2025;13:RP99848},
+doi = {10.7554/eLife.99848},
+url = {https://doi.org/10.7554/eLife.99848},
+abstract = {Understanding the complex three-dimensional structure of cells is crucial across many disciplines in biology and especially in neuroscience. Here, we introduce a set of models including a 3D transformer (SwinUNetR) and a novel 3D self-supervised learning method (WNet3D) designed to address the inherent complexity of generating 3D ground truth data and quantifying nuclei in 3D volumes. We developed a Python package called CellSeg3D that provides access to these models in Jupyter Notebooks and in a napari GUI plugin. Recognizing the scarcity of high-quality 3D ground truth data, we created a fully human-annotated mesoSPIM dataset to advance evaluation and benchmarking in the field. To assess model performance, we benchmarked our approach across four diverse datasets: the newly developed mesoSPIM dataset, a 3D platynereis-ISH-Nuclei confocal dataset, a separate 3D Platynereis-Nuclei light-sheet dataset, and a challenging and densely packed Mouse-Skull-Nuclei confocal dataset. We demonstrate that our self-supervised model, WNet3D – trained without any ground truth labels – achieves performance on par with state-of-the-art supervised methods, paving the way for broader applications in label-scarce biological contexts.},
+keywords = {self-supervised learning, artificial intelligence, neuroscience, mesoSPIM, confocal microscopy, platynereis},
+journal = {eLife},
+issn = {2050-084X},
+publisher = {eLife Sciences Publications, Ltd},
 }
 ```
+
 ## Acknowledgements
 
 This plugin was developed by originally Cyril Achard, Maxime Vidal, Mackenzie Mathis.
