📚 Documentation is available at [https://AdaptiveMotorControlLab.github.io/CellSeg3D](https://adaptivemotorcontrollab.github.io/CellSeg3D/welcome.html)
📚 For additional examples and how to reproduce our paper figures, see: [https://github.com/C-Achard/cellseg3d-figures](https://github.com/C-Achard/cellseg3d-figures)
To use the plugin, please run:
```
napari
```
Then go into `Plugins > napari_cellseg3d`, and choose which tool to use.
- **Review (label)**: This module allows you to review your labels, whether from predictions or manual annotation, and correct them if needed. It then saves the status of each file in a CSV file for easier monitoring.
- **Inference**: This module allows you to run pre-trained segmentation models on volumes to automatically label cells and compute statistics.
F1-score is computed from the Intersection over Union (IoU) with ground truth labels.
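For binary masks, the F1 (Dice) score follows directly from the IoU via F1 = 2·IoU / (1 + IoU). A minimal sketch of that relationship (the function names are illustrative, not part of the plugin API):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Two empty masks are considered a perfect match.
    return intersection / union if union else 1.0

def f1_from_iou(j: float) -> float:
    """F1 (Dice) score derived from an IoU value: F1 = 2*IoU / (1 + IoU)."""
    return 2 * j / (1 + j)
```

For example, a prediction overlapping the ground truth in one of two labeled voxels gives IoU = 0.5 and F1 ≈ 0.67.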
## News
### **CellSeg3D is now published in eLife**

Read the [article here!](https://elifesciences.org/articles/99848)

### **New version: v0.2.2**
- v0.2.2:
  - Updated the Colab Notebooks for training and inference
Previous additions:
- Many small improvements and many bug fixes
## Requirements
**Compatible with Python 3.8 to 3.10.**
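A quick pre-install sanity check for the interpreter version can avoid confusing dependency errors later; a minimal sketch (the helper is illustrative, not shipped with the plugin):

```python
import sys

# Supported range per this README: Python 3.8 through 3.10 inclusive.
SUPPORTED_RANGE = ((3, 8), (3, 10))

def python_supported(major: int, minor: int) -> bool:
    """Return True if the given Python version falls in the supported range."""
    return SUPPORTED_RANGE[0] <= (major, minor) <= SUPPORTED_RANGE[1]

if __name__ == "__main__":
    major, minor = sys.version_info[:2]
    print(f"Python {major}.{minor} supported: {python_supported(major, minor)}")
```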
Requires **[napari]**, **[PyTorch]** and **[MONAI]**.
Compatible with Windows, MacOS and Linux.
Installation of the plugin itself should not take more than 30 minutes, depending on your internet connection and whether you already have Python and a package manager installed.
For PyTorch, please see [the PyTorch website for installation instructions].
A CUDA-capable GPU is not needed but very strongly recommended, especially for training.
If you get errors from MONAI regarding missing readers, please see [MONAI's optional dependencies] page for instructions on getting the readers required by your images.
Please reach out if you have any issues with the installation; we will be happy to help!
### Install note for ARM64 (Silicon) Mac users
To avoid issues when installing on the ARM64 architecture, please follow these steps.
Distributed under the terms of the [MIT] license.
## Citation
```
@article{Achard2024,
  title = {CellSeg3D, Self-supervised 3D cell segmentation for fluorescence microscopy},
  author = {Achard, Cyril and Kousi, Timokleia and Frey, Markus and Vidal, Maxime and Paychere, Yves and Hofmann, Colin and Iqbal, Asim and Hausmann, Sebastien B and Pagès, Stéphane and Mathis, Mackenzie Weygandt},
  editor = {Cardona, Albert},
  volume = 13,
  year = 2025,
  month = {jun},
  pub_date = {2025-06-24},
  pages = {RP99848},
  citation = {eLife 2025;13:RP99848},
  doi = {10.7554/eLife.99848},
  url = {https://doi.org/10.7554/eLife.99848},
  abstract = {Understanding the complex three-dimensional structure of cells is crucial across many disciplines in biology and especially in neuroscience. Here, we introduce a set of models including a 3D transformer (SwinUNetR) and a novel 3D self-supervised learning method (WNet3D) designed to address the inherent complexity of generating 3D ground truth data and quantifying nuclei in 3D volumes. We developed a Python package called CellSeg3D that provides access to these models in Jupyter Notebooks and in a napari GUI plugin. Recognizing the scarcity of high-quality 3D ground truth data, we created a fully human-annotated mesoSPIM dataset to advance evaluation and benchmarking in the field. To assess model performance, we benchmarked our approach across four diverse datasets: the newly developed mesoSPIM dataset, a 3D platynereis-ISH-Nuclei confocal dataset, a separate 3D Platynereis-Nuclei light-sheet dataset, and a challenging and densely packed Mouse-Skull-Nuclei confocal dataset. We demonstrate that our self-supervised model, WNet3D – trained without any ground truth labels – achieves performance on par with state-of-the-art supervised methods, paving the way for broader applications in label-scarce biological contexts.}
}
```