README.md (23 additions & 10 deletions)
@@ -64,6 +64,10 @@ F1-score is computed from the Intersection over Union (IoU) with ground truth labels
 ## News

+**CellSeg3D is now published in eLife!**
+
+Read the [article here!](https://elifesciences.org/articles/99848)
+
 **New version: v0.2.2**

 - v0.2.2:
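The context line in the hunk above notes that the F1-score is computed from the Intersection over Union (IoU) with ground truth labels. As an illustrative aside (this is not code from the CellSeg3D repository, and the function and array names are hypothetical), a minimal NumPy sketch of that relationship for binary masks could look like this, using the identity F1 = 2 * IoU / (1 + IoU):

```python
import numpy as np

def iou_and_f1(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Compute IoU and the F1 (Dice) score for two binary masks.

    For binary masks, F1 relates to IoU as F1 = 2 * IoU / (1 + IoU).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = intersection / union if union else 1.0  # both masks empty: treat as perfect match
    f1 = 2 * iou / (1 + iou)
    return float(iou), float(f1)

# Illustrative usage on random volumes (not CellSeg3D data)
pred = np.random.rand(32, 32, 32) > 0.5
truth = np.random.rand(32, 32, 32) > 0.5
print(iou_and_f1(pred, truth))
```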
@@ -187,18 +191,27 @@ Distributed under the terms of the [MIT] license.
 ## Citation

 ```
-@article {Achard2024,
-author = {Achard, Cyril and Kousi, Timokleia and Frey, Markus and Vidal, Maxime and Paychere, Yves and Hofmann, Colin and Iqbal, Asim and Hausmann, Sebastien B. and Pages, Stephane and Mathis, Mackenzie W.},
-title = {CellSeg3D: self-supervised 3D cell segmentation for microscopy},
+title = {CellSeg3D, Self-supervised 3D cell segmentation for fluorescence microscopy},
+author = {Achard, Cyril and Kousi, Timokleia and Frey, Markus and Vidal, Maxime and Paychere, Yves and Hofmann, Colin and Iqbal, Asim and Hausmann, Sebastien B and Pagès, Stéphane and Mathis, Mackenzie Weygandt},
+editor = {Cardona, Albert},
+volume = 13,
+year = 2025,
+month = {jun},
+pub_date = {2025-06-24},
+pages = {RP99848},
+citation = {eLife 2025;13:RP99848},
+doi = {10.7554/eLife.99848},
+url = {https://doi.org/10.7554/eLife.99848},
+abstract = {Understanding the complex three-dimensional structure of cells is crucial across many disciplines in biology and especially in neuroscience. Here, we introduce a set of models including a 3D transformer (SwinUNetR) and a novel 3D self-supervised learning method (WNet3D) designed to address the inherent complexity of generating 3D ground truth data and quantifying nuclei in 3D volumes. We developed a Python package called CellSeg3D that provides access to these models in Jupyter Notebooks and in a napari GUI plugin. Recognizing the scarcity of high-quality 3D ground truth data, we created a fully human-annotated mesoSPIM dataset to advance evaluation and benchmarking in the field. To assess model performance, we benchmarked our approach across four diverse datasets: the newly developed mesoSPIM dataset, a 3D platynereis-ISH-Nuclei confocal dataset, a separate 3D Platynereis-Nuclei light-sheet dataset, and a challenging and densely packed Mouse-Skull-Nuclei confocal dataset. We demonstrate that our self-supervised model, WNet3D – trained without any ground truth labels – achieves performance on par with state-of-the-art supervised methods, paving the way for broader applications in label-scarce biological contexts.},