
Commit ed5e48d

Update README.md (#84)
1 parent 367ee5d


README.md

Lines changed: 41 additions & 32 deletions
# CellSeg3D: self-supervised (and supervised) 3D cell segmentation
[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/napari-cellseg3d)](https://www.napari-hub.org/plugins/napari-cellseg3d)
[![PyPI](https://img.shields.io/pypi/v/napari-cellseg3d.svg?color=green)](https://pypi.org/project/napari-cellseg3d)
[![Downloads](https://static.pepy.tech/badge/napari-cellseg3d)](https://pepy.tech/project/napari-cellseg3d)
[![Downloads](https://static.pepy.tech/badge/napari-cellseg3d/month)](https://pepy.tech/project/napari-cellseg3d)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/AdaptiveMotorControlLab/CellSeg3D/raw/main/LICENSE)
[![codecov](https://codecov.io/gh/AdaptiveMotorControlLab/CellSeg3D/branch/main/graph/badge.svg?token=hzUcn3XN8F)](https://codecov.io/gh/AdaptiveMotorControlLab/CellSeg3D)
<a href="https://github.com/psf/black"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>

<img src="https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/838605d0-9723-4e43-83cd-6dbfe4adf36b/cellseg-logo.png?format=1500w" title="cellseg3d" alt="cellseg3d logo" width="350" align="right" vspace = "80"/>

**A package for 3D cell segmentation with deep learning, including a napari plugin**: training, inference, and data review. In particular, this project was developed for analysis of mesoSPIM-acquired (cleared tissue + lightsheet) brain tissue datasets, but is not limited to this type of data. [Check out our preprint for more information!](https://www.biorxiv.org/content/10.1101/2024.05.17.594691v1)

![demo](https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/0d16a71b-3ff2-477a-9d83-18d96cb1ce28/full_demo.gif?format=500w)

## Installation

💻 See the [Installation page](https://adaptivemotorcontrollab.github.io/CellSeg3d/welcome.html) in the documentation for detailed instructions.

## Documentation

📚 Documentation is available at [https://AdaptiveMotorControlLab.github.io/CellSeg3D](https://adaptivemotorcontrollab.github.io/CellSeg3D/welcome.html)
You can also generate the docs locally by running ``make html`` in the docs/ folder.

## Quick Start

Then go into Plugins > napari-cellseg3d, and choose which tool to use.
- **Train**: This module allows you to train segmentation algorithms from labeled volumes.
- **Utilities**: This module allows you to perform several actions, such as cropping your volumes and labels dynamically by selecting a fixed-size volume and moving it around the image; fragmenting images into smaller cubes for training; or converting labels between instance and semantic segmentation (a rough sketch of this conversion follows below).

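As a rough illustration of what converting between instance and semantic labels means, here is a minimal sketch using plain NumPy and scikit-image (not the plugin's own code; the Utilities module does this through the GUI):

```python
import numpy as np
from skimage.measure import label  # pip install scikit-image

# A tiny semantic (binary) 3D mask containing two separate "cells"
semantic = np.zeros((16, 64, 64), dtype=np.uint8)
semantic[4:8, 10:20, 10:20] = 1
semantic[4:8, 30:40, 30:40] = 1

# Semantic -> instance: each connected component gets a unique integer ID
instance = label(semantic)
print(instance.max())  # 2 distinct cells

# Instance -> semantic: collapse all IDs back into a binary foreground mask
semantic_again = (instance > 0).astype(np.uint8)
```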
## Why use CellSeg3D?
The strength of our approach is that we can match supervised model performance with purely self-supervised learning, meaning users don't need to spend hundreds of hours on annotation. Here is a quick look at our key results. TL;DR: see panel **f**, which shows that with minimal input data we can outperform supervised models:
<p align="center">
<img src="https://www.biorxiv.org/content/biorxiv/early/2024/05/17/2024.05.17.594691/F1.large.jpg?format=200w" alt="Figure1" width="600"/>
</p>
#### Performance of 3D Semantic and Instance Segmentation Models.
**a:** Raw mesoSPIM whole-brain sample, volumes and corresponding ground truth labels from somatosensory (S1) and visual (V1) cortical regions.
**b:** Evaluation of instance segmentation performance for several supervised models over three data subsets. F1-score is computed from the Intersection over Union (IoU) with ground truth labels, then averaged. Error bars represent 50% Confidence Intervals (CIs).
**c:** View of 3D instance labels from the supervised models, as noted, for the visual cortex volume evaluated in b.
**d:** Illustration of our WNet3D architecture, showcasing the dual 3D U-Net structure with modifications (see Methods).
**e:** Example 3D instance labels from WNet3D; top row is S1, bottom is V1, with artifacts removed.
**f:** Semantic segmentation performance: comparison of model efficiency, indicating the volume of training data required to achieve a given performance level. Each supervised model was trained with an increasing percentage of the training data (10, 20, 60 or 80%, left to right within each model grouping); the Dice score was computed on unseen test data, over three data subsets for each training/evaluation split. Our self-supervised model (WNet3D) is also trained on a subset of the training images, but always without human labels. Far right: performance of the pretrained WNet3D available in the plugin, with and without artifact removal. See Methods for details. The central box represents the interquartile range (IQR), with the median as a horizontal line and the box limits marking the upper and lower quartiles; whiskers extend to data points within 1.5 IQR of the quartiles.
**g:** Instance segmentation performance of Swin-UNetR and WNet3D (pretrained, see Methods), evaluated on unseen data across three data subsets, compared with a Swin-UNetR model trained using labels from the self-supervised WNet3D. Here, WNet3D was trained on separate data, producing semantic labels that were then used to train a supervised Swin-UNetR model, still on held-out data. This model was evaluated like the other models, on three held-out images from our dataset, unseen during training. Error bars indicate 50% CIs.
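
For reference, the Dice score used in **f** is the standard overlap metric between a predicted and a ground-truth binary mask; a minimal NumPy sketch (not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0
```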
## News

**New version: v0.2.0**

## Requirements

**Compatible with Python 3.8 to 3.10.**
Requires **[napari]**, **[PyTorch]** and **[MONAI]**.
Compatible with Windows, macOS and Linux.
Installation should not take more than 30 minutes, depending on your internet connection.
A CUDA-capable GPU is not needed but very strongly recommended, especially for training.
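To check that the core dependencies are in place, a hedged sketch (not from the docs):

```python
# Verify napari, PyTorch and MONAI are importable, and whether CUDA is usable
import napari
import torch
import monai

print("napari", napari.__version__)
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("monai", monai.__version__)
```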

If you get errors from MONAI regarding missing readers, please see [MONAI's optional dependencies] page for instructions on getting the readers required by your images.
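For example, a missing-reader error can usually be resolved by installing the reader's backing library (e.g. ``pip install itk``, or ``pip install "monai[itk]"``) and, if needed, selecting the reader explicitly; a hedged sketch, assuming ``itk`` is installed and using a hypothetical file path:

```python
from monai.transforms import LoadImage

# Ask MONAI to use ITKReader explicitly; requires the `itk` package
load = LoadImage(image_only=True, reader="ITKReader")
volume = load("path/to/volume.tif")  # hypothetical path
print(volume.shape)
```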

### Install note for ARM64 (Silicon) Mac users

To avoid issues when installing on the ARM64 architecture, please follow these steps.

1) Create a new conda env using the provided conda/napari_CellSeg3D_ARM64.yml file:

        git clone https://github.com/AdaptiveMotorControlLab/CellSeg3d.git
        cd CellSeg3d
        conda env create -f conda/CellSeg3D_ARM64.yml
        conda activate napari_CellSeg3D_ARM64

2) Install a Qt backend (PySide or PyQt5).
3) Launch napari; the plugin should be available in the Plugins menu.

## Issues
