* Start preparing release for new models
* Add dataset and bioimageio docs and bump model version
* Bump diplomatic-bug and noisy-ox checksums
* Bump idealistic-rat and humorous-crab checksums
* Bump faithful-chicken and greedy-whale checksums
* Bump diplomatic-bug model version
* Bump more models
* Bump all models and add model download in info cli
* Update model download function
* Revert vit_t models to fix CI
* Update doc/bioimageio/em_organelles_v4.md
---------
Co-authored-by: Constantin Pape <[email protected]>
This is a [Segment Anything](https://segment-anything.com/) model that was specialized for segmenting mitochondria and nuclei in electron microscopy with [micro_sam](https://github.com/computational-cell-analytics/micro-sam).
This model uses a %s vision transformer as image encoder.

Segment Anything is a model for interactive and automatic instance segmentation.
We improve it for electron microscopy by finetuning on a large and diverse microscopy dataset.
It should perform well for segmenting mitochondria and nuclei in electron microscopy. It can also work well for other organelles, but was not explicitly trained for this purpose. You may get better results for other organelles (e.g. ER or Golgi) with the default Segment Anything models.
See [the dataset overview](https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/datasets/em_organelles_v%i.md) for further information on the training data and the [micro_sam documentation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html) for details on how to use the model for interactive and automatic segmentation.

NOTE: The model's automatic instance segmentation quality has improved as the latest version updates the segmentation decoder architecture by replacing transposed convolutions with upsampling.
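To make this concrete, below is a minimal sketch of automatic instance segmentation with the micro_sam Python API. The function names (`get_predictor_and_segmenter`, `automatic_instance_segmentation`) follow recent micro_sam releases and the image file name is a placeholder; the linked documentation is the authoritative reference.

```python
# A minimal sketch of automatic instance segmentation with micro_sam
# (assumes a recent micro_sam version; check the micro_sam documentation
# for the exact function signatures).
import imageio.v3 as imageio

from micro_sam.automatic_segmentation import (
    automatic_instance_segmentation, get_predictor_and_segmenter
)

# "vit_b_em_organelles" selects the EM organelle model with a ViT-B encoder.
predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_em_organelles")

# "my_em_image.tif" is a placeholder for your own 2D EM image.
image = imageio.imread("my_em_image.tif")
instances = automatic_instance_segmentation(
    predictor=predictor, segmenter=segmenter, input_path=image
)
# "instances" is a label image: each mitochondrion / nucleus gets a unique id.
```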
## Validation

The easiest way to validate the model is to visually check the segmentation quality for your data.
If you have annotations you can use for validation, you can also run a quantitative validation; see [here for details](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#9-how-can-i-evaluate-a-model-i-have-finetuned).
Please note that the required quality for segmentation always depends on the analysis task you want to solve.
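For the quantitative route, a common metric for instance segmentation is the mean segmentation accuracy. Below is a hedged sketch that assumes you have ground-truth annotations and uses `mean_segmentation_accuracy` from the [elf](https://github.com/constantinpape/elf) library; the file names are placeholders, and the micro_sam guide linked above describes the recommended evaluation workflow.

```python
# A sketch of quantitative validation against ground-truth annotations,
# using the elf library (https://github.com/constantinpape/elf).
import imageio.v3 as imageio
from elf.evaluation import mean_segmentation_accuracy

# Placeholder file names: your predicted instance segmentation and the annotations.
segmentation = imageio.imread("segmentation.tif")
annotations = imageio.imread("annotations.tif")

# mSA averages the segmentation accuracy over IoU thresholds 0.5, 0.55, ..., 0.95.
msa, scores = mean_segmentation_accuracy(segmentation, annotations, return_accuracies=True)
print("Mean segmentation accuracy:", msa)
print("Accuracy at IoU threshold 0.5:", scores[0])
```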
This is a [Segment Anything](https://segment-anything.com/) model that was specialized for light microscopy with [micro_sam](https://github.com/computational-cell-analytics/micro-sam).
This model uses a %s vision transformer as image encoder.

Segment Anything is a model for interactive and automatic instance segmentation.
We improve it for light microscopy by finetuning on a large and diverse microscopy dataset.
It should perform well for cell and nucleus segmentation in fluorescent, label-free and other light microscopy datasets.
See [the dataset overview](https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/datasets/lm_v%i.md) for further information on the training data and the [micro_sam documentation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html) for details on how to use the model for interactive and automatic segmentation.

NOTE: The model's automatic instance segmentation quality has improved as the latest version updates the segmentation decoder architecture by replacing transposed convolutions with upsampling.
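For interactive segmentation, a minimal sketch of opening an image in the napari-based micro_sam annotator could look as follows; `annotator_2d` and the `vit_b_lm` model name follow recent micro_sam releases, and the file name is a placeholder.

```python
# A minimal sketch for interactive segmentation in napari, assuming a recent
# micro_sam version; see the micro_sam documentation for authoritative usage.
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

# "my_lm_image.tif" is a placeholder for your own 2D light microscopy image.
image = imageio.imread("my_lm_image.tif")

# "vit_b_lm" selects the light microscopy model with a ViT-B encoder.
# This opens the napari annotation tool for point and box prompts.
annotator_2d(image, model_type="vit_b_lm")
```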
## Validation

The easiest way to validate the model is to visually check the segmentation quality for your data.
If you have annotations you can use for validation, you can also run a quantitative validation; see [here for details](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#9-how-can-i-evaluate-a-model-i-have-finetuned).
Please note that the required quality for segmentation always depends on the analysis task you want to solve.
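The NOTE above mentions replacing transposed convolutions with upsampling in the segmentation decoder. As a generic illustration of this design choice (not the actual micro_sam decoder code), the PyTorch snippet below contrasts the two upscaling blocks; interpolation followed by a regular convolution avoids the checkerboard artifacts that transposed convolutions can introduce.

```python
import torch
import torch.nn as nn

in_channels, out_channels = 64, 32

# Upscaling with a transposed convolution (the older design).
transposed = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2)

# Upscaling via bilinear interpolation followed by a regular convolution
# (the newer design): the interpolation is artifact-free and the
# convolution refines the upsampled features.
upsample = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
)

x = torch.randn(1, in_channels, 64, 64)
print(transposed(x).shape)  # torch.Size([1, 32, 128, 128])
print(upsample(x).shape)    # torch.Size([1, 32, 128, 128])
```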
The `EM Organelle v4` model was trained on three different electron microscopy datasets with segmentation annotations for mitochondria and nuclei:

1. [MitoEM](https://mitoem.grand-challenge.org/): containing segmentation annotations for mitochondria in volume EM of human and rat cortex.
2. [MitoLab](https://www.ebi.ac.uk/empiar/EMPIAR-11037/): containing segmentation annotations for mitochondria in different EM modalities.
3. [Platynereis (Nuclei)](https://zenodo.org/records/3675220): containing segmentation annotations for nuclei in a blockface EM volume of *P. dumerilii*.
The `LM Generalist v4` model was trained on 14 different light microscopy datasets with segmentation annotations for cells and nuclei:

1. [LIVECell](https://sartorius-research.github.io/LIVECell/): containing cell segmentation annotations for phase-contrast microscopy.
2. [DeepBacs](https://github.com/HenriquesLab/DeepBacs): containing segmentation annotations for bacterial cells in different label-free microscopy modalities.
3. [TissueNet](https://datasets.deepcell.org/): containing cell segmentation annotations in tissues imaged with fluorescence light microscopy.
4. [PlantSeg (Root)](https://osf.io/2rszy/): containing cell segmentation annotations in plant roots imaged with fluorescence lightsheet microscopy.
5. [NeurIPS CellSeg](https://neurips22-cellseg.grand-challenge.org/): containing cell segmentation annotations in phase-contrast, brightfield, DIC and fluorescence microscopy.
6. [CTC (Cell Tracking Challenge)](https://celltrackingchallenge.net/2d-datasets/): containing cell segmentation annotations in different label-free and fluorescence microscopy settings. We make use of the following CTC datasets: `BF-C2DL-HSC`, `BF-C2DL-MuSC`, `DIC-C2DH-HeLa`, `Fluo-C2DL-Huh7`, `Fluo-C2DL-MSC`, `Fluo-N2DH-SIM+`, `PhC-C2DH-U373`, `PhC-C2DL-PSC`.
7. [DSB Nucleus Segmentation](https://www.kaggle.com/c/data-science-bowl-2018): containing nucleus segmentation annotations in fluorescence microscopy. We make use of [this subset](https://github.com/stardist/stardist/releases/download/0.1.0/dsb2018.zip) of the data.
8. [EmbedSeg](https://github.com/juglab/EmbedSeg): containing cell and nuclei annotations in fluorescence microscopy.
9. [YeaZ](https://www.epfl.ch/labs/lpbs/data-and-software): containing segmentation annotations for yeast cells in phase-contrast and brightfield microscopy.
10. [CVZ Fluo](https://www.synapse.org/Synapse:syn27624812/): containing cell and nuclei annotations in fluorescence microscopy.
11. [DynamicNuclearNet](https://datasets.deepcell.org/): containing nuclei annotations in fluorescence microscopy.
12. [CellPose](https://www.cellpose.org/): containing cell annotations in fluorescence microscopy.
13. [OmniPose](https://osf.io/xmury/): containing segmentation annotations for bacterial cells in phase-contrast and fluorescence microscopy, and worms in brightfield microscopy.
14. [OrgaSegment](https://zenodo.org/records/10278229): containing segmentation annotations for organoids in brightfield microscopy.