# Using `micro_sam` on BAND
BAND is a service offered by EMBL Heidelberg under the "The German Network for Bioinformatics Infrastructure" (de.NBI) that gives access to a virtual desktop for image analysis tasks. It is free to use and `micro_sam` is installed there.
In order to use BAND and start `micro_sam` on it, follow these steps:
## Start BAND
- Go to https://bandv1.denbi.uni-tuebingen.de/ and click **Login**. If you have not used BAND before you will need to register for BAND. Currently you can only sign up via a Google account. NOTE: It takes a couple of seconds for the "Launch Desktops" window to appear.
- Launch a BAND desktop with sufficient resources. It's particularly important to select a GPU. The settings from the image below are a good choice.
- Go to the desktop by clicking **GO TO DESKTOP** in the **Running Desktops** menu. See also the screenshot below.
- This will open the napari GUI, where you can select the images and annotator tools you want to use (see screenshot). NOTE: this may take a few minutes.
- For testing if the tool works, it's best to use the **Annotator 2d** first.
- You can find an example image to use by selecting `File` -> `Open Sample` -> `Segment Anything for Microscopy` -> `HeLa 2d example data` (see screenshot).
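The annotation session described above can also be started from Python, which is convenient for scripting on BAND. This is a minimal sketch, assuming `micro_sam` is installed (as it is on BAND); the random stand-in image is a placeholder for your own data, and `vit_b` is the default Segment Anything model:

```python
import numpy as np
from micro_sam.sam_annotator import annotator_2d

# In practice load your own image, e.g. with imageio;
# here a random stand-in image so the snippet is self-contained.
image = np.random.randint(0, 255, (512, 512), dtype=np.uint8)

# Opens the napari GUI with the 2d annotator, using the default SAM model.
annotator_2d(image, model_type="vit_b")
```

Closing the napari window ends the session; embeddings are recomputed each time unless you pass an embedding save path.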
This is a [Segment Anything](https://segment-anything.com/) model that was specialized for segmenting mitochondria and nuclei in electron microscopy with [micro_sam](https://github.com/computational-cell-analytics/micro-sam).
This model uses a %s vision transformer as image encoder.

Segment Anything is a model for interactive and automatic instance segmentation.
We improve it for electron microscopy by finetuning on a large and diverse microscopy dataset.
It should perform well for segmenting mitochondria and nuclei in electron microscopy. It can also work well for other organelles, but was not explicitly trained for this purpose. You may get better results for other organelles (e.g. ER or Golgi) with the default Segment Anything models.

See [the dataset overview](https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/datasets/em_organelles_v%i.md) for further information on the training data and the [micro_sam documentation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html) for details on how to use the model for interactive and automatic segmentation.
NOTE: The model's automatic instance segmentation quality has improved as the latest version updates the segmentation decoder architecture by replacing transposed convolutions with upsampling.
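To illustrate the architectural change mentioned in the note (a toy NumPy sketch, not the actual decoder code): a stride-2 transposed convolution can introduce checkerboard artifacts because output pixels receive unequal numbers of contributions, whereas upsampling followed by a regular convolution treats all output pixels uniformly. A 2x nearest-neighbor upsampling step looks like:

```python
import numpy as np

def upsample_nearest_2x(x: np.ndarray) -> np.ndarray:
    """Double the spatial resolution of a 2d feature map by pixel repetition."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

features = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
up = upsample_nearest_2x(features)
# up has shape (4, 4); each input pixel becomes a uniform 2x2 block,
# which a following convolution then smooths.
```

In the decoder this step would be followed by a learned convolution, so the upsampling itself contributes no trainable parameters.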
## Validation

The easiest way to validate the model is to visually check the segmentation quality for your data.
If you have annotations you can use for validation you can also run a quantitative validation, see [here for details](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#9-how-can-i-evaluate-a-model-i-have-finetuned).
Please note that the required quality for segmentation always depends on the analysis task you want to solve.
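To give a sense of what quantitative validation measures (a minimal illustration with hypothetical toy masks, not the evaluation code from micro_sam): a common ingredient of segmentation metrics is the intersection over union (IoU) between a predicted and a ground-truth mask:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, gt).sum() / union)

# Toy 4x4 masks: each covers 4 pixels, 2 of which overlap.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 2:4] = True

score = iou(pred, gt)  # intersection 2, union 6 -> 1/3
```

Instance-level metrics such as mean segmentation accuracy build on this by matching predicted to ground-truth instances above an IoU threshold.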
This is a [Segment Anything](https://segment-anything.com/) model that was specialized for light microscopy with [micro_sam](https://github.com/computational-cell-analytics/micro-sam).
This model uses a %s vision transformer as image encoder.

Segment Anything is a model for interactive and automatic instance segmentation.
We improve it for light microscopy by finetuning on a large and diverse microscopy dataset.
It should perform well for cell and nucleus segmentation in fluorescent, label-free and other light microscopy datasets.

See [the dataset overview](https://github.com/computational-cell-analytics/micro-sam/blob/master/doc/datasets/lm_v%i.md) for further information on the training data and the [micro_sam documentation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html) for details on how to use the model for interactive and automatic segmentation.

NOTE: The model's automatic instance segmentation quality has improved as the latest version updates the segmentation decoder architecture by replacing transposed convolutions with upsampling.
The `EM Organelle v4` model was trained on three different electron microscopy datasets with segmentation annotations for mitochondria and nuclei:

1. [MitoEM](https://mitoem.grand-challenge.org/): containing segmentation annotations for mitochondria in volume EM of human and rat cortex.
2. [MitoLab](https://www.ebi.ac.uk/empiar/EMPIAR-11037/): containing segmentation annotations for mitochondria in different EM modalities.
3. [Platynereis (Nuclei)](https://zenodo.org/records/3675220): containing segmentation annotations for nuclei in a blockface EM volume of *P. dumerilii*.
The `LM Generalist v4` model was trained on 14 different light microscopy datasets with segmentation annotations for cells and nuclei:

1. [LIVECell](https://sartorius-research.github.io/LIVECell/): containing cell segmentation annotations for phase-contrast microscopy.
2. [DeepBacs](https://github.com/HenriquesLab/DeepBacs): containing segmentation annotations for bacterial cells in different label-free microscopy modalities.
3. [TissueNet](https://datasets.deepcell.org/): containing cell segmentation annotations in tissues imaged with fluorescence light microscopy.
4. [PlantSeg (Root)](https://osf.io/2rszy/): containing cell segmentation annotations in plant roots imaged with fluorescence lightsheet microscopy.
5. [NeurIPS CellSeg](https://neurips22-cellseg.grand-challenge.org/): containing cell segmentation annotations in phase-contrast, brightfield, DIC and fluorescence microscopy.
6. [CTC (Cell Tracking Challenge)](https://celltrackingchallenge.net/2d-datasets/): containing cell segmentation annotations in different label-free and fluorescence microscopy settings. We make use of the following CTC datasets: `BF-C2DL-HSC`, `BF-C2DL-MuSC`, `DIC-C2DH-HeLa`, `Fluo-C2DL-Huh7`, `Fluo-C2DL-MSC`, `Fluo-N2DH-SIM+`, `PhC-C2DH-U373`, `PhC-C2DL-PSC`.
7. [DSB Nucleus Segmentation](https://www.kaggle.com/c/data-science-bowl-2018): containing nucleus segmentation annotations in fluorescence microscopy. We make use of [this subset](https://github.com/stardist/stardist/releases/download/0.1.0/dsb2018.zip) of the data.
8. [EmbedSeg](https://github.com/juglab/EmbedSeg): containing cell and nuclei annotations in fluorescence microscopy.
9. [YeaZ](https://www.epfl.ch/labs/lpbs/data-and-software): containing segmentation annotations for yeast cells in phase contrast and brightfield microscopy.
10. [CVZ Fluo](https://www.synapse.org/Synapse:syn27624812/): containing cell and nuclei annotations in fluorescence microscopy.
11. [DynamicNuclearNet](https://datasets.deepcell.org/): containing nuclei annotations in fluorescence microscopy.
12. [CellPose](https://www.cellpose.org/): containing cell annotations in fluorescence microscopy.
13. [OmniPose](https://osf.io/xmury/): containing segmentation annotations for bacterial cells in phase-contrast and fluorescence microscopy, and worms in brightfield microscopy.
14. [OrgaSegment](https://zenodo.org/records/10278229): containing segmentation annotations for organoids in brightfield microscopy.