## Workshop Overview
The workshop will be one hour and will be divided into three parts:

1. Short introduction (5-10 minutes, you can find the slides [here](https://docs.google.com/presentation/d/1Bw0gQ9Xio0HozKVaJl9-mxJBmCsQPh-1/edit?usp=sharing&ouid=113044948772353505255&rtpof=true&sd=true))
2. Using the `micro_sam` napari plugin for interactive 2D and 3D segmentation (10-15 minutes)
3. Using the plugin on your own or on example data, finetuning a custom model or an advanced application (35-40 minutes)

We will walk through how to use the `micro_sam` plugin for interactive segmentation in part 2, so that you can then apply it to your own data (or the example data that is most similar to your targeted application) in part 3.
Alternatively, you can also work on model finetuning or an advanced application, such as using the `micro_sam` python library for scripting, in part 3.

**Please read the [Workshop Preparation](#workshop-preparation) section carefully and follow the relevant steps before the workshop, so that we can get started right away.**

## Workshop Preparation

To prepare for the workshop please do the following:

- Decide what you want to do in the 3rd part of the workshop and follow the respective preparation steps. You have the following options:
  - High-throughput annotation of cells (or other structures) in 2D images, see [high-throughput annotation](#high-throughput-image-annotation).
  - 3D segmentation in light or electron microscopy, see [3D LM segmentation](#3d-lm-segmentation) and [3D EM segmentation](#3d-em-segmentation).
  - Finetuning SAM on custom data, see [model finetuning](#model-finetuning).
  - Writing your own scripts using the `micro_sam` python library, see [scripting](#scripting-with-micro_sam).

You can do all of this on your laptop with a CPU, except for model finetuning, which requires a GPU.
We have prepared a notebook that runs on cloud resources with a GPU for this.

If you want to learn more about the `micro_sam` napari plugin or python library you can check out the [documentation](https://computational-cell-analytics.github.io/micro-sam/) and our [tutorial videos](https://youtube.com/playlist?list=PLwYZXQJ3f36GQPpKCrSbHjGiH39X4XjSO&si=3q-cIRD6KuoZFmAM).

### Installation

Please make sure to install the latest version of `micro_sam` before the workshop using `conda` (or `mamba`).
You can create a new environment and install it like this:

### Download Embeddings for 3D EM Segmentation
We provide a script to download the image embeddings for the 3D segmentation problem in part 2.
The image embeddings are necessary to run interactive segmentation. Computing them on the CPU can take some time for volumetric data, but we support precomputing them and have done this for this data already.

To run the script you first need to use `git` to download this repository:

### High-Throughput Image Annotation

Note: you can use `micro_sam` with different models: the original models from Segment Anything and models finetuned for different microscopy segmentation tasks by us.
For cell segmentation you can either use `vit_b` (the original model) or `vit_b_lm` (our model). Our `vit_b_lm` model will be better for most cell segmentation tasks, but there may be cases where `vit_b` is better, so it makes sense to test both before annotating your data. Please refer to [our documentation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models) for details on the models.

**If you want to bring your own data for annotation please store it in a similar format to the example data. Note that we also support tif images and that you DO NOT have to provide segmentation masks; we include them here only for reference and they are not needed for annotation with micro_sam.**
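
If you would rather compare the two models from a small script instead of switching them in the plugin GUI, a minimal sketch could look like the following. It assumes the `annotator_2d` entry point of `micro_sam.sam_annotator` and uses a placeholder image path, so please check the library documentation for the exact arguments:

```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d  # assumed entry point, see the micro_sam docs

# Placeholder path: replace it with one of your own (or the example) 2D images.
image = imageio.imread("data/example_image.tif")

# Opens napari with the micro_sam annotator; rerun with model_type="vit_b" to compare the models.
annotator_2d(image=image, model_type="vit_b_lm")
```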

### 3D LM Segmentation

Note: you can use `micro_sam` with different models: the original models from Segment Anything and models finetuned for different microscopy segmentation tasks by us.
For cell or nucleus segmentation you can either use `vit_b` (the original model) or `vit_b_lm` (our model). Our `vit_b_lm` model will be better for most segmentation problems in light microscopy, but there may be cases where `vit_b` is better, so it makes sense to test both before annotating your data. Please refer to [our documentation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models) for details on the models.

**If you want to bring your own data for annotation please store it in a similar format to the example data. You DO NOT have to provide segmentation masks; we include them here only for reference and they are not needed for annotation with micro_sam. Please also precompute the embeddings for your data, see [Precompute Embeddings](#precompute-embeddings) for details.**
### 3D EM Segmentation
You can use the [3D annotation tool](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#annotator-3d) to run interactive segmentation for cells or organelles in volume electron microscopy.
We have prepared an example dataset for the workshop that you can use. It consists of a small crop from an EM volume of *Platynereis dumerilii*, from [Hernandez et al.](https://www.cell.com/cell/fulltext/S0092-8674(21)00876-X). The volume contains several cells, so you can segment the cells or cellular ultrastructure such as nuclei or mitochondria.

You can download the data with the script `download_datasets.py`:

```bash
$ python download_datasets.py -i data -d volume_em
```

Note: you can use `micro_sam` with different models: the original models from Segment Anything and models finetuned for different microscopy segmentation tasks by us.
For segmentation in EM you can either use `vit_b` (the original model) or `vit_b_em_organelles` (our model). Our `vit_b_em_organelles` model will likely be better for nucleus or mitochondrion segmentation, while for other structures `vit_b` will likely be better, so it makes sense to test both before annotating your data. Please refer to [our documentation](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#finetuned-models) for details on the models.

**If you want to bring your own data for annotation please store it in a similar format to the example data. You DO NOT have to provide segmentation masks; we include them here only for reference and they are not needed for annotation with micro_sam. Please also precompute the embeddings for your data, see [Precompute Embeddings](#precompute-embeddings) for details.**
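
As a rough sketch, you can also open the 3D annotator on the downloaded example volume directly from python. The function name `annotator_3d`, its arguments and the file paths below are assumptions, so double-check them against the documentation and the actual data layout:

```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_3d  # assumed entry point, see the micro_sam docs

# Placeholder path: adapt it to the file that download_datasets.py stored in data/volume_em.
volume = imageio.imread("data/volume_em/example_volume.tif")

annotator_3d(
    image=volume,
    embedding_path="embeddings/volume_em.zarr",  # placeholder; reuses precomputed embeddings if present
    model_type="vit_b_em_organelles",
)
```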

### Model Finetuning

Note: you need a GPU in order to finetune the model (finetuning on the CPU is possible but takes too long for the workshop).
We have prepared the notebook so that it can be run on [kaggle](https://www.kaggle.com/code/) with a GPU, which you can use for the course. If you want to use this option please make sure that you can log in there before the workshop.

**If you want to bring your own data for training please store it in a similar format to the example data. You have to bring both images and annotations (= instance segmentation masks) for training. If you want to use kaggle please also upload your data so that you can retrieve it within the notebook.**
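
For orientation, a finetuning run with the `micro_sam` training utilities could be sketched roughly as below. The helper names (`default_sam_loader`, `train_sam`), their arguments and the data paths are assumptions based on the library documentation, so please rely on the prepared notebook for the exact calls:

```python
import micro_sam.training as sam_training  # assumed module, see the micro_sam docs

patch_shape = (512, 512)

# Placeholder folders with paired image and instance-mask tifs; adapt them to your data layout.
train_loader = sam_training.default_sam_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=patch_shape, batch_size=1, with_segmentation_decoder=True,
)
val_loader = sam_training.default_sam_loader(
    raw_paths="data/val/images", raw_key="*.tif",
    label_paths="data/val/labels", label_key="*.tif",
    patch_shape=patch_shape, batch_size=1, with_segmentation_decoder=True,
)

# Finetunes vit_b on your data; on a GPU a few epochs are enough for a first model.
sam_training.train_sam(
    name="sam_finetuned", model_type="vit_b",
    train_loader=train_loader, val_loader=val_loader,
    n_epochs=5,
)
```
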
### Scripting with micro_sam
You can also use the [micro_sam python library](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#using-the-python-library) to implement your own functionality.
For example, you could implement a script to segment cells based on prompts derived from a nucleus segmentation via [batched inference](https://computational-cell-analytics.github.io/micro-sam/micro_sam/inference.html#batched_inference).
Or a script to automatically segment data with a finetuned model using [automatic segmentation](https://computational-cell-analytics.github.io/micro-sam/micro_sam/automatic_segmentation.html).
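
As a rough illustration of the second idea, scripted automatic segmentation with a finetuned model could look like this. It assumes the `get_predictor_and_segmenter` and `automatic_instance_segmentation` helpers from `micro_sam.automatic_segmentation` linked above; the checkpoint and image paths are placeholders:

```python
import imageio.v3 as imageio
from micro_sam.automatic_segmentation import (  # assumed helpers, see the linked documentation
    automatic_instance_segmentation, get_predictor_and_segmenter,
)

# Load the (finetuned) model; the checkpoint path is a placeholder for your own model.
predictor, segmenter = get_predictor_and_segmenter(
    model_type="vit_b_lm", checkpoint="finetuned_model.pt",
)

# Run automatic instance segmentation on a placeholder image and save the label image.
segmentation = automatic_instance_segmentation(
    predictor=predictor, segmenter=segmenter, input_path="data/example_image.tif",
)
imageio.imwrite("segmentation.tif", segmentation.astype("uint16"))
```
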
Feel free to contact us before the workshop if you have an idea for what you want to implement and would like to know if this is feasible and how to get started.

### Precompute Embeddings

You can use the command line to precompute embeddings for volumetric segmentation.
Here is the example script for pre-computing the embeddings on the [3D nucleus segmentation data](#3d-lm-segmentation).
```bash
$ micro_sam.precompute_embeddings -i data/nuclei_3d/images/X1.tif  # Filepath where inputs are stored.
    -m vit_b  # The model to use; you can provide the name of any model supported by 'micro_sam' (e.g. 'vit_b_lm').
    -e embeddings/vit_b/embed_x1.zarr  # Filepath where the computed embeddings will be stored.
```

You need to adapt the path to the data, choose the model you want to use (`vit_b`, `vit_b_lm`, `vit_b_em_organelles`) and adapt the path where the embeddings should be saved.
This step will take ca. 30 minutes for a volume with 200 image planes on a CPU.
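
If you prefer to do this from python instead of the command line, a minimal sketch looks like the following. It assumes the `get_sam_model` and `precompute_image_embeddings` functions of `micro_sam.util`; please check the library documentation for the exact arguments:

```python
import imageio.v3 as imageio
from micro_sam.util import get_sam_model, precompute_image_embeddings  # assumed helpers

# Same example volume and embedding path as in the command above.
volume = imageio.imread("data/nuclei_3d/images/X1.tif")
predictor = get_sam_model(model_type="vit_b")

# The embeddings are cached in the zarr file, so the annotator can load them instantly later.
precompute_image_embeddings(
    predictor, volume, save_path="embeddings/vit_b/embed_x1.zarr", ndim=3,
)
```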