
Commit 9aa64a5

Merge pull request #327 from computational-cell-analytics/dev
Changes for new release
2 parents e9f7689 + 102c4a4 commit 9aa64a5


131 files changed: +3,933 −1,769 lines

.github/workflows/test.yaml

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [ubuntu-latest]
+        os: [ubuntu-latest, windows-latest, macos-latest]
         python-version: ["3.10"]
 
     steps:

development/benchmark.py

Lines changed: 1 addition & 1 deletion
@@ -180,7 +180,7 @@ def main():
     args = parser.parse_args()
 
     model_type = args.model_type
-    device = util._get_device(args.device)
+    device = util.get_device(args.device)
     print("Running benchmarks for", model_type)
     print("with device:", device)
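For reference, the renamed device helper can also be called on its own. This is a minimal sketch, assuming `micro_sam.util.get_device` accepts an optional device string and otherwise falls back to the best available device:

import micro_sam.util as util

# Pass None to let micro_sam pick the best available device,
# or an explicit string such as "cpu" or "cuda" to override it.
device = util.get_device(None)
print("Using device:", device)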

development/seg_with_decoder.py

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
+import imageio.v3 as imageio
+import napari
+
+from micro_sam.instance_segmentation import (
+    load_instance_segmentation_with_decoder_from_checkpoint, mask_data_to_segmentation
+)
+from micro_sam.util import precompute_image_embeddings
+
+checkpoint = "./for_decoder/best.pt"
+segmenter = load_instance_segmentation_with_decoder_from_checkpoint(checkpoint, model_type="vit_b")
+
+image_path = "/home/pape/Work/data/incu_cyte/livecell/images/livecell_train_val_images/A172_Phase_A7_1_02d00h00m_1.tif"
+image = imageio.imread(image_path)
+
+embedding_path = "./for_decoder/A172_Phase_A7_1_02d00h00m_1.zarr"
+image_embeddings = precompute_image_embeddings(
+    segmenter._predictor, image, embedding_path,
+)
+# image_embeddings = None
+
+print("Start segmentation ...")
+segmenter.initialize(image, image_embeddings)
+masks = segmenter.generate(output_mode="binary_mask")
+segmentation = mask_data_to_segmentation(masks, with_background=True)
+print("Segmentation done")
+
+v = napari.Viewer()
+v.add_image(image)
+# v.add_image(segmenter._foreground)
+v.add_labels(segmentation)
+napari.run()

doc/finetuned_models.md

Lines changed: 26 additions & 10 deletions
@@ -1,13 +1,14 @@
 # Finetuned models
 
-We provide models that were finetuned on microscopy data using `micro_sam.training`. They are hosted on zenodo. We currently offer the following models:
+In addition to the original Segment Anything models, we provide models that were finetuned on microscopy data using the functionality from `micro_sam.training`.
+The models are hosted on zenodo. We currently offer the following models:
 - `vit_h`: Default Segment Anything model with vit-h backbone.
 - `vit_l`: Default Segment Anything model with vit-l backbone.
 - `vit_b`: Default Segment Anything model with vit-b backbone.
-- `vit_h_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-h backbone.
+- `vit_t`: Segment Anything model with vit-tiny backbone. From the [mobile sam publication](https://arxiv.org/abs/2306.14289).
 - `vit_b_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone.
-- `vit_h_em`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-h backbone.
-- `vit_b_em`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-b backbone.
+- `vit_b_em_organelles`: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-b backbone.
+- `vit_b_em_boundaries`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-b backbone.
 
 See the two figures below of the improvements through the finetuned model for LM and EM data.
 
@@ -20,17 +21,32 @@ You can select which of the models is used in the annotation tools by selecting
 <img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/model-type-selector.png" width="256">
 
 To use a specific model in the python library you need to pass the corresponding name as value to the `model_type` parameter exposed by all relevant functions.
-See for example the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_2d.py#L62) where `use_finetuned_model` can be set to `True` to use the `vit_h_lm` model.
+See for example the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_2d.py#L62) where `use_finetuned_model` can be set to `True` to use the `vit_b_lm` model.
+
+Note that we are still working on improving these models and may update them from time to time. All older models will stay available for download on zenodo, see [model sources](#model-sources) below.
+
 
 ## Which model should I choose?
 
 As a rule of thumb:
-- Use the `_lm` models for segmenting cells or nuclei in light microscopy.
-- Use the `_em` models for segmenting cells or neurites in electron microscopy.
-  - Note that this model does not work well for segmenting mitochondria or other organelles because it is biased towards segmenting the full cell / cellular compartment.
-- For other cases use the default models.
+- Use the `vit_b_lm` model for segmenting cells or nuclei in light microscopy.
+- Use the `vit_b_em_organelles` models for segmenting mitochondria, nuclei or other organelles in electron microscopy.
+- Use the `vit_b_em_boundaries` models for segmenting cells or neurites in electron microscopy.
+- For other use-cases use one of the default models.
 
 See also the figures above for examples where the finetuned models work better than the vanilla models.
 Currently the model `vit_h` is used by default.
 
-We are working on releasing more fine-tuned models, in particular for mitochondria and other organelles in EM.
+We are working on further improving these models and adding new models for other biomedical imaging domains.
+
+
+## Model Sources
+
+Here is an overview of all finetuned models we have released to zenodo so far:
+- [vit_b_em_boundaries](https://zenodo.org/records/10524894): for segmenting compartments delineated by boundaries such as cells or neurites in EM.
+- [vit_b_em_organelles](https://zenodo.org/records/10524828): for segmenting mitochondria, nuclei or other organelles in EM.
+- [vit_b_lm](https://zenodo.org/records/10524791): for segmenting cells and nuclei in LM.
+- [vit_h_em](https://zenodo.org/records/8250291): this model is outdated.
+- [vit_h_lm](https://zenodo.org/records/8250299): this model is outdated.
+
+Some of these models contain multiple versions.
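To make the `model_type` selection above concrete, here is a minimal sketch for starting the 2d annotator with one of the finetuned models. It follows the pattern of `examples/annotator_2d.py` in this commit; the image path is a placeholder and the embedding cache location is an assumption based on `micro_sam.util.get_cache_directory`:

import os

import imageio.v3 as imageio

from micro_sam.sam_annotator import annotator_2d
from micro_sam.util import get_cache_directory

# Cache the image embeddings alongside the other micro_sam caches.
embedding_cache = os.path.join(get_cache_directory(), "embeddings")
os.makedirs(embedding_cache, exist_ok=True)

# "your_image.tif" is a placeholder for any 2d image readable by imageio.
image = imageio.imread("your_image.tif")

# Select the finetuned light microscopy model via model_type.
annotator_2d(
    image,
    embedding_path=os.path.join(embedding_cache, "embeddings-your-image.zarr"),
    model_type="vit_b_lm",
)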

doc/images/model-type-selector.png

68 KB

environment_cpu.yaml

Lines changed: 2 additions & 2 deletions
@@ -8,13 +8,13 @@ dependencies:
   - napari
   - pip
   - pooch
+  - python-xxhash
   - python-elf >=0.4.8
   - pytorch
   - segment-anything
   - torchvision
-  - torch_em >=0.5.1
+  - torch_em >=0.6.0
   - tqdm
   - timm
   - pip:
     - git+https://github.com/ChaoningZhang/MobileSAM.git
-    # - git+https://github.com/facebookresearch/segment-anything.git

environment_gpu.yaml

Lines changed: 2 additions & 2 deletions
@@ -8,14 +8,14 @@ dependencies:
   - napari
   - pip
   - pooch
+  - python-xxhash
   - python-elf >=0.4.8
   - pytorch
   - pytorch-cuda>=11.7 # you may need to update the cuda version to match your system
   - segment-anything
   - torchvision
-  - torch_em >=0.5.1
+  - torch_em >=0.6.0
   - tqdm
   - timm
   - pip:
     - git+https://github.com/ChaoningZhang/MobileSAM.git
-    # - git+https://github.com/facebookresearch/segment-anything.git

examples/annotator_2d.py

Lines changed: 23 additions & 17 deletions
@@ -1,21 +1,28 @@
+import os
+
 import imageio.v3 as imageio
 from micro_sam.sam_annotator import annotator_2d
 from micro_sam.sample_data import fetch_hela_2d_example_data, fetch_livecell_example_data, fetch_wholeslide_example_data
+from micro_sam.util import get_cache_directory
+
+DATA_CACHE = os.path.join(get_cache_directory(), "sample_data")
+EMBEDDING_CACHE = os.path.join(get_cache_directory(), "embeddings")
+os.makedirs(EMBEDDING_CACHE, exist_ok=True)
 
 
 def livecell_annotator(use_finetuned_model):
     """Run the 2d annotator for an example image from the LiveCELL dataset.
 
     See https://doi.org/10.1038/s41592-021-01249-6 for details on the data.
     """
-    example_data = fetch_livecell_example_data("./data")
+    example_data = fetch_livecell_example_data(DATA_CACHE)
     image = imageio.imread(example_data)
 
     if use_finetuned_model:
-        embedding_path = "./embeddings/embeddings-livecell-vit_h_lm.zarr"
-        model_type = "vit_h_lm"
+        embedding_path = os.path.join(EMBEDDING_CACHE, "embeddings-livecell-vit_b_lm.zarr")
+        model_type = "vit_b_lm"
     else:
-        embedding_path = "./embeddings/embeddings-livecell.zarr"
+        embedding_path = os.path.join(EMBEDDING_CACHE, "embeddings-livecell.zarr")
         model_type = "vit_h"
 
     annotator_2d(image, embedding_path, show_embeddings=False, model_type=model_type)
@@ -24,14 +31,14 @@ def livecell_annotator(use_finetuned_model):
 def hela_2d_annotator(use_finetuned_model):
     """Run the 2d annotator for an example image form the cell tracking challenge HeLa 2d dataset.
     """
-    example_data = fetch_hela_2d_example_data("./data")
+    example_data = fetch_hela_2d_example_data(DATA_CACHE)
     image = imageio.imread(example_data)
 
     if use_finetuned_model:
-        embedding_path = "./embeddings/embeddings-hela2d-vit_h_lm.zarr"
-        model_type = "vit_h_lm"
+        embedding_path = os.path.join(EMBEDDING_CACHE, "embeddings-hela2d-vit_b_lm.zarr")
+        model_type = "vit_b_lm"
     else:
-        embedding_path = "./embeddings/embeddings-hela2d.zarr"
+        embedding_path = os.path.join(EMBEDDING_CACHE, "embeddings-hela2d.zarr")
         model_type = "vit_h"
 
     annotator_2d(image, embedding_path, show_embeddings=False, model_type=model_type, precompute_amg_state=True)
@@ -43,29 +50,28 @@ def wholeslide_annotator(use_finetuned_model):
 
     See https://neurips22-cellseg.grand-challenge.org/ for details on the data.
     """
-    example_data = fetch_wholeslide_example_data("./data")
+    example_data = fetch_wholeslide_example_data(DATA_CACHE)
     image = imageio.imread(example_data)
 
     if use_finetuned_model:
-        embedding_path = "./embeddings/whole-slide-embeddings-vit_h_lm.zarr"
-        model_type = "vit_h_lm"
+        embedding_path = os.path.join(EMBEDDING_CACHE, "whole-slide-embeddings-vit_b_lm.zarr")
+        model_type = "vit_b_lm"
     else:
-        embedding_path = "./embeddings/whole-slide-embeddings.zarr"
+        embedding_path = os.path.join(EMBEDDING_CACHE, "whole-slide-embeddings.zarr")
         model_type = "vit_h"
 
     annotator_2d(image, embedding_path, tile_shape=(1024, 1024), halo=(256, 256), model_type=model_type)
 
 
 def main():
-    # whether to use the fine-tuned SAM model
-    # this feature is still experimental!
-    use_finetuned_model = False
+    # Whether to use the fine-tuned SAM model for light microscopy data.
+    use_finetuned_model = True
 
     # 2d annotator for livecell data
-    # livecell_annotator(use_finetuned_model)
+    livecell_annotator(use_finetuned_model)
 
     # 2d annotator for cell tracking challenge hela data
-    hela_2d_annotator(use_finetuned_model)
+    # hela_2d_annotator(use_finetuned_model)
 
     # 2d annotator for a whole slide image
     # wholeslide_annotator(use_finetuned_model)

examples/annotator_3d.py

Lines changed: 25 additions & 13 deletions
@@ -1,33 +1,45 @@
+import os
+
 from elf.io import open_file
 from micro_sam.sam_annotator import annotator_3d
 from micro_sam.sample_data import fetch_3d_example_data
+from micro_sam.util import get_cache_directory
+
+DATA_CACHE = os.path.join(get_cache_directory(), "sample_data")
+EMBEDDING_CACHE = os.path.join(get_cache_directory(), "embeddings")
+os.makedirs(EMBEDDING_CACHE, exist_ok=True)
 
 
-def em_3d_annotator(use_finetuned_model):
+def em_3d_annotator(finetuned_model):
     """Run the 3d annotator for an example EM volume."""
     # download the example data
-    example_data = fetch_3d_example_data("./data")
+    example_data = fetch_3d_example_data(DATA_CACHE)
     # load the example data (load the sequence of tif files as 3d volume)
    with open_file(example_data) as f:
         raw = f["*.png"][:]
 
-    if use_finetuned_model:
-        embedding_path = "./embeddings/embeddings-lucchi-vit_h_em.zarr"
-        model_type = "vit_h_em"
-    else:
-        embedding_path = "./embeddings/embeddings-lucchi.zarr"
+    if not finetuned_model:
+        embedding_path = os.path.join(EMBEDDING_CACHE, "embeddings-lucchi.zarr")
         model_type = "vit_h"
+    else:
+        assert finetuned_model in ("organelles", "boundaries")
+        embedding_path = os.path.join(EMBEDDING_CACHE, f"embeddings-lucchi-vit_b_em_{finetuned_model}.zarr")
+        model_type = f"vit_b_em_{finetuned_model}"
+    print(embedding_path)
 
     # start the annotator, cache the embeddings
-    annotator_3d(raw, embedding_path, model_type=model_type, show_embeddings=False)
+    annotator_3d(raw, embedding_path, model_type=model_type)
 
 
 def main():
-    # whether to use the fine-tuned SAM model
-    # this feature is still experimental!
-    use_finetuned_model = False
-
-    em_3d_annotator(use_finetuned_model)
+    # Whether to use the fine-tuned SAM model for mitochondria (organelles) or boundaries.
+    # valid choices are:
+    # - None / False (will use the vanilla model)
+    # - "organelles": will use the model for mitochondria and other organelles
+    # - "boundaries": will use the model for boundary based structures
+    finetuned_model = "boundaries"
+
+    em_3d_annotator(finetuned_model)
 
 
 if __name__ == "__main__":

examples/annotator_tracking.py

Lines changed: 14 additions & 8 deletions
@@ -1,32 +1,38 @@
+import os
+
 from elf.io import open_file
 from micro_sam.sam_annotator import annotator_tracking
 from micro_sam.sample_data import fetch_tracking_example_data
+from micro_sam.util import get_cache_directory
+
+DATA_CACHE = os.path.join(get_cache_directory(), "sample_data")
+EMBEDDING_CACHE = os.path.join(get_cache_directory(), "embeddings")
+os.makedirs(EMBEDDING_CACHE, exist_ok=True)
 
 
 def track_ctc_data(use_finetuned_model):
     """Run interactive tracking for data from the cell tracking challenge.
     """
     # download the example data
-    example_data = fetch_tracking_example_data("./data")
+    example_data = fetch_tracking_example_data(DATA_CACHE)
     # load the example data (load the sequence of tif files as timeseries)
     with open_file(example_data, mode="r") as f:
         timeseries = f["*.tif"]
 
     if use_finetuned_model:
-        embedding_path = "./embeddings/embeddings-ctc-vit_h_lm.zarr"
-        model_type = "vit_h_lm"
+        embedding_path = os.path.join(EMBEDDING_CACHE, "embeddings-ctc-vit_b_lm.zarr")
+        model_type = "vit_b_lm"
     else:
-        embedding_path = "./embeddings/embeddings-ctc.zarr"
+        embedding_path = os.path.join(EMBEDDING_CACHE, "embeddings-ctc.zarr")
         model_type = "vit_h"
 
     # start the annotator with cached embeddings
-    annotator_tracking(timeseries, embedding_path=embedding_path, show_embeddings=False, model_type=model_type)
+    annotator_tracking(timeseries, embedding_path=embedding_path, model_type=model_type)
 
 
 def main():
-    # whether to use the fine-tuned SAM model
-    # this feature is still experimental!
-    use_finetuned_model = False
+    # Whether to use the fine-tuned SAM model.
+    use_finetuned_model = True
     track_ctc_data(use_finetuned_model)
 
 
