
Commit 64968c1

Merge branch 'master' into tracking
2 parents 47807f8 + 9943d14 · commit 64968c1

18 files changed · +747 −101 lines changed

LICENSE

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2023 Computational Cell Analytics (Research Group of Prof. Dr. Constantin Pape)
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

README.md

Lines changed: 24 additions & 2 deletions
@@ -8,11 +8,15 @@ We implement napari applications for:
 
 **Early beta version**
 
-This is an early beta version. Any feedback is welcome, but please be aware that the functionality is evolving fast and not fully tested.
+This is an early beta version. Any feedback is welcome, but please be aware that the functionality is under active development and that several features are not finalized or thoroughly tested yet.
+Once the functionality has matured we plan to release the interactive annotation applications as [napari plugins](https://napari.org/stable/plugins/index.html).
+
 
 ## Functionality overview
 
 TODO
+- quick explanation and gifs
+
 
 ## Installation
 
@@ -51,15 +55,32 @@ pip install -e .
 
 ## Usage
 
+After the installation the three applications for interactive annotation can be started from the command line or within a python script:
+- **2d segmentation**: via the command `micro_sam.annotator_2d` or with the function `micro_sam.sam_annotator.annotator_2d` from python. Run `micro_sam.annotator_2d -h` or check out [examples/sam_annotator_2d](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py) for details.
+- **3d segmentation**: via the command `micro_sam.annotator_3d` or with the function `micro_sam.sam_annotator.annotator_3d` from python. Run `micro_sam.annotator_3d -h` or check out [examples/sam_annotator_3d](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_3d.py) for details.
+- **tracking**: via the command `micro_sam.annotator_tracking` or with the function `micro_sam.sam_annotator.annotator_tracking` from python. Run `micro_sam.annotator_tracking -h` or check out [examples/sam_annotator_tracking](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_tracking.py) for details.
+
 TODO
+- show image with annotated user interface (for 3d?!)
+- link to videos that explain the functionality for the 3 plugins
 
 ### Tips & Tricks
 
 TODO
+- speeding things up: precomputing the embeddings with a gpu, making input images smaller
+- correcting existing segmentations via `segmentation_results`
+- saving and loading intermediate results via segmentation results
+
+### Limitations
+
+TODO
+- automatic instance segmentation limitations
 
 ## Using the micro_sam library
 
 TODO
+- link to the example image series application
+
 
 ## Contributing
 
@@ -68,8 +89,9 @@ micro_sam <- library with utility functionality for using SAM for microscopy data
 /sam_annotator <- the napari plugins for annotation
 ```
 
+
 ## Citation
 
 If you are using this repository in your research please cite
 - [SegmentAnything](https://arxiv.org/abs/2304.02643)
-- and our repository on [zenodo](TODO) (we are working on a full publication)
+- and our repository on [zenodo](TODO) (we are working on a publication)
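
The new Usage and Tips & Tricks sections describe the entry points in prose only. As a minimal sketch of how they fit together (assuming the API used in the examples added by this commit; the image and embedding paths are placeholders), the 2d annotator can be started from python with the embeddings precomputed up front, e.g. on a machine with a GPU:

```python
import imageio
import micro_sam.util as util
from micro_sam.sam_annotator import annotator_2d

# placeholder input; any 2d image readable by imageio works here
im = imageio.imread("my_image.tif")

# optional: precompute the image embeddings (fast on a GPU) and cache them
# in a zarr file, so that the annotation session itself starts quickly
embedding_path = "./embeddings/embeddings-my_image.zarr"
predictor = util.get_sam_model()
util.precompute_image_embeddings(predictor, im, embedding_path)

# start the interactive 2d annotation tool with the cached embeddings
annotator_2d(im, embedding_path)
```

The tracking annotator follows the same pattern, cf. `annotator_tracking(timeseries, embedding_path=...)` in the examples below.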

development/.gitignore

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+embeddings/
+*.npy

development/tracking.py

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
+from glob import glob
+
+import numpy as np
+from elf.io import open_file
+from micro_sam.sam_annotator import annotator_tracking
+
+
+def debug_tracking(timeseries, embedding_path):
+    import micro_sam.util as util
+    from micro_sam.sam_annotator.annotator_tracking import _track_from_prompts
+
+    predictor = util.get_sam_model()
+    image_embeddings = util.precompute_image_embeddings(predictor, timeseries, embedding_path)
+
+    # seg = np.zeros(timeseries.shape, dtype="uint32")
+    seg = np.load("./seg.npy")
+    assert seg.shape == timeseries.shape
+    slices = np.array([0])
+    stop_upper = False
+
+    _track_from_prompts(seg, predictor, slices, image_embeddings, stop_upper, threshold=0.5, projection="bounding_box")
+
+
+def load_data():
+    pattern = "/home/pape/Work/data/incu_cyte/carmello/videos/MiaPaCa_flat_B3-3_registered/image-*"
+    paths = glob(pattern)
+    paths.sort()
+
+    timeseries = []
+    for p in paths[:45]:
+        with open_file(p, mode="r") as f:
+            timeseries.append(f["phase-contrast"][:])
+    timeseries = np.stack(timeseries)
+    return timeseries
+
+
+def main():
+    timeseries = load_data()
+    embedding_path = "./embeddings/embeddings-tracking.zarr"
+
+    # debug_tracking(timeseries, embedding_path)
+    annotator_tracking(timeseries, embedding_path=embedding_path)
+
+
+if __name__ == "__main__":
+    main()

Lines changed: 124 additions & 0 deletions
@@ -0,0 +1,124 @@
+# Example for a small application implemented using napari and the micro_sam library:
+# Iterate over a series of images in a folder and provide annotations with SAM.
+
+import os
+from glob import glob
+
+import imageio
+import micro_sam.util as util
+import napari
+import numpy as np
+
+from magicgui import magicgui
+from micro_sam.segment_from_prompts import segment_from_points
+from micro_sam.sam_annotator.util import create_prompt_menu, prompt_layer_to_points
+from napari import Viewer
+
+
+@magicgui(call_button="Segment Object [S]")
+def segment_widget(v: Viewer):
+    points, labels = prompt_layer_to_points(v.layers["prompts"])
+    seg = segment_from_points(PREDICTOR, points, labels)
+    v.layers["segmented_object"].data = seg.squeeze()
+    v.layers["segmented_object"].refresh()
+
+
+def image_series_annotator(image_paths, embedding_save_path, output_folder):
+    global PREDICTOR
+
+    os.makedirs(output_folder, exist_ok=True)
+
+    # get the sam predictor and precompute the image embeddings
+    PREDICTOR = util.get_sam_model()
+    images = np.stack([imageio.imread(p) for p in image_paths])
+    image_embeddings = util.precompute_image_embeddings(PREDICTOR, images, save_path=embedding_save_path)
+    util.set_precomputed(PREDICTOR, image_embeddings, i=0)
+
+    v = napari.Viewer()
+
+    # add the first image; next_image_id is the index of the image to load next
+    next_image_id = 1
+    v.add_image(images[0], name="image")
+
+    # add a layer for the segmented object
+    v.add_labels(data=np.zeros(images.shape[1:], dtype="uint32"), name="segmented_object")
+
+    # create the point layer for the sam prompts and add the widget for toggling the points
+    labels = ["positive", "negative"]
+    prompts = v.add_points(
+        data=[[0.0, 0.0], [0.0, 0.0]],  # FIXME workaround
+        name="prompts",
+        properties={"label": labels},
+        edge_color="label",
+        edge_color_cycle=["green", "red"],
+        symbol="o",
+        face_color="transparent",
+        edge_width=0.5,
+        size=12,
+        ndim=2,
+    )
+    prompts.data = []
+    prompts.edge_color_mode = "cycle"
+    prompt_widget = create_prompt_menu(prompts, labels)
+    v.window.add_dock_widget(prompt_widget)
+
+    # toggle the points between positive / negative
+    @v.bind_key("t")
+    def toggle_label(event=None):
+        # get the currently selected label
+        current_properties = prompts.current_properties
+        current_label = current_properties["label"][0]
+        new_label = "negative" if current_label == "positive" else "positive"
+        current_properties["label"] = np.array([new_label])
+        prompts.current_properties = current_properties
+        prompts.refresh()
+        prompts.refresh_colors()
+
+    # bind the segmentation to the key 's'
+    @v.bind_key("s")
+    def _segment(v):
+        segment_widget(v)
+
+    #
+    # the functionality for saving segmentations and going to the next image
+    #
+
+    def _save_segmentation(seg, output_folder, image_path):
+        fname = os.path.basename(image_path)
+        save_path = os.path.join(output_folder, os.path.splitext(fname)[0] + ".tif")
+        imageio.imwrite(save_path, seg)
+
+    def _next(v):
+        nonlocal next_image_id
+        # stop if all images have been annotated already
+        if next_image_id >= images.shape[0]:
+            print("Last image!")
+            return
+
+        v.layers["image"].data = images[next_image_id]
+        util.set_precomputed(PREDICTOR, image_embeddings, i=next_image_id)
+
+        v.layers["segmented_object"].data = np.zeros(images[0].shape, dtype="uint32")
+        v.layers["prompts"].data = []
+
+        next_image_id += 1
+
+    @v.bind_key("n")
+    def next_image(v):
+        seg = v.layers["segmented_object"].data
+        if seg.max() == 0:
+            print("This image has not been segmented yet, doing nothing!")
+            return
+
+        # save the segmentation for the current image and move on to the next one
+        _save_segmentation(seg, output_folder, image_paths[next_image_id - 1])
+        _next(v)
+
+    napari.run()
+
+
+# this uses data from the cell tracking challenge as example data
+# see 'sam_annotator_tracking' for examples
+def main():
+    image_paths = sorted(glob("./data/DIC-C2DH-HeLa/train/01/*.tif"))[:50]
+    image_series_annotator(image_paths, "./embeddings/embeddings-ctc.zarr", "segmented-series")
+
+
+if __name__ == "__main__":
+    main()

examples/instance_segmentation.py

Lines changed: 60 additions & 0 deletions
@@ -0,0 +1,60 @@
+import micro_sam.util as util
+import napari
+
+from elf.io import open_file
+from micro_sam.segment_instances import segment_from_embeddings
+from micro_sam.visualization import compute_pca
+
+
+def mito_segmentation():
+    input_path = "./data/Lucchi++/Test_In"
+    with open_file(input_path) as f:
+        raw = f["*.png"][-1, :768, :768]
+
+    predictor = util.get_sam_model()
+    image_embeddings = util.precompute_image_embeddings(predictor, raw, "./embeddings/embeddings-mito2d.zarr")
+    embedding_pca = compute_pca(image_embeddings["features"])
+
+    seg, initial_seg = segment_from_embeddings(predictor, image_embeddings=image_embeddings, return_initial_seg=True)
+
+    v = napari.Viewer()
+    v.add_image(raw)
+    v.add_image(embedding_pca, scale=(12, 12))
+    v.add_labels(seg)
+    v.add_labels(initial_seg)
+    napari.run()
+
+
+def cell_segmentation():
+    path = "./DIC-C2DH-HeLa/train/01"
+    with open_file(path, mode="r") as f:
+        timeseries = f["*.tif"][:50]
+
+    frame = 11
+
+    predictor = util.get_sam_model()
+    image_embeddings = util.precompute_image_embeddings(predictor, timeseries, "./embeddings/embeddings-ctc.zarr")
+    embedding_pca = compute_pca(image_embeddings["features"][frame])
+
+    seg, initial_seg = segment_from_embeddings(
+        predictor, image_embeddings=image_embeddings, i=frame, return_initial_seg=True
+    )
+
+    v = napari.Viewer()
+    v.add_image(timeseries[frame])
+    v.add_image(embedding_pca, scale=(8, 8))
+    v.add_labels(seg)
+    v.add_labels(initial_seg)
+    napari.run()
+
+
+def main():
+    # automatic segmentation for the data from Lucchi et al. (see 'sam_annotator_3d.py')
+    # mito_segmentation()
+
+    # automatic segmentation for data from the cell tracking challenge (see 'sam_annotator_tracking.py')
+    cell_segmentation()
+
+
+if __name__ == "__main__":
+    main()
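
A remark on the `scale` arguments used for the PCA overlays above: SAM's image encoder maps its (resized) 1024×1024 input to a 64×64 embedding grid, so the PCA image has to be upscaled by `image_size / 64` to align with the raw data — 768 / 64 = 12 for the Lucchi crop and, assuming 512×512 frames for DIC-C2DH-HeLa, 512 / 64 = 8. A hypothetical helper (not part of the commit) makes the relation explicit:

```python
def embedding_scale(image_size, grid_size=64):
    """Scale factor to overlay a PCA of the SAM embeddings on a square image,
    assuming SAM's 64x64 embedding grid."""
    return (image_size / grid_size, image_size / grid_size)

print(embedding_scale(768))  # (12.0, 12.0), matching scale=(12, 12) above
print(embedding_scale(512))  # (8.0, 8.0), matching scale=(8, 8) above
```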

examples/sam_annotator_2d.py

Lines changed: 11 additions & 2 deletions
@@ -2,12 +2,21 @@
 from micro_sam.sam_annotator import annotator_2d
 
 
-def main():
+# TODO describe how to get the data and don't use hard-coded system path
+def livecell_annotator():
     im = imageio.imread(
         "/home/pape/Work/data/incu_cyte/livecell/images/livecell_test_images/A172_Phase_C7_1_01d04h00m_4.tif"
     )
     embedding_path = "./embeddings/embeddings-livecell_cropped.zarr"
-    annotator_2d(im, embedding_path)
+    annotator_2d(im, embedding_path, show_embeddings=True)
+
+
+def main():
+    # 2d annotator for livecell data
+    # livecell_annotator()
+
+    # 2d annotator for cell tracking challenge hela data
+    hela_2d_annotator()
 
 
 if __name__ == "__main__":

examples/sam_annotator_tracking.py

Lines changed: 9 additions & 32 deletions
@@ -1,41 +1,18 @@
-from glob import glob
-
-import h5py
-import numpy as np
+from elf.io import open_file
 from micro_sam.sam_annotator import annotator_tracking
 
 
-# FOR DEBUGGING / DEVELOPMENT
-def _check_tracking(timeseries, embedding_path):
-    import micro_sam.util as util
-    from micro_sam.sam_annotator.annotator_tracking import _track_from_prompts
-
-    predictor = util.get_sam_model()
-    image_embeddings = util.precompute_image_embeddings(predictor, timeseries, embedding_path)
-
-    # seg = np.zeros(timeseries.shape, dtype="uint32")
-    seg = np.load("./seg.npy")
-    assert seg.shape == timeseries.shape
-    slices = np.array([0])
-    stop_upper = False
-
-    _track_from_prompts(seg, predictor, slices, image_embeddings, stop_upper, threshold=0.5, method="bounding_box")
+# TODO describe how to get the data from CTC
+def track_ctc_data():
+    path = "./data/DIC-C2DH-HeLa/train/01"
+    with open_file(path, mode="r") as f:
+        timeseries = f["*.tif"][:50]
+    annotator_tracking(timeseries, embedding_path="./embeddings/embeddings-ctc.zarr")
 
 
 def main():
-    pattern = "/home/pape/Work/data/incu_cyte/carmello/videos/MiaPaCa_flat_B3-3_registered/image-*"
-    paths = glob(pattern)
-    paths.sort()
-
-    timeseries = []
-    for p in paths[:45]:
-        with h5py.File(p) as f:
-            timeseries.append(f["phase-contrast"][:])
-    timeseries = np.stack(timeseries)
-
-    embedding_path = "./embeddings/embeddings-tracking.zarr"
-    # _check_tracking(timeseries, embedding_path)
-    annotator_tracking(timeseries, embedding_path=embedding_path)
+    # run interactive tracking for data from the cell tracking challenge
+    track_ctc_data()
 
 
 if __name__ == "__main__":
