
Commit 7e82cec

Co-authored-by: Constantin Pape <[email protected]>
1 parent 4136634 commit 7e82cec

File tree: 12 files changed, +26 −26 lines

README.md

Lines changed: 6 additions & 6 deletions
@@ -19,7 +19,7 @@ We implement napari applications for:
  <img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfca3d9b-dba5-440b-b0f9-72a0683ac410" width="256">
  <img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/aefbf99f-e73a-4125-bb49-2e6592367a64" width="256">

- If you run into any problems or have questions regarding our tool please open an issue on Github or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
+ If you run into any problems or have questions regarding our tool please open an [issue](https://github.com/computational-cell-analytics/micro-sam/issues/new/choose) on Github or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam`, and tagging [@constantinpape](https://forum.image.sc/u/constantinpape/summary) and [@anwai98](https://forum.image.sc/u/anwai98/summary).

  ## Installation and Usage

@@ -31,7 +31,7 @@ Please check [the documentation](https://computational-cell-analytics.github.io/

  We welcome new contributions!

- If you are interested in contributing to micro-sam, please see the [contributing guide](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#contribution-guide). The first step is to [discuss your idea in a new issue](https://github.com/computational-cell-analytics/micro-sam/issues/new) with the current developers.
+ If you are interested in contributing to `micro-sam`, please see the [contributing guide](https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#contribution-guide). The first step is to [discuss your idea in a new issue](https://github.com/computational-cell-analytics/micro-sam/issues/new) with the current developers.

  ## Citation

@@ -50,12 +50,12 @@ There are a few other napari plugins build around Segment Anything:
  - https://github.com/hiroalchem/napari-SAM4IS

  Compared to these we support more applications (2d, 3d and tracking), and provide finetuning methods and finetuned models for microscopy data.
- [WebKnossos](https://webknossos.org/) also offers integration of SegmentAnything for interactive segmentation.
+ [WebKnossos](https://webknossos.org/) and [QuPath](https://qupath.github.io/) also offer integration of Segment Anything for interactive segmentation.

  We have also built follow-up work that is based on `micro_sam`:
- - https://github.com/computational-cell-analytics/patho-sam - improves SAM for histopathology
- - https://github.com/computational-cell-analytics/medico-sam - improves it for medical imaging
- - https://github.com/computational-cell-analytics/peft-sam - studies parameter efficient fine-tuning for SAM
+ - https://github.com/computational-cell-analytics/patho-sam - improves SAM for histopathology.
+ - https://github.com/computational-cell-analytics/medico-sam - improves SAM for medical imaging.
+ - https://github.com/computational-cell-analytics/peft-sam - studies parameter efficient fine-tuning for SAM.

  ## Release Oveverview

doc/start_page.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ After installing `micro_sam`, you can start napari from within your environment
  ```bash
  $ napari
  ```
- After starting napari, you can select the annotation tool you want to use from `Plugins -> SegmentAnything for Microscopy`. Check out the [quickstart tutorial video](https://youtu.be/gcv0fa84mCc) for a short introduction, the video of our [virtual I2K tutorial](https://www.youtube.com/watch?v=dxjU4W7bCis&list=PLdA9Vgd1gxTbvxmtk9CASftUOl_XItjDN&index=33) for an in-depth explanation and [the annotation tool section](#annotation-tools) for details.
+ After starting napari, you can select the annotation tool you want to use from `Plugins -> Segment Anything for Microscopy`. Check out the [quickstart tutorial video](https://youtu.be/gcv0fa84mCc) for a short introduction, the video of our [virtual I2K tutorial](https://www.youtube.com/watch?v=dxjU4W7bCis&list=PLdA9Vgd1gxTbvxmtk9CASftUOl_XItjDN&index=33) for an in-depth explanation and [the annotation tool section](#annotation-tools) for details.

  The `micro_sam` python library can be imported via

environment.yaml

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@ dependencies:
  - nifty >=1.2.3
  - imagecodecs
  - magicgui
- - napari >=0.5.0
+ - napari >=0.5.0,<0.6.0
  - natsort
  - pip
  - pooch
@@ -23,7 +23,7 @@ dependencies:
  - pytorch >=2.5
  - segment-anything
  - torchvision
- - torch_em >=0.7.7
+ - torch_em >=0.7.8
  - tqdm
  - timm
  - xarray <2025.3.0
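The new `napari >=0.5.0,<0.6.0` pin bounds the dependency on both sides. As a rough illustration of what such a specifier accepts, here is a minimal pure-Python sketch (it handles plain `MAJOR.MINOR.PATCH` strings only; a real resolver would use the `packaging` library's `SpecifierSet` instead):

```python
def parse(version):
    # Split a simple "MAJOR.MINOR.PATCH" string into an integer tuple.
    return tuple(int(p) for p in version.split("."))

def satisfies(version, lower="0.5.0", upper="0.6.0"):
    # True if lower <= version < upper, mirroring ">=0.5.0,<0.6.0".
    return parse(lower) <= parse(version) < parse(upper)

print(satisfies("0.5.2"))  # True: within the pinned range
print(satisfies("0.6.0"))  # False: excluded by the new upper bound
```

Tuple comparison works here because `(0, 5, 2) < (0, 6, 0)` compares component-wise, which matches semantic-version ordering for plain numeric versions.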

examples/use_as_library/instance_segmentation.py

Lines changed: 3 additions & 3 deletions
@@ -18,10 +18,10 @@ def cell_segmentation():
      predictor = util.get_sam_model()
      embeddings = util.precompute_image_embeddings(predictor, image, save_path=embedding_path)

-     # Use the instance segmentation logic of SegmentAnything.
+     # Use the instance segmentation logic of Segment Anything.
      # This works by covering the image with a grid of points, getting the masks for all the poitns
      # and only keeping the plausible ones (according to the model predictions).
-     # While the functionality here does the same as the implementation from SegmentAnything,
+     # While the functionality here does the same as the implementation from Segment Anything,
      # we enable changing the hyperparameters, e.g. 'pred_iou_thresh', without recomputing masks and embeddings,
      # to support (interactive) evaluation of different hyperparameters.

@@ -60,7 +60,7 @@ def cell_segmentation_with_tiling():
          predictor, image, save_path=embedding_path, tile_shape=(1024, 1024), halo=(256, 256)
      )

-     # Use the instance segmentation logic of SegmentAnything.
+     # Use the instance segmentation logic of Segment Anything.
      # This works by covering the image with a grid of points, getting the masks for all the poitns
      # and only keeping the plausible ones (according to the model predictions).
      # The functionality here is similar to the instance segmentation in Segment Anything,
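The comments in this example describe Segment Anything's automatic instance segmentation: prompt the model with a grid of points over the image, then keep only the masks whose predicted quality passes `pred_iou_thresh`. A minimal, library-free sketch of those two steps (the names `point_grid` and `filter_masks` are illustrative, not micro_sam's API; the scores stand in for the model's own IoU predictions):

```python
def point_grid(shape, points_per_side=4):
    # Evenly spaced prompt coordinates covering an image of the given 2d shape.
    step_y = (shape[0] - 1) / (points_per_side - 1)
    step_x = (shape[1] - 1) / (points_per_side - 1)
    return [(round(i * step_y), round(j * step_x))
            for i in range(points_per_side) for j in range(points_per_side)]

def filter_masks(masks_with_scores, pred_iou_thresh=0.88):
    # Keep only masks whose predicted IoU passes the threshold.
    # masks_with_scores is a list of (mask, predicted_iou) pairs.
    return [mask for mask, score in masks_with_scores if score >= pred_iou_thresh]

points = point_grid((256, 256), points_per_side=4)
print(len(points))  # 16: a 4x4 grid of prompts
```

Because the masks and their scores are computed once, re-running `filter_masks` with a different `pred_iou_thresh` is cheap, which is the point the example's comments make about tuning hyperparameters without recomputing masks or embeddings.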

micro_sam/evaluation/inference.py

Lines changed: 2 additions & 2 deletions
@@ -138,7 +138,7 @@ def precompute_all_embeddings(
      To enable running different inference tasks in parallel afterwards.

      Args:
-         predictor: The SegmentAnything predictor.
+         predictor: The Segment Anything predictor.
          image_paths: The image file paths.
          embedding_dir: The directory where the embeddings will be saved.
      """
@@ -267,7 +267,7 @@ def run_inference_with_prompts(
      """Run segment anything inference for multiple images using prompts derived from groundtruth.

      Args:
-         predictor: The SegmentAnything predictor.
+         predictor: The Segment Anything predictor.
          image_paths: The image file paths.
          gt_paths: The ground-truth segmentation file paths.
          embedding_dir: The directory where the image embddings will be saved or are already saved.

micro_sam/precompute_state.py

Lines changed: 1 addition & 1 deletion
@@ -241,7 +241,7 @@ def precompute_state(
          output_path: The output path where the embeddings and other state will be saved.
          pattern: Glob pattern to select files in a folder. The embeddings will be computed
              for each of these files. To select all files in a folder pass "*".
-         model_type: The SegmentAnything model to use. Will use the standard vit_l model by default.
+         model_type: The Segment Anything model to use. Will use the `vit_b_lm` model by default.
          checkpoint_path: Path to a checkpoint for a custom model.
          key: The key to the input file. This is needed for contaner files (e.g. hdf5 or zarr)
              or to load several images as 3d volume. Provide a glob pattern, e.g. "*.tif", for this case.

micro_sam/sam_annotator/annotator_tracking.py

Lines changed: 1 addition & 1 deletion
@@ -213,7 +213,7 @@ def annotator_tracking(
              If `None` then the whole image is passed to Segment Anything.
          halo: Shape of the overlap between tiles, which is needed to segment objects on tile boarders.
          return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
-         viewer: The viewer to which the SegmentAnything functionality should be added.
+         viewer: The viewer to which the Segment Anything functionality should be added.
              This enables using a pre-initialized viewer.
          precompute_amg_state: Whether to precompute the state for automatic mask generation.
              This will take more time when precomputing embeddings, but will then make

micro_sam/sam_annotator/image_series_annotator.py

Lines changed: 2 additions & 2 deletions
@@ -154,7 +154,7 @@ def image_series_annotator(
          tile_shape: Shape of tiles for tiled embedding prediction.
              If `None` then the whole image is passed to Segment Anything.
          halo: Shape of the overlap between tiles, which is needed to segment objects on tile boarders.
-         viewer: The viewer to which the SegmentAnything functionality should be added.
+         viewer: The viewer to which the Segment Anything functionality should be added.
              This enables using a pre-initialized viewer.
          return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
          precompute_amg_state: Whether to precompute the state for automatic mask generation.
@@ -339,7 +339,7 @@ def image_folder_annotator(
          output_folder: The folder where the segmentation results are saved.
          pattern: The glob patter for loading files from `input_folder`.
              By default all files will be loaded.
-         viewer: The viewer to which the SegmentAnything functionality should be added.
+         viewer: The viewer to which the Segment Anything functionality should be added.
              This enables using a pre-initialized viewer.
          return_viewer: Whether to return the napari viewer to further modify it before starting the tool.
          kwargs: The keyword arguments for `micro_sam.sam_annotator.image_series_annotator`.
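The docstrings above refer to tiled embedding prediction, where `tile_shape` sets the tile size and `halo` the overlap needed to segment objects that cross tile borders. A self-contained sketch of how such overlapping tile windows could be computed (illustrative only, not micro_sam's implementation): each tile gets an inner region it is responsible for and an outer region extended by the halo, so border-crossing objects are fully visible to at least one tile.

```python
def tiles_with_halo(image_shape, tile_shape, halo):
    # Compute (outer, inner) slice pairs covering a 2d image.
    # inner: the tile proper; outer: the tile grown by the halo on each
    # side, clipped at the image boundary.
    tiles = []
    for y in range(0, image_shape[0], tile_shape[0]):
        for x in range(0, image_shape[1], tile_shape[1]):
            inner = (slice(y, min(y + tile_shape[0], image_shape[0])),
                     slice(x, min(x + tile_shape[1], image_shape[1])))
            outer = (slice(max(y - halo[0], 0),
                           min(y + tile_shape[0] + halo[0], image_shape[0])),
                     slice(max(x - halo[1], 0),
                           min(x + tile_shape[1] + halo[1], image_shape[1])))
            tiles.append((outer, inner))
    return tiles

# A 2048x2048 image with 1024x1024 tiles and a 256-pixel halo gives 4 tiles.
print(len(tiles_with_halo((2048, 2048), (1024, 1024), (256, 256))))  # 4
```

Predictions would be computed on the outer windows and stitched back using only the inner regions, discarding the halo where tiles overlap.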

micro_sam/training/trainable_sam.py

Lines changed: 2 additions & 2 deletions
@@ -10,10 +10,10 @@

  # simple wrapper around SAM in order to keep things trainable
  class TrainableSAM(nn.Module):
-     """Wrapper to make the SegmentAnything model trainable.
+     """Wrapper to make the Segment Anything model trainable.

      Args:
-         sam: The SegmentAnything Model.
+         sam: The Segment Anything Model.
      """

      def __init__(self, sam: Sam) -> None:

micro_sam/training/util.py

Lines changed: 1 addition & 1 deletion
@@ -150,7 +150,7 @@ def get_trainable_sam_model(


  class ConvertToSamInputs:
-     """Convert outputs of data loader to the expected batched inputs of the SegmentAnything model.
+     """Convert outputs of data loader to the expected batched inputs of the Segment Anything model.

      Args:
          transform: The transformation to resize the prompts. Should be the same transform used in the

0 commit comments