
Commit 78f1c8e

Minor updates to example script docs (#854)
1 parent 5a6616d commit 78f1c8e

4 files changed: +20 -17 lines changed

.gitignore

Lines changed: 4 additions & 0 deletions
@@ -180,3 +180,7 @@ logs/
 
 # "gpu_jobs" folder where slurm batch submission scripts are saved
 gpu_jobs/
+
+# Additional stuff to ignore
+data/
+embeddings/

examples/README.md

Lines changed: 8 additions & 8 deletions
@@ -1,15 +1,15 @@
 # Examples
 
-Examples for using the micro_sam annotation tools:
-- `annotator_2d.py`: run the interactive 2d annotation tool
-- `annotator_3d.py`: run the interactive 3d annotation tool
-- `annotator_tracking.py`: run the interactive tracking annotation tool
-- `image_series_annotator.py`: run the annotation tool for a series of images
+Examples for using the `micro_sam` annotation tools:
+- `annotator_2d.py`: run the interactive 2d annotation tool.
+- `annotator_3d.py`: run the interactive 3d annotation tool.
+- `annotator_tracking.py`: run the interactive tracking annotation tool.
+- `image_series_annotator.py`: run the annotation tool for a series of images.
 
-TODO reference the notebooks as recommended examples for using the python library for things below
+We provide Jupyter Notebooks for using automatic segmentation and finetuning on some example data in the [notebooks](../notebooks/) folder.
 
 The folder `finetuning` contains example scripts that show how a Segment Anything model can be fine-tuned
-on custom data with the `micro_sam.train` library, and how the finetuned models can then be used within the annotatin tools.
+on custom data with the `micro_sam.train` library, and how the finetuned models can then be used within the annotation tools.
 
 The folder `use_as_library` contains example scripts that show how `micro_sam` can be used as a python
-library to apply Segment Anything to mult-dimensional data.
+library to apply Segment Anything to multi-dimensional data.
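
Aside from these scripts, the annotation tools can also be called from Python. A minimal sketch (not part of this commit), assuming the `micro_sam.sam_annotator.annotator_2d` entry point accepts an image array and with `example_image.tif` standing in for your own data:

import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

# Open a 2d image in the interactive annotation tool.
image = imageio.imread("example_image.tif")  # placeholder file name
annotator_2d(image)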

examples/finetuning/README.md

Lines changed: 1 addition & 2 deletions
@@ -1,6 +1,5 @@
 # Example for model finetuning
 
 This folder contains example scripts that show how to finetune a SAM model on your own data and how the finetuned model can then be used:
-- `finetune_hela.py`: Shows how to finetune the model on new data. Set `train_instance_segmentation` (line 165) to `True` in order to also train a decoder for automatic instance segmentation.
+- `finetune_hela.py`: Shows how to finetune the model on new data. Set `train_instance_segmentation` (line 130) to `True` in order to also train a decoder for automatic instance segmentation.
 - `annotator_with_finetuned_model.py`: Use the finetuned model in the 2d annotator.
-- `instance_segmentation_with_finetuned_model`: Use the finetuned model for automatic instance segmentation (only if you have trained with `train_instance_segmentation = True`).
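
As a rough orientation (a sketch, not part of the commit): the flag and function names below are taken from `finetune_hela.py` as shown further down, and the checkpoint name is hypothetical:

model_type = "vit_b"                # base model to finetune
checkpoint_name = "sam_hela"        # hypothetical checkpoint name
train_instance_segmentation = True  # also train the decoder for automatic instance segmentation
run_training(checkpoint_name, model_type, train_instance_segmentation)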

examples/finetuning/finetune_hela.py

Lines changed: 7 additions & 7 deletions
@@ -17,10 +17,10 @@
 def get_dataloader(split, patch_shape, batch_size, train_instance_segmentation):
     """Return train or val data loader for finetuning SAM.
 
-    The data loader must be a torch data loader that retuns `x, y` tensors,
+    The data loader must be a torch data loader that returns `x, y` tensors,
     where `x` is the image data and `y` are the labels.
     The labels have to be in a label mask instance segmentation format.
-    I.e. a tensor of the same spatial shape as `x`, with each object mask having its own ID.
+    i.e. a tensor of the same spatial shape as `x`, with each object mask having its own ID.
     Important: the ID 0 is reseved for background, and the IDs must be consecutive
 
     Here, we use `torch_em.default_segmentation_loader` for creating a suitable data loader from
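
To make the label convention from this docstring concrete, a small self-contained sketch (not part of the commit) that relabels an arbitrary instance segmentation so that 0 stays background and the object IDs become consecutive:

import numpy as np

def relabel_consecutive(labels: np.ndarray) -> np.ndarray:
    """Map arbitrary instance IDs to consecutive 1..N, keeping 0 as background."""
    ids = np.unique(labels)
    ids = ids[ids != 0]  # the ID 0 is reserved for background
    relabeled = np.zeros_like(labels)
    for new_id, old_id in enumerate(ids, start=1):
        relabeled[labels == old_id] = new_id
    return relabeled

# Example: IDs {0, 3, 7} become {0, 1, 2}.
print(relabel_consecutive(np.array([[0, 3], [7, 3]])))
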
@@ -34,12 +34,12 @@ def get_dataloader(split, patch_shape, batch_size, train_instance_segmentation):
     image_dir = fetch_tracking_example_data(DATA_FOLDER)
     segmentation_dir = fetch_tracking_segmentation_data(DATA_FOLDER)
 
-    # torch_em.default_segmentation_loader is a convenience function to build a torch dataloader
+    # 'torch_em.default_segmentation_loader' is a convenience function to build a torch dataloader
     # from image data and labels for training segmentation models.
     # It supports image data in various formats. Here, we load image data and labels from the two
     # folders with tif images that were downloaded by the example data functionality, by specifying
     # `raw_key` and `label_key` as `*.tif`. This means all images in the respective folders that end with
-    # .tif will be loadded.
+    # .tif will be loaded.
     # The function supports many other file formats. For example, if you have tif stacks with multiple slices
     # instead of multiple tif images in a foldder, then you can pass raw_key=label_key=None.
 
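A hedged sketch of the loader call these comments describe: `torch_em.default_segmentation_loader` is the function named in the script, but the keyword names used here are an assumption and the folder paths are placeholders:

import torch_em

# Placeholder folders containing matching tif images and label masks.
image_dir, segmentation_dir = "./data/images", "./data/segmentations"

loader = torch_em.default_segmentation_loader(
    raw_paths=image_dir, raw_key="*.tif",             # all tifs in the image folder
    label_paths=segmentation_dir, label_key="*.tif",  # the matching label tifs
    batch_size=1, patch_shape=(1, 512, 512),          # values used in this example script
)
x, y = next(iter(loader))  # image batch and instance label batch
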
@@ -82,7 +82,7 @@ def run_training(checkpoint_name, model_type, train_instance_segmentation):
     batch_size = 1  # the training batch size
     patch_shape = (1, 512, 512)  # the size of patches for training
     n_objects_per_batch = 25  # the number of objects per batch that will be sampled
-    device = torch.device("cuda")  # the device/GPU used for training
+    device = torch.device("cuda")  # the device used for training
 
     # Get the dataloaders.
     train_loader = get_dataloader("train", patch_shape, batch_size, train_instance_segmentation)
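
The script hard-codes `torch.device("cuda")`. A common alternative (a sketch, not what the commit changes) that falls back gracefully when no GPU is available; note that training without a GPU will be very slow:

import torch

# Prefer CUDA, then Apple's MPS backend, then the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")
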
@@ -103,7 +103,7 @@ def run_training(checkpoint_name, model_type, train_instance_segmentation):
 
 def export_model(checkpoint_name, model_type):
     """Export the trained model."""
-    # export the model after training so that it can be used by the rest of the micro_sam library
+    # export the model after training so that it can be used by the rest of the 'micro_sam' library
     export_path = "./finetuned_hela_model.pth"
     checkpoint_path = os.path.join("checkpoints", checkpoint_name, "best.pt")
     export_custom_sam_model(
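
The `export_custom_sam_model` call is cut off at the hunk boundary. A sketch of how the full call might look, assuming the function lives in `micro_sam.util`; the keyword names are a guess, not the confirmed signature:

from micro_sam.util import export_custom_sam_model

# checkpoint_path and export_path as defined above; keyword names are a guess.
export_custom_sam_model(
    checkpoint_path=checkpoint_path,
    model_type=model_type,
    save_path=export_path,
)
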
@@ -117,7 +117,7 @@ def main():
     """Finetune a Segment Anything model.
 
     This example uses image data and segmentations from the cell tracking challenge,
-    but can easily be adapted for other data (including data you have annoated with micro_sam beforehand).
+    but can easily be adapted for other data (including data you have annotated with micro_sam beforehand).
     """
     # The model_type determines which base model is used to initialize the weights that are finetuned.
     # We use vit_b here because it can be trained faster. Note that vit_h usually yields higher quality results.
