
Commit 3cb942f

Add README about new segmentation workflow
1 parent 6dc33cf commit 3cb942f

File tree: 1 file changed

scripts/README.md

Lines changed: 27 additions & 8 deletions
# Segmentation for large lightsheet volumes

## Installation
Needs [torch-em](https://github.com/constantinpape/torch-em) in the python environment. See [here](https://github.com/constantinpape/torch-em?tab=readme-ov-file#installation) for installation instructions. (If possible use `mamba` instead of `conda`.)
After setting up the environment you also have to add support for the MoBIE python library via
```
conda install -c conda-forge mobie_utils
```
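
For a complete setup from scratch, a minimal sketch could look like the following (this assumes the `torch_em` package is available on conda-forge, as described in the torch-em installation instructions; the environment name is arbitrary):
```
# create a fresh environment that contains torch-em (conda-forge package: torch_em)
mamba create -n lightsheet-seg -c conda-forge torch_em
conda activate lightsheet-seg
# add the MoBIE python library
mamba install -c conda-forge mobie_utils
```
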
## Training

Contains the scripts for training a U-Net that predicts foreground probabilities and normalized object distances.

## Prediction

Contains the scripts for running segmentation for a large volume with a distance prediction U-Net, postprocessing the segmentation, and exporting the segmentation result to MoBIE.

To run the full segmentation workflow, including the export to MoBIE, you can use the `segmentation_workflow.py` script as follows:
```
python segmentation_workflow.py -i /path/to/volume.xml -o /path/to/output_folder --scale 0 -m data_name --model /path/to/model.pt
```
Here, `-i` must point to the xml file of the fused data exported from BigStitcher, `-o` indicates the output folder where the MoBIE project with the segmentation result will be saved, `--scale` indicates the scale to use for the segmentation, `-m` the name of the data in MoBIE, and `--model` the path to the segmentation model.
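
As a concrete illustration, a call could look like the sketch below. All paths and names are made up, and the assumption that scale 0 corresponds to the full-resolution level follows the usual BigDataViewer/BigStitcher convention.
```
# illustrative example: segment the fused export of a dataset called "embryo3"
python segmentation_workflow.py \
    -i /data/embryo3/fused.xml \
    -o /data/embryo3/mobie-project \
    --scale 0 \
    -m embryo3-nuclei \
    --model /models/distance_unet/model.pt
```
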
### Individual Workflow Steps
You can also run individual steps of the workflow, such as prediction and segmentation.
You can run the prediction like this for an input volume that is stored in n5, e.g. the fused export from BigStitcher:
```
python run_prediction_distance_unet.py -i /path/to/volume.n5 -k setup0/timepoint0/s0 -m /path/to/model -o /path/to/output_folder
```
Here, `-i` specifies the input filepath, `-o` the folder where the results are saved and `-k` the internal path in the n5 file.
The `-m` argument specifies the model to use for prediction. You need to give the path to the folder that contains the checkpoint (the `best.pt` file).
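
If you are unsure which values to pass for `-k` and `-m`, the sketch below shows one way to check them. It assumes the standard n5 on-disk layout and the default torch-em checkpoint naming; adapt as needed.
```
# an n5 container is a directory tree in which every group and dataset stores an
# attributes.json; listing these files shows the internal paths that can be passed to -k
find /path/to/volume.n5 -name attributes.json

# -m expects the folder that contains the trained checkpoint, i.e. the best.pt file
ls /path/to/model   # should list best.pt (and latest.pt for torch-em trainers)
```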

You can also run the script for a tif file. In this case you don't need the `-k` parameter:
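
(The concrete command falls in the part of the README that this diff does not show; mirroring the n5 example above and dropping `-k`, it presumably looks like the following.)
```
python run_prediction_distance_unet.py -i /path/to/volume.tif -m /path/to/model -o /path/to/output_folder
```
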
[...] to downsample the input by a factor of 2. Note that the segmentation result will [...]

In addition, the script `postprocess_seg.py` can be used to filter out false positive nucleus segmentations from regions in the segmentation with a low density of segmented nuclei.
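
The arguments of `postprocess_seg.py` are not documented in this excerpt; assuming it exposes an argparse interface like the other scripts, you can list them with:
```
python postprocess_seg.py --help
```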

You can use the script `to_tif.py` to convert the zarr object to a tif volume for easier viewing (won't work for large volumes!).
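
Its arguments are likewise not documented here; a purely hypothetical invocation (the flag names and file names are guesses, check the script or its `--help` output for the real interface) might look like:
```
# hypothetical: convert the zarr segmentation in the output folder to a single tif
python to_tif.py -i /path/to/output_folder/segmentation.zarr -o /path/to/segmentation.tif
```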
