Commit bf4da12

committed: doc
1 parent a985102 commit bf4da12

File tree

1 file changed: +21 additions, −19 deletions


docs/res/guides/detailed_walkthrough.rst

Lines changed: 21 additions & 19 deletions
@@ -8,15 +8,22 @@ The following guide will show in details how to use the plugin's workflow, start
 Preparing images and labels
 -------------------------------
 
-To get started with the entire workflow, you'll need at least one pair of image and corresponding labels;
+CellSeg3D was designed for cleared-brain tissue data (collected on mesoSPIM lightsheet systems). Specifically, we provide a series
+of deep learning models that we have found to work well on cortical whole-neuron data. We also provide support for MONAI models, and
+we have ported TRAILMAP to PyTorch and trained the model on mesoSPIM-collected data. We provide all the tooling for you to use these
+weights and also perform transfer learning by fine-tuning the model(s) on your data for even better performance!
+
+To get started with the entire workflow (i.e., fine-tuning on your data), you'll need at least one pair of image and corresponding labels;
 let's assume you have part of a cleared brain from mesoSPIM imaging as a large .tif file.
 
+If you want to test the models "as is", please see the "Inference" sections in our docs.
+
 
 .. figure:: ../images/init_image_labels.png
    :scale: 40 %
    :align: center
 
-   Example of an anisotropic volume and its associated labels.
+   Example of an anisotropic volume (i.e., oftentimes the z resolution is not the same as x and y) and its associated labels.
 
 
 .. note::
@@ -29,7 +36,7 @@ Cropping
 *****************
 
 To reduce memory requirements and build a dataset from a single, large volume,
-you can use the **cropping** tool to extract multiple smaller images from a large volume.
+you can use the **cropping** tool to extract multiple smaller images from a large volume for training.
 
 Simply load your image and labels (by checking the "Crop labels simultaneously" option),
 and select the volume size you desire to use.
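As an aside on the cropping step this hunk documents: the underlying idea of slicing one large volume into several smaller, fixed-size training images can be sketched in plain NumPy. This is a minimal illustration, not the plugin's cropping tool; the function name and shapes are hypothetical.

```python
import numpy as np

def crop_volume(volume, size):
    """Split a 3D volume into non-overlapping cubes of edge length `size`.

    Trailing regions that do not fill a whole cube are discarded, mirroring
    the idea of building a dataset of smaller images from one large volume.
    """
    crops = []
    zs, ys, xs = (d // size for d in volume.shape)
    for z in range(zs):
        for y in range(ys):
            for x in range(xs):
                crops.append(volume[z * size:(z + 1) * size,
                                    y * size:(y + 1) * size,
                                    x * size:(x + 1) * size])
    return crops

# A small random array stands in for a large mesoSPIM .tif here.
volume = np.random.rand(128, 128, 128)
crops = crop_volume(volume, 64)
print(len(crops))  # 8 cubes of shape (64, 64, 64)
```

The same slicing would be applied to the label volume so that each image crop keeps its matching labels.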
@@ -75,24 +82,24 @@ Models for object detection
 Training
 *****************
 
-If you have a dataset of reasonably sized images with semantic labels, you're all set !
-First of all, load your data by inputting the paths to images and labels, as well as where you want the results to be saved.
-
-There are a few more options on this tab :
+If you have a dataset of reasonably sized images (see cropping above) with semantic labels, you're all set to proceed!
+First, load your data by inputting the paths to images and labels, as well as where you want the results to be saved.
 
-* Save as zip : simply copies the results in a zip archive for easier transfer
+There are a few more options on this tab:
 
 * Transfer weights : you can start the model with our pre-trained weights if your dataset comes from cleared brain tissue
-  image by mesoSPIM as well. If you have your own weights for the provided models, you can also choose to load them by
-  checking the related option; simply make sure they are compatible with the model.
+  imaged by a mesoSPIM (or other lightsheet). If you have your own weights for the provided models, you can also choose to load them by
+  checking the related option; simply make sure they are compatible with the model you selected.
 
 To import your own model, see : :ref:`custom_model_guide`, please note this is still a WIP.
 
 * Validation proportion : the percentage listed is how many images will be used for training versus validation.
   Validation can work with as little as one image, however performance will greatly improve the more images there are.
-  Use 90% only if you have a very small dataset (less than 5 images)
+  Use 90% only if you have a very small dataset (less than 5 images).
 
-With this, we can switch to the next tab : data augmentation.
+* Save as zip : simply copies the results in a zip archive for easier transfer.
+
+Now, we can switch to the next tab : data augmentation.
 
 If you have cropped cubic images with a power of two as the edge length, you do not need to extract patches,
 your images are usable as is.
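The train/validation split this hunk describes can be sketched as follows. This is an illustrative stand-in, not the plugin's implementation, and it assumes the percentage set in the UI is the training share (matching the "use 90% for small datasets" advice); all names here are hypothetical.

```python
import random

def split_dataset(pairs, train_proportion=0.8, seed=0):
    """Shuffle (image, label) pairs and split them into train/validation sets.

    Keeps at least one validation image, since validation can work with as
    little as a single image.
    """
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n_train = round(len(pairs) * train_proportion)
    n_train = min(n_train, len(pairs) - 1)  # reserve >= 1 image for validation
    return pairs[:n_train], pairs[n_train:]

pairs = [(f"img_{i}.tif", f"lab_{i}.tif") for i in range(10)]
train, val = split_dataset(pairs, train_proportion=0.8)
print(len(train), len(val))  # 8 2
```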
@@ -161,7 +168,7 @@ To start, simply choose which folder of images you'd like to run inference on, t
 Then, select the model you trained (see note below for SegResNet), and load your weights from training.
 
 .. note::
-   If you trained a SegResNet, set the counter below the model choice to the size of the images you trained the model on.
+   If you already trained a SegResNet, set the counter below the model choice to the size of the images you trained the model on.
    (Either use the size of the image itself if you did not extract patches, or the size of the nearest superior power of two of the patches you extracted)
 
 Example :
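The "nearest superior power of two" mentioned in the note above is easy to compute; a small helper (hypothetical name, not part of the plugin) makes the rule concrete:

```python
def next_power_of_two(n):
    """Smallest power of two greater than or equal to n.

    E.g., for patches extracted with a 100-voxel edge, the counter for a
    SegResNet would be set to 128.
    """
    p = 1
    while p < n:
        p *= 2
    return p

print(next_power_of_two(100))  # 128
print(next_power_of_two(64))   # 64 (already a power of two)
```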
@@ -171,7 +178,7 @@ Then, select the model you trained (see note below for SegResNet), and load your
 
 
 Next, you can choose to use window inference, use this if you have very large images.
-Please note that using too small a window might degrade performance, set the size appropriately.
+Please note that using too small of a window might degrade performance, set the size appropriately.
 
 You can also keep the dataset on the CPU to reduce memory usage, but this might slow down the inference process.
 
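The window-inference idea in this hunk can be sketched as tiling the volume and stitching per-window predictions back together. This toy NumPy version (not the plugin's implementation, which relies on MONAI's sliding-window machinery) also hints at why too small a window hurts: each prediction sees less spatial context.

```python
import numpy as np

def window_inference(volume, window, predict):
    """Run `predict` on non-overlapping cubic windows and stitch the results.

    `predict` maps a 3D tile to an output tile of the same shape; `window`
    is the edge length of each tile.
    """
    out = np.zeros_like(volume)
    w = window
    for z in range(0, volume.shape[0], w):
        for y in range(0, volume.shape[1], w):
            for x in range(0, volume.shape[2], w):
                tile = volume[z:z + w, y:y + w, x:x + w]
                out[z:z + w, y:y + w, x:x + w] = predict(tile)
    return out

# A thresholding lambda stands in for a trained segmentation model.
volume = np.random.rand(64, 64, 64)
mask = window_inference(volume, 32, lambda t: (t > 0.5).astype(volume.dtype))
print(mask.shape)  # (64, 64, 64)
```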

@@ -294,8 +301,3 @@ for the plots to work.
 .. _notebooks folder of the repository: https://github.com/AdaptiveMotorControlLab/CellSeg3d/tree/main/notebooks
 
 With this complete, you can repeat the workflow as needed.
-
-
-
-
-
