docs/res/guides/detailed_walkthrough.rst
@@ -8,15 +8,22 @@ The following guide will show in detail how to use the plugin's workflow, start
Preparing images and labels
-------------------------------

CellSeg3D was designed for cleared-brain tissue data (collected on mesoSPIM lightsheet systems). Specifically, we provide a series
of deep learning models that we have found to work well on cortical whole-neuron data. We also provide support for MONAI models, and
we have ported TRAILMAP to PyTorch and trained the model on mesoSPIM-collected data. We provide all the tooling for you to use these
weights and also to perform transfer learning by fine-tuning the model(s) on your data for even better performance!

To get started with the entire workflow (i.e., fine-tuning on your data), you'll need at least one image and its corresponding labels;
let's assume you have part of a cleared brain from mesoSPIM imaging as a large .tif file.

If you want to test the models "as is", please see the "Inference" section in our docs.
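
Before going further, it's worth checking that the image and its labels actually line up. A minimal sketch (the file names are hypothetical, and ``tifffile`` is just one common way to read 3D .tif stacks):

.. code-block:: python

    import numpy as np
    import tifffile

    # Hypothetical paths -- replace with your own image/label pair
    image = tifffile.imread("brain_region.tif")          # 3D array, e.g. (z, y, x)
    labels = tifffile.imread("brain_region_labels.tif")  # should match voxel-for-voxel

    assert image.shape == labels.shape, "image and labels must have the same shape"
    print(f"shape (z, y, x): {image.shape}, labeled voxels: {np.count_nonzero(labels)}")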
.. figure:: ../images/init_image_labels.png
    :scale: 40 %
    :align: center

    Example of an anisotropic volume (i.e., oftentimes the z resolution is not the same as the x and y resolution) and its associated labels.
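
If your volume is strongly anisotropic, you can resample it to roughly isotropic voxels before labeling. A hedged sketch of the general idea (the voxel sizes and file names are hypothetical; this is plain ``scipy``, not the plugin's own anisotropy utility):

.. code-block:: python

    import tifffile
    from scipy.ndimage import zoom

    # Hypothetical voxel sizes in microns: z is coarser than x/y
    z_um, y_um, x_um = 5.0, 1.5, 1.5
    volume = tifffile.imread("brain_region.tif")  # (z, y, x)

    # Stretch z so all three axes end up at ~1.5 um per voxel
    iso = zoom(volume, (z_um / y_um, 1.0, 1.0), order=1)
    tifffile.imwrite("brain_region_isotropic.tif", iso)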
.. note::
@@ -29,7 +36,7 @@ Cropping
*****************

To reduce memory requirements and build a dataset from a single, large volume,
you can use the **cropping** tool to extract multiple smaller images for training.

Simply load your image and labels (by checking the "Crop labels simultaneously" option),
and select the volume size you want to use.
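
The cropping tool does this interactively in napari; if you'd rather script the same idea, a rough sketch (hypothetical file name, non-overlapping 64-voxel cubes) could look like:

.. code-block:: python

    import tifffile

    volume = tifffile.imread("large_brain.tif")  # (z, y, x)
    size = 64  # edge length of each training cube

    # Walk the volume in non-overlapping 64^3 blocks and save each crop
    for z in range(0, volume.shape[0] - size + 1, size):
        for y in range(0, volume.shape[1] - size + 1, size):
            for x in range(0, volume.shape[2] - size + 1, size):
                crop = volume[z:z + size, y:y + size, x:x + size]
                tifffile.imwrite(f"crop_z{z}_y{y}_x{x}.tif", crop)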
@@ -75,24 +82,24 @@ Models for object detection
Training
*****************

If you have a dataset of reasonably sized images (see cropping above) with semantic labels, you're all set to proceed!
First, load your data by inputting the paths to images and labels, as well as where you want the results to be saved.

There are a few more options on this tab:

* Transfer weights : you can start the model with our pre-trained weights if your dataset comes from cleared brain tissue
  imaged by a mesoSPIM (or other lightsheet). If you have your own weights for the provided models, you can also choose to load them by
  checking the related option; simply make sure they are compatible with the model you selected.

  To import your own model, see :ref:`custom_model_guide`; please note this is still a WIP.

* Validation proportion : the percentage listed is how many images will be used for training versus validation (see the sketch after this list).
  Validation can work with as little as one image; however, performance will greatly improve the more images there are.
  Use 90% only if you have a very small dataset (fewer than 5 images).

* Save as zip : simply copies the results in a zip archive for easier transfer.
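
As a small illustration of how the validation proportion plays out on a tiny dataset (the file list is hypothetical):

.. code-block:: python

    # With a 90% training proportion and 5 image/label pairs:
    pairs = ["img_0", "img_1", "img_2", "img_3", "img_4"]
    n_train = round(len(pairs) * 0.9)  # 4 pairs used for training
    train, val = pairs[:n_train], pairs[n_train:]
    print(train, val)  # ['img_0', 'img_1', 'img_2', 'img_3'] ['img_4']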
Now, we can switch to the next tab: data augmentation.

If you have cropped cubic images with a power of two as the edge length, you do not need to extract patches;
your images are usable as is.
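
If you're not sure whether an edge length qualifies, the nearest superior power of two is quick to compute; a tiny sketch:

.. code-block:: python

    def next_power_of_two(n: int) -> int:
        """Smallest power of two greater than or equal to n."""
        p = 1
        while p < n:
            p *= 2
        return p

    print(next_power_of_two(64))   # 64  -> usable as is
    print(next_power_of_two(100))  # 128 -> extract patches instead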
@@ -161,7 +168,7 @@ To start, simply choose which folder of images you'd like to run inference on, t
Then, select the model you trained (see the note below for SegResNet), and load your weights from training.

.. note::
    If you already trained a SegResNet, set the counter below the model choice to the size of the images you trained the model on.
    (Either use the size of the image itself if you did not extract patches, or the size of the nearest superior power of two of the patches you extracted.)

Example :
@@ -171,7 +178,7 @@ Then, select the model you trained (see note below for SegResNet), and load your
Next, you can choose to use window inference; use this if you have very large images.
Please note that using too small of a window might degrade performance, so set the size appropriately.

You can also keep the dataset on the CPU to reduce memory usage, but this might slow down the inference process.
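
Since the plugin supports MONAI models, window inference is the same idea as MONAI's ``sliding_window_inference``; a hedged sketch of the mechanism (``model`` and ``volume`` below are placeholders, and the plugin drives this for you from the UI):

.. code-block:: python

    import torch
    from monai.inferers import sliding_window_inference

    # Placeholders: substitute your trained network and a real image tensor
    model = torch.nn.Identity()                # stands in for the trained model
    volume = torch.zeros(1, 1, 128, 128, 128)  # (batch, channel, D, H, W)

    with torch.no_grad():
        prediction = sliding_window_inference(
            inputs=volume,
            roi_size=(64, 64, 64),  # window size; too small may degrade results
            sw_batch_size=1,        # number of windows per forward pass
            predictor=model,
            overlap=0.25,           # overlap between adjacent windows
        )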
@@ -294,8 +301,3 @@ for the plots to work.
.. _notebooks folder of the repository: https://github.com/AdaptiveMotorControlLab/CellSeg3d/tree/main/notebooks

With this complete, you can repeat the workflow as needed.