docs/res/guides/detailed_walkthrough.rst (29 additions, 24 deletions)
@@ -3,20 +3,27 @@
 Detailed walkthrough
 ===================================

-The following guide will show in details how to use the plugin's workflow, starting from a large labeled volume.
+The following guide will show you how to use the plugin's workflow, starting from a human-labeled annotation volume, to running inference on novel volumes.

 Preparing images and labels
 -------------------------------

-To get started with the entire workflow, you'll need at least one pair of image and corresponding labels;
+CellSeg3D was designed for cleared-brain tissue data (collected on mesoSPIM lightsheet systems). Specifically, we provide a series
+of deep learning models that we have found to work well on cortical whole-neuron data. We also provide support for MONAI models, and
+we have ported TRAILMAP to PyTorch and trained the model on mesoSPIM-collected data. We provide all the tooling for you to use these
+weights and also perform transfer learning by fine-tuning the model(s) on your data for even better performance!
+
+To get started with the entire workflow (i.e., fine-tuning on your data), you'll need at least one pair of an image and its corresponding labels;
 let's assume you have part of a cleared brain from mesoSPIM imaging as a large .tif file.

+If you want to test the models "as is", please see the "Inference" sections in our docs.
+
 .. figure:: ../images/init_image_labels.png
    :scale: 40 %
    :align: center

-   Example of an anisotropic volume and its associated labels.
+   Example of an anisotropic volume (i.e., oftentimes the z resolution is not the same as x and y) and its associated labels.

 .. note::
@@ -29,7 +36,7 @@ Cropping
 *****************

 To reduce memory requirements and build a dataset from a single, large volume,
-you can use the **cropping** tool to extract multiple smaller images from a large volume.
+you can use the **cropping** tool to extract multiple smaller images from a large volume for training.

 Simply load your image and labels (by checking the "Crop labels simultaneously" option),
 and select the volume size you desire to use.
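If you prefer to script this step rather than use the GUI, the idea can be sketched with ``tifffile`` as below; this is not the plugin's own code, and the file names and the 64-voxel crop size are placeholder assumptions:

.. code-block:: python

   import tifffile

   CROP = 64  # assumed cube edge; a power of two simplifies training later

   volume = tifffile.imread("whole_brain.tif")         # hypothetical image, (Z, Y, X)
   labels = tifffile.imread("whole_brain_labels.tif")  # matching labels

   n = 0
   for z in range(0, volume.shape[0] - CROP + 1, CROP):
       for y in range(0, volume.shape[1] - CROP + 1, CROP):
           for x in range(0, volume.shape[2] - CROP + 1, CROP):
               lab = labels[z:z + CROP, y:y + CROP, x:x + CROP]
               if lab.any():  # skip crops that contain no labeled voxels
                   img = volume[z:z + CROP, y:y + CROP, x:x + CROP]
                   tifffile.imwrite(f"crops/img_{n:04d}.tif", img)
                   tifffile.imwrite(f"crops/lab_{n:04d}.tif", lab)
                   n += 1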
@@ -75,24 +82,24 @@ Models for object detection
 Training
 *****************

-If you have a dataset of reasonably sized images with semantic labels, you're all set !
-First of all, load your data by inputting the paths to images and labels, as well as where you want the results to be saved.
-
-There are a few more options on this tab :
+If you have a dataset of reasonably sized images (see cropping above) with semantic labels, you're all set to proceed!
+First, load your data by inputting the paths to images and labels, as well as where you want the results to be saved.

-* Save as zip : simply copies the results in a zip archive for easier transfer
+There are a few more options on this tab:

 * Transfer weights : you can start the model with our pre-trained weights if your dataset comes from cleared brain tissue
-image by mesoSPIM as well. If you have your own weights for the provided models, you can also choose to load them by
-checking the related option; simply make sure they are compatible with the model.
+imaged by a mesoSPIM (or other lightsheet). If you have your own weights for the provided models, you can also choose to load them by
+checking the related option; simply make sure they are compatible with the model you selected.

 To import your own model, see : :ref:`custom_model_guide`; please note this is still a WIP.

 * Validation proportion : the percentage listed is how many images will be used for training versus validation.
 Validation can work with as little as one image, however performance will greatly improve the more images there are.
-Use 90% only if you have a very small dataset (less than 5 images)
+Use 90% only if you have a very small dataset (less than 5 images).

-With this, we can switch to the next tab : data augmentation.
+* Save as zip : simply copies the results in a zip archive for easier transfer.
+
+Now, we can switch to the next tab : data augmentation.

 If you have cropped cubic images with a power of two as the edge length, you do not need to extract patches,
 your images are usable as is.
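To make the validation proportion option above concrete, here is a minimal sketch (not the plugin's internals) of what a 90% training split means for a small dataset; the file names are hypothetical:

.. code-block:: python

   import random

   images = [f"crops/img_{i:04d}.tif" for i in range(10)]  # hypothetical files
   random.seed(0)          # deterministic shuffle for reproducibility
   random.shuffle(images)

   train_fraction = 0.9    # "use 90% only if you have a very small dataset"
   n_train = max(1, int(len(images) * train_fraction))
   train_set, val_set = images[:n_train], images[n_train:]
   print(f"{len(train_set)} training / {len(val_set)} validation images")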
@@ -109,11 +116,14 @@ In most cases this should left enabled.
 Finally, the last tab lets you choose :

-* The model
+* The models

-* SegResNet is a lightweight model (low memory requirements) with decent performance.
-* TRAILMAP is a recent model trained for axonal detection in cleared tissue; use it if your dataset is similar
-* VNet is a possibly more performant model than SegResnet but requires much more memory
+* SegResNet is a lightweight model (low memory requirements) from MONAI, originally designed for 3D MRI data.
+* VNet is a heavier (than SegResNet) CNN from MONAI designed for medical image segmentation.
+* TRAILMAP is our PyTorch implementation of a 3D CNN model trained for axonal detection in cleared tissue.
+* TRAILMAP-MS is our implementation in PyTorch, additionally trained on mouse cortical neural nuclei from mesoSPIM data.
+* Note, the code is very modular, so it is relatively straightforward to use (and contribute) your own model as well.

 * The loss : for object detection in 3D volumes you'll likely want to use the Dice or Dice-focal Loss.
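For reference, the MONAI architectures and losses named above can be instantiated directly; this sketch assumes single-channel volumes and is only meant to show the moving parts, not how the plugin wires them together:

.. code-block:: python

   import torch
   from monai.losses import DiceLoss
   from monai.networks.nets import SegResNet, VNet

   # assumed: one input channel (raw volume), one output channel (foreground)
   model = SegResNet(spatial_dims=3, in_channels=1, out_channels=1)
   # model = VNet(spatial_dims=3, in_channels=1, out_channels=1)  # heavier alternative

   loss_fn = DiceLoss(sigmoid=True)  # sigmoid maps raw logits to [0, 1]

   x = torch.rand(1, 1, 64, 64, 64)                  # dummy batch (B, C, D, H, W)
   y = (torch.rand(1, 1, 64, 64, 64) > 0.5).float()  # dummy binary labels
   loss = loss_fn(model(x), y)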
@@ -161,7 +171,7 @@ To start, simply choose which folder of images you'd like to run inference on, t
 Then, select the model you trained (see note below for SegResNet), and load your weights from training.

 .. note::
-   If you trained a SegResNet, set the counter below the model choice to the size of the images you trained the model on.
+   If you already trained a SegResNet, set the counter below the model choice to the size of the images you trained the model on.
    (Either use the size of the image itself if you did not extract patches, or the size of the nearest superior power of two of the patches you extracted)
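The "nearest superior power of two" is easy to compute; a quick sketch:

.. code-block:: python

   def superior_power_of_two(size: int) -> int:
       """Smallest power of two greater than or equal to ``size``."""
       p = 1
       while p < size:
           p *= 2
       return p

   print(superior_power_of_two(120))  # 128: the value to enter for 120-voxel patches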
 Example :
@@ -171,7 +181,7 @@ Then, select the model you trained (see note below for SegResNet), and load your
 Next, you can choose to use window inference; use this if you have very large images.

-Please note that using too small a window might degrade performance, set the size appropriately.
+Please note that using too small of a window might degrade performance; set the size appropriately.

 You can also keep the dataset on the CPU to reduce memory usage, but this might slow down the inference process.
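Window inference corresponds to MONAI's sliding-window inferer, which runs the model over tiles and stitches the results back together; a sketch, where the window size and overlap are assumed values rather than the plugin's defaults:

.. code-block:: python

   import torch
   from monai.inferers import sliding_window_inference
   from monai.networks.nets import SegResNet

   model = SegResNet(spatial_dims=3, in_channels=1, out_channels=1).eval()
   volume = torch.rand(1, 1, 256, 256, 256)  # a large volume, (B, C, D, H, W)

   with torch.no_grad():
       prediction = sliding_window_inference(
           inputs=volume,
           roi_size=(64, 64, 64),  # assumed window; too small may hurt quality
           sw_batch_size=1,
           predictor=model,
           overlap=0.25,           # fraction of overlap between windows
       )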
@@ -294,8 +304,3 @@ for the plots to work.
 .. _notebooks folder of the repository: https://github.com/AdaptiveMotorControlLab/CellSeg3d/tree/main/notebooks

 With this complete, you can repeat the workflow as needed.
docs/res/welcome.rst (23 additions, 21 deletions)
@@ -2,13 +2,13 @@ Introduction
 ===================

-Here you will find instructions on how to use the plug-in program.
+Here you will find instructions on how to use the plugin for direct-to-3D segmentation.
 If the installation was successful, you'll see the napari-cellseg3D plugin
 in the Plugins section of napari.

 This plugin was initially developed for the review of labeled cell volumes [#]_ from mice whole-brain samples
 imaged by mesoSPIM microscopy [#]_, and for training and using segmentation models from the MONAI project [#]_,
-or any custom model written in Pytorch.
+or any custom model written in PyTorch.
 It should be adaptable to other tasks related to detection of 3D objects, as long as labels are available.
@@ -26,32 +26,34 @@ From this page you can access the guides on the several modules available for yo
 * Defining custom models directly in the plugin (WIP) : :ref:`custom_model_guide`

-Requirements
+Installation
 --------------------------------------------

-.. important::
-   A **CUDA-capable GPU** is not needed but **very strongly recommended**, especially for training.
-
-Requires installation of PyTorch and some optional dependencies of MONAI.
+You can install ``napari-cellseg3d`` via pip:

-* For PyTorch, please see `PyTorch's website`_ for installation instructions, with or without CUDA depending on your hardware.
+``pip install napari-cellseg3d``

-* If you get errors from MONAI regarding missing readers, please see `MONAI's optional dependencies`_ page for instructions on getting the readers required by your images.

 A **CUDA-capable GPU** is not needed but **very strongly recommended**, especially for training.

-``pip install napari-cellseg3d``
+This package requires that you have napari installed first.

-For local installation, please run:
+It also depends on PyTorch and some optional dependencies of MONAI. These come with the pip package above, but if
+you need further assistance, see below.

-``pip install -e .``
+* For help with PyTorch, please see `PyTorch's website`_ for installation instructions, with or without CUDA depending on your hardware.

+* If you get errors from MONAI regarding missing readers, please see `MONAI's optional dependencies`_ page for instructions on getting the readers required by your images.

-Then go into Plugins > napari-cellseg3d, and choose which tool to use.
+Then go into Plugins > napari-cellseg3d, and choose which tool to use:

 - **Train**: This module allows you to train segmentation algorithms from labeled volumes.
@@ -73,11 +75,12 @@ See above for links to detailed guides regarding the usage of the modules.
 Acknowledgments & References
 ---------------------------------------------

-This plugin has been developed by Cyril Achard and Maxime Vidalfor the `Mathis Laboratory of Adaptive Motor Control`_.
+This plugin has been developed by Cyril Achard and Maxime Vidal, supervised by Mackenzie Mathis, for the `Mathis Laboratory of Adaptive Motor Control`_.

 We also greatly thank Timokleia Kousi for her contributions to this project and the `Wyss Center`_ for project funding.

-The TRAILMAP models and original weights used here all originate from the `TRAILMAP project on GitHub`_ [1]_.
+The TRAILMAP models and original weights used here were ported to PyTorch but originate from the `TRAILMAP project on GitHub`_ [1]_.
+We also provide a model that was trained in-house on mesoSPIM nuclei data in collaboration with Dr. Stephane Pages and Timokleia Kousi.

 This plugin mainly uses the following libraries and software:
@@ -88,7 +91,7 @@ This plugin mainly uses the following libraries and software:
 * `MONAI project website`_ (various models used here are credited `on their website`_)


-.. _Mathis Laboratory of adaptive motor control: http://www.mackenziemathislab.org/
+.. _Mathis Laboratory of Adaptive Motor Control: http://www.mackenziemathislab.org/
 .. _Wyss Center: https://wysscenter.ch/
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
 .. _napari website: https://napari.org/
@@ -102,4 +105,3 @@ This plugin mainly uses the following libraries and software:
 .. [#] Mapping mesoscale axonal projections in the mouse brain using a 3D convolutional network, Friedmann et al., 2020 ( https://pnas.org/cgi/doi/10.1073/pnas.1918465117 )
 .. [#] The mesoSPIM initiative: open-source light-sheet microscopes for imaging cleared tissue, Voigt et al., 2019 ( https://doi.org/10.1038/s41592-019-0554-0 )