Commit 4e3616d

Merge pull request #4 from AdaptiveMotorControlLab/mwm/updateDocs
Mwm/update docs [WIP]
2 parents a985102 + 8441003 commit 4e3616d

File tree: 2 files changed, +52 -45 lines


docs/res/guides/detailed_walkthrough.rst

Lines changed: 29 additions & 24 deletions
@@ -3,20 +3,27 @@
 Detailed walkthrough
 ===================================
 
-The following guide will show in details how to use the plugin's workflow, starting from a large labeled volume.
+The following guide will show you how to use the plugin's workflow, starting from a human-labeled annotation volume, to running inference on novel volumes.
 
 Preparing images and labels
 -------------------------------
 
-To get started with the entire workflow, you'll need at least one pair of image and corresponding labels;
+CellSeg3D was designed for cleared-brain tissue data (collected on mesoSPIM lightsheet systems). Specifically, we provide a series
+of deep learning models that we have found to work well on cortical whole-neuron data. We also provide support for MONAI models, and
+we have ported TRAILMAP to PyTorch and trained the model on mesoSPIM-collected data. We provide all the tooling for you to use these
+weights and also perform transfer learning by fine-tuning the model(s) on your data for even better performance!
+
+To get started with the entire workflow (i.e., fine-tuning on your data), you'll need at least one pair of an image and its corresponding labels;
 let's assume you have part of a cleared brain from mesoSPIM imaging as a large .tif file.
 
+If you want to test the models "as is", please see the "Inference" section in our docs.
+
 
 .. figure:: ../images/init_image_labels.png
    :scale: 40 %
    :align: center
 
-   Example of an anisotropic volume and its associated labels.
+   Example of an anisotropic volume (i.e., oftentimes the z resolution is not the same as x and y) and its associated labels.
 
 
 .. note::
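The caption above mentions anisotropy. As a minimal illustration (hypothetical voxel sizes, not values from the plugin), this is the kind of scale correction a viewer needs to display such a volume with true proportions:

```python
# Illustrative only: hypothetical voxel sizes for an anisotropic volume,
# where the z step is coarser than the in-plane (y, x) resolution.
voxel_size_um = (5.0, 1.0, 1.0)  # (z, y, x) spacing in micrometers

# Relative scale factors, e.g. usable as napari's `scale=` argument
# when adding the volume to the viewer:
base = min(voxel_size_um)
scale = tuple(v / base for v in voxel_size_um)
print(scale)  # -> (5.0, 1.0, 1.0)
```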
@@ -29,7 +36,7 @@ Cropping
 *****************
 
 To reduce memory requirements and build a dataset from a single, large volume,
-you can use the **cropping** tool to extract multiple smaller images from a large volume.
+you can use the **cropping** tool to extract multiple smaller images from a large volume for training.
 
 Simply load your image and labels (by checking the "Crop labels simultaneously" option),
 and select the volume size you desire to use.
@@ -75,24 +82,24 @@ Models for object detection
 Training
 *****************
 
-If you have a dataset of reasonably sized images with semantic labels, you're all set !
-First of all, load your data by inputting the paths to images and labels, as well as where you want the results to be saved.
-
-There are a few more options on this tab :
+If you have a dataset of reasonably sized images (see cropping above) with semantic labels, you're all set to proceed!
+First, load your data by inputting the paths to images and labels, as well as where you want the results to be saved.
 
-* Save as zip : simply copies the results in a zip archive for easier transfer
+There are a few more options on this tab:
 
 * Transfer weights : you can start the model with our pre-trained weights if your dataset comes from cleared brain tissue
-  image by mesoSPIM as well. If you have your own weights for the provided models, you can also choose to load them by
-  checking the related option; simply make sure they are compatible with the model.
+  imaged by a mesoSPIM (or other lightsheet). If you have your own weights for the provided models, you can also choose to load them by
+  checking the related option; simply make sure they are compatible with the model you selected.
 
   To import your own model, see :ref:`custom_model_guide`; please note this is still a WIP.
 
* Validation proportion : the percentage listed is how many images will be used for training versus validation.
   Validation can work with as little as one image; however, performance will greatly improve the more images there are.
-  Use 90% only if you have a very small dataset (less than 5 images)
+  Use 90% only if you have a very small dataset (less than 5 images).
 
-With this, we can switch to the next tab : data augmentation.
+* Save as zip : simply copies the results in a zip archive for easier transfer.
+
+Now, we can switch to the next tab : data augmentation.
 
 If you have cropped cubic images with a power of two as the edge length, you do not need to extract patches,
 your images are usable as is.
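The validation-proportion option above amounts to a simple train/validation split. As a rough sketch of the arithmetic it implies (illustrative only, not the plugin's actual splitting code):

```python
# Illustrative arithmetic for the "Validation proportion" option above;
# NOT the plugin's actual code, just the split it implies.
def split_dataset(images, train_fraction=0.9):
    """Split a list of image paths into train/validation subsets."""
    n_train = max(1, int(train_fraction * len(images)))
    n_train = min(n_train, len(images) - 1)  # keep at least one validation image
    return images[:n_train], images[n_train:]

# A very small dataset (5 images), where the docs suggest using 90%:
imgs = [f"img_{i}.tif" for i in range(5)]
train, val = split_dataset(imgs, 0.9)
print(len(train), len(val))  # -> 4 1
```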
@@ -109,11 +116,14 @@ In most cases this should be left enabled.
 
 Finally, the last tab lets you choose :
 
-* The model
+* The models
 
-  * SegResNet is a lightweight model (low memory requirements) with decent performance.
-  * TRAILMAP is a recent model trained for axonal detection in cleared tissue; use it if your dataset is similar
-  * VNet is a possibly more performant model than SegResNet but requires much more memory
+  * SegResNet is a lightweight model (low memory requirements) from MONAI originally designed for 3D fMRI data.
+  * VNet is a heavier (than SegResNet) CNN from MONAI designed for medical image segmentation.
+  * TRAILMAP is our PyTorch implementation of a 3D CNN model trained for axonal detection in cleared tissue.
+  * TRAILMAP-MS is our implementation in PyTorch, additionally trained on mouse cortical neural nuclei from mesoSPIM data.
+  * Note, the code is very modular, so it is relatively straightforward to use (and contribute) your own model as well.
+
 
 * The loss : for object detection in 3D volumes you'll likely want to use the Dice or Dice-focal Loss.
 
@@ -161,7 +171,7 @@ To start, simply choose which folder of images you'd like to run inference on, t
 Then, select the model you trained (see note below for SegResNet), and load your weights from training.
 
 .. note::
-   If you trained a SegResNet, set the counter below the model choice to the size of the images you trained the model on.
+   If you already trained a SegResNet, set the counter below the model choice to the size of the images you trained the model on.
    (Either use the size of the image itself if you did not extract patches, or the size of the nearest superior power of two of the patches you extracted)
 
 Example :
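The note above refers to the "nearest superior power of two" of a patch size. A small helper (illustrative only, not part of the plugin's API) shows what that means:

```python
# Illustrative helper (not part of the plugin's API): the "nearest superior
# power of two" mentioned in the note above, e.g. for a 100-voxel patch.
def next_power_of_two(n: int) -> int:
    p = 1
    while p < n:
        p *= 2
    return p

print(next_power_of_two(100))  # -> 128
print(next_power_of_two(64))   # -> 64
```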
@@ -171,7 +181,7 @@ Then, select the model you trained (see note below for SegResNet), and load your
 
 
 Next, you can choose to use window inference; use this if you have very large images.
-Please note that using too small a window might degrade performance, set the size appropriately.
+Please note that using too small of a window might degrade performance; set the size appropriately.
 
 You can also keep the dataset on the CPU to reduce memory usage, but this might slow down the inference process.
 
@@ -294,8 +304,3 @@ for the plots to work.
 .. _notebooks folder of the repository: https://github.com/AdaptiveMotorControlLab/CellSeg3d/tree/main/notebooks
 
 With this complete, you can repeat the workflow as needed.
-
-
-
-
-

docs/res/welcome.rst

Lines changed: 23 additions & 21 deletions
@@ -2,13 +2,13 @@ Introduction
 ===================
 
 
-Here you will find instructions on how to use the plug-in program.
+Here you will find instructions on how to use the plugin for direct-to-3D segmentation.
 If the installation was successful, you'll see the napari-cellseg3D plugin
 in the Plugins section of napari.
 
 This plugin was initially developed for the review of labeled cell volumes [#]_ from mice whole-brain samples
 imaged by mesoSPIM microscopy [#]_ , and for training and using segmentation models from the MONAI project [#]_,
-or any custom model written in Pytorch.
+or any custom model written in PyTorch.
 It should be adaptable to other tasks related to detection of 3D objects, as long as labels are available.
 
 
@@ -26,32 +26,34 @@ From this page you can access the guides on the several modules available for yo
 * Defining custom models directly in the plugin (WIP) : :ref:`custom_model_guide`
 
 
-Requirements
+Installation
 --------------------------------------------
 
-.. important::
-   A **CUDA-capable GPU** is not needed but **very strongly recommended**, especially for training.
-
-Requires installation of PyTorch and some optional dependencies of MONAI.
+You can install `napari-cellseg3d` via [pip]:
 
-* For PyTorch, please see `PyTorch's website`_ for installation instructions, with or without CUDA depending on your hardware.
+``pip install napari-cellseg3d``
 
-* If you get errors from MONAI regarding missing readers, please see `MONAI's optional dependencies`_ page for instructions on getting the readers required by your images.
+For a local (editable) installation, please run:
 
-.. _MONAI's optional dependencies: https://docs.monai.io/en/stable/installation.html#installing-the-recommended-dependencies
-.. _PyTorch's website: https://pytorch.org/get-started/locally/
+``pip install -e .``
 
-Installation
+Requirements
 --------------------------------------------
 
-You can install `napari-cellseg3d` via [pip]:
+.. important::
+    A **CUDA-capable GPU** is not needed but **very strongly recommended**, especially for training.
 
-``pip install napari-cellseg3d``
+This package requires that you have napari installed first.
 
-For local installation, please run:
+It also depends on PyTorch and some optional dependencies of MONAI. These come with the pip package above, but if
+you need further assistance, see below.
 
-``pip install -e .``
+* For help with PyTorch, please see `PyTorch's website`_ for installation instructions, with or without CUDA depending on your hardware.
 
+* If you get errors from MONAI regarding missing readers, please see `MONAI's optional dependencies`_ page for instructions on getting the readers required by your images.
+
+.. _MONAI's optional dependencies: https://docs.monai.io/en/stable/installation.html#installing-the-recommended-dependencies
+.. _PyTorch's website: https://pytorch.org/get-started/locally/
 
 
 Usage
@@ -61,7 +63,7 @@ To use the plugin, please run:
 
 ``napari``
 
-Then go into Plugins > napari-cellseg3d, and choose which tool to use.
+Then go into Plugins > napari-cellseg3d, and choose which tool to use:
 
 
 - **Train**: This module allows you to train segmentation algorithms from labeled volumes.
@@ -73,11 +75,12 @@ See above for links to detailed guides regarding the usage of the modules.
 
 Acknowledgments & References
 ---------------------------------------------
-This plugin has been developed by Cyril Achard and Maxime Vidal for the `Mathis Laboratory of Adaptive Motor Control`_.
+This plugin has been developed by Cyril Achard and Maxime Vidal, supervised by Mackenzie Mathis, for the `Mathis Laboratory of Adaptive Motor Control`_.
 
 We also greatly thank Timokleia Kousi for her contributions to this project and the `Wyss Center`_ for project funding.
 
-The TRAILMAP models and original weights used here all originate from the `TRAILMAP project on GitHub`_ [1]_.
+The TRAILMAP models and original weights used here were ported to PyTorch but originate from the `TRAILMAP project on GitHub`_ [1]_.
+We also provide a model that was trained in-house on mesoSPIM nuclei data in collaboration with Dr. Stephane Pages and Timokleia Kousi.
 
 This plugin mainly uses the following libraries and software:
 
@@ -88,7 +91,7 @@ This plugin mainly uses the following libraries and software:
 
 * `MONAI project website`_ (various models used here are credited `on their website`_)
 
-.. _Mathis Laboratory of adaptive motor control: http://www.mackenziemathislab.org/
+.. _Mathis Laboratory of Adaptive Motor Control: http://www.mackenziemathislab.org/
 .. _Wyss Center: https://wysscenter.ch/
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
 .. _napari website: https://napari.org/
@@ -102,4 +105,3 @@ This plugin mainly uses the following libraries and software:
 .. [#] Mapping mesoscale axonal projections in the mouse brain using a 3D convolutional network, Friedmann et al., 2020 ( https://pnas.org/cgi/doi/10.1073/pnas.1918465117 )
 .. [#] The mesoSPIM initiative: open-source light-sheet microscopes for imaging cleared tissue, Voigt et al., 2019 ( https://doi.org/10.1038/s41592-019-0554-0 )
 .. [#] MONAI Project website ( https://monai.io/ )
-
