
Commit a6b6861

large docs update
1 parent 9c9f8a1 commit a6b6861

11 files changed: +112 -29 lines changed

docs/index.rst

Lines changed: 2 additions & 0 deletions
@@ -10,6 +10,7 @@ Welcome to napari-cellseg-annotator's documentation!
1010
res/inference_module_guide
1111
res/training_module_guide
1212
res/cropping_module_guide
13+
res/custom_model_template
1314

1415

1516
.. toctree::
@@ -24,6 +25,7 @@ Welcome to napari-cellseg-annotator's documentation!
2425
res/launch_review
2526
res/model_framework
2627
res/model_workers
28+
res/model_instance_seg
2729
res/plugin_model_inference
2830
res/plugin_model_training
2931
res/utils

docs/res/cropping_module_guide.rst

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ you **change the position** of the cropped volumes and labels in the x,y and z p
4646

4747
.. note::
4848
When you are done you can save the cropped volume and labels directly with the
49-
**Quicksave** button on the lower left, which will save in the folder Fyou picked the image from, or as
49+
**Quicksave** button on the lower left, which will save in the folder you picked the image from, or as
5050
a separate folder if you loaded a folder as a stack.
5151
If you want more options (name, format) you can save by selecting the layer and then
5252
using **File -> Save selected layer**, or simply **CTRL+S** once you have selected the correct layer.

docs/res/custom_model_template.rst

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
1+
.. _custom_model_guide:
2+
3+
Advanced : Declaring a custom model
4+
=============================================
5+
6+
To add a custom model, you will need to place a **.py** file with the following structure in the *src/napari_cellseg_annotator/models* folder:
7+
8+
9+
::
10+
11+
def get_net():
12+
return ModelClass # should return the class of the model,
13+
# for example SegResNet or UNET
14+
15+
16+
def get_weights_file():
17+
return "weights_file.pth" # name of the weights file for the model,
18+
# which should be in *src/napari_cellseg_annotator/models/saved_weights*
19+
20+
21+
def get_output(model, input):
22+
out = model(input) # should return the model's output as [C, N, D,H,W]
23+
# (C: channel, N, batch size, D,H,W : depth, height, width)
24+
return out
25+
26+
27+
def get_validation(model, val_inputs):
28+
val_outputs = model(val_inputs) # should return the proper type for validation
29+
# with sliding_window_inference from MONAI
30+
return val_outputs
31+
32+
33+
def ModelClass(x1,x2...):
34+
# your Pytorch model here...
35+
return results
36+
37+
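As an illustration of the template above, here is one way such a file could look. This is a minimal sketch that assumes MONAI's SegResNet as the network and MONAI's sliding_window_inference for validation; the file name, weights file name and patch size are illustrative placeholders, not values taken from the plugin.

::

    # hypothetical example file, e.g. src/napari_cellseg_annotator/models/model_custom_segresnet.py
    from monai.inferers import sliding_window_inference
    from monai.networks.nets import SegResNet


    def get_net():
        # return the class itself, not an instance
        return SegResNet


    def get_weights_file():
        # placeholder name; the file must sit in src/napari_cellseg_annotator/models/saved_weights
        return "custom_segresnet.pth"


    def get_output(model, input):
        # run the forward pass and hand the raw output back to the plugin
        out = model(input)
        return out


    def get_validation(model, val_inputs):
        # sliding-window inference over (64, 64, 64) patches for validation
        return sliding_window_inference(
            val_inputs, roi_size=(64, 64, 64), sw_batch_size=1, predictor=model
        )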
4 binary image files changed (664 KB, 53 KB, -422 KB, 692 KB)

docs/res/inference_module_guide.rst

Lines changed: 38 additions & 18 deletions
@@ -13,7 +13,7 @@ Model Link to original paper
1313
=========== ================================================================================================
1414
VNet `Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation`_
1515
SegResNet `3D MRI brain tumor segmentation using autoencoder regularization`_
16-
TRAILMAP An emulation in Pytorch of the `TRAIlMAP project on GitHub`_
16+
TRAILMAP An emulation in Pytorch of the `TRAILMAP project on GitHub`_
1717
=========== ================================================================================================
1818

1919
.. _Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation: https://arxiv.org/pdf/1606.04797.pdf
@@ -23,38 +23,58 @@ TRAILMAP An emulation in Pytorch of the `TRAIlMAP project on GitHub`_
2323
Interface and functionalities
2424
--------------------------------
2525

26-
When launching the module, you will be asked to provide an image folder containing all the volumes you'd like to be labeled.
27-
All images with the chosen (**.tif** or **.tiff** currently supported) extension in this folder will be labeled.
28-
You can then choose an output folder, where all the results will be saved.
26+
.. image:: images/inference_plugin_layout.png
27+
:align: right
2928

30-
.. note::
31-
| The files will be saved using the following format :
32-
| ``{original_name}_{model}_{date & time}_pred{id}.file_ext``
33-
| For example, using a VNet on the third image of a folder, called "volume_1.tif" will yield :
34-
| *volume_1_VNet_2022_04_06_15_49_42_pred3.tif*
29+
* Loading data : When launching the module, you will be asked to provide an image folder containing all the volumes you'd like to be labeled.
30+
All images in this folder with the chosen extension (currently **.tif** or **.tiff** are supported) will be labeled.
31+
You can then choose an output folder, where all the results will be saved.
32+
33+
34+
35+
* Model choice : You can then choose one of the models listed above, which will be used for inference.
36+
37+
38+
39+
* Anisotropy : If you have anisotropic images and want to see your results without anisotropy, you can specify that you have anisotropic data
40+
and set the resolution of your image in microns; this will save & show the results without anisotropy (see the sketch after this list).
41+
42+
43+
44+
* Thresholding : You can perform thresholding to binarize your labels; all values beneath the confidence threshold will be set to 0.
45+
If you wish to use instance segmentation, it is recommended to use thresholding (also illustrated in the sketch after this list).
3546

36-
You can then choose one of the selected models above, which will be used for inference.
3747

38-
If you want to see your results without anisotropy when you have anistropic images, you can specify that you have anisotropic data
39-
and set the resolution of your image in micron, this wil save & show the results without anisotropy.
4048

41-
You can perform thresholding to binarize your labels, all values beneath the confidence threshold will be set to 0 using this.
42-
If you wish to use instance segmentation it is recommended to use threshlding.
49+
* Viewing results : You can also select whether you'd like to see the results in napari afterwards; by default the first image processed will be displayed,
50+
but you can choose to display up to ten at once. You can also request to see the originals.
51+
52+
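For readers who want to see what the anisotropy and thresholding options amount to numerically, here is a small, purely illustrative sketch; it is not the plugin's internal code, and the voxel sizes and confidence value are made-up examples.

::

    import numpy as np
    from scipy.ndimage import zoom


    def threshold_prediction(pred: np.ndarray, confidence: float = 0.5) -> np.ndarray:
        # zero out every voxel below the confidence threshold, keep the rest as is
        binary = pred.copy()
        binary[binary < confidence] = 0
        return binary


    def make_isotropic(volume: np.ndarray, voxel_size_zyx=(5.0, 1.0, 1.0)) -> np.ndarray:
        # stretch the coarsely sampled axes so all voxels end up the same size
        smallest = min(voxel_size_zyx)
        factors = [size / smallest for size in voxel_size_zyx]
        return zoom(volume, zoom=factors, order=1)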
4353

44-
You can also select whether you'd like to see the results in napari afterwards; by default the first image processed will be displayed,
45-
but you can choose to display up to ten at once. You can also request to see the originals.
4654

4755
When you are done choosing your parameters, you can press the **Start** button to begin the inference process.
4856
Once it has finished, results will be saved and then displayed in napari; each output will be paired with its original.
57+
On the left side, a progress bar and a log will keep you informed of the process.
58+
59+
60+
61+
.. note::
62+
| The files will be saved using the following format :
63+
| ``{original_name}_{model}_{date & time}_pred{id}.file_ext``
64+
| For example, running a VNet on the third image of a folder, named "somatomotor.tif", will yield the following name :
65+
| *somatomotor_VNet_2022_04_06_15_49_42_pred3.tif*
66+
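Purely for illustration, the naming pattern above could be reproduced like this; the values are hypothetical and the plugin builds these names itself.

::

    from datetime import datetime

    original_name, model_name, pred_id, ext = "somatomotor", "VNet", 3, ".tif"
    timestamp = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
    result_name = f"{original_name}_{model_name}_{timestamp}_pred{pred_id}{ext}"
    # e.g. somatomotor_VNet_2022_04_06_15_49_42_pred3.tif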
4967

5068
.. hint::
51-
| **Results** will be displayed using the **twilight shifted** colormap if raw or **turbo** if thresholded,
52-
| whereas the **original** image will be shown in the **inferno** colormap.
69+
| **Results** will be displayed using the **twilight shifted** colormap if raw or **turbo** if thresholding has been applied, whereas the **original** image will be shown in the **inferno** colormap.
5370
| Feel free to change the **colormap** or **contrast** when viewing results to ensure you can properly see the labels.
5471
| You'll most likely want to use **3D view** and **grid mode** in napari when checking results more broadly.
5572
5673
.. image:: images/inference_results_example.png
5774

75+
.. note::
76+
You can save the log after the worker is finished to easily remember which parameters you ran inference with.
77+
5878
Source code
5979
--------------------------------
6080
* :doc:`plugin_model_inference`

docs/res/model_instance_seg.rst

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
1-
model_workers.py
1+
model_instance_seg.py
22
===========================================
33

44

docs/res/training_module_guide.rst

Lines changed: 32 additions & 9 deletions
@@ -6,21 +6,38 @@ Training module guide
66
This module allows you to train pre-defined Pytorch models for cell segmentation.
77
Pre-defined models are stored in napari-cellseg-annotator/models.
88

9+
.. important::
10+
The machine learning models used by this program require all images of a dataset to be of the same size.
11+
Please ensure that all the images you are loading are of the **same size**, or use the **"extract patches" option (in the augmentation tab)** with an appropriately small size
12+
so that all images being used have a suitable size.
13+
914
The training module is comprised of several tabs.
10-
The first one will let you choose :
15+
16+
17+
1) The first one, **Data**, will let you choose :
1118

1219
* The images folder
1320
* The labels folder
14-
* The model
15-
* The number of samples to extract from each of your image to ensure correct size and perform data augmentation.
1621

17-
The second lets you define training parameters such as :
22+
2) The second tab, **Augmentation**, lets you define dataset and augmentation parameters such as :
23+
24+
* Whether to use images "as is" (**requires all images to be of the same size and cubic**) or extract patches.
25+
26+
* If you're extracting patches :
27+
* The size of patches to be extracted (ideally, please use a value **close to a power of two**, such as 120 or 60).
28+
* The number of samples to extract from each of your images to ensure correct size and perform data augmentation. A larger number will likely mean better performance, but longer training and larger memory usage.
29+
* Whether to perform data augmentation or not (elastic deformations, intensity shifts, random flipping, etc.; see the sketch after this list). A rule of thumb for augmentation is :
30+
* If you're using the patch extraction method, enable it if you have more than 10 samples per image with at least 5 images
31+
* If you have a large dataset and are not using patch extraction, enable it.
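The patch extraction and augmentation options described above map naturally onto MONAI dictionary transforms. The following is an assumed sketch; the exact transforms and parameter values used by the module may differ.

::

    from monai.transforms import (
        Compose,
        Rand3DElasticd,
        RandFlipd,
        RandShiftIntensityd,
        RandSpatialCropSamplesd,
    )

    patch_and_augment = Compose(
        [
            # extract several fixed-size patches per image
            RandSpatialCropSamplesd(
                keys=["image", "label"], roi_size=(64, 64, 64),
                num_samples=10, random_size=False,
            ),
            # data augmentation: random flips, intensity shifts, elastic deformation
            RandFlipd(keys=["image", "label"], prob=0.5, spatial_axis=0),
            RandShiftIntensityd(keys=["image"], offsets=0.1, prob=0.5),
            Rand3DElasticd(
                keys=["image", "label"], sigma_range=(5, 7),
                magnitude_range=(50, 150), prob=0.3,
            ),
        ]
    )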
1832

19-
* The loss function used for training
20-
* The batch size
21-
* The number of epochs
33+
34+
3) The third tab contains training-related parameters :
35+
* The model to use for training
36+
* The loss function used for training (see table below)
37+
* The batch size (a larger batch size means quicker training and possibly better performance, but increased memory usage)
38+
* The number of epochs (one possibility is to start with 60 epochs, and decrease or increase it depending on performance)
2239
* The epoch interval for validation (for example, if set to two, the module will use the validation dataset to evaluate the model with the dice metric every two epochs.)
23-
If the dice metric is better on that validation interval, the model weights will be saved in the results folder.
40+
If the dice metric is better on that validation interval, the model weights will be saved in the results folder.
2441

2542
The available loss functions are :
2643

@@ -43,12 +60,18 @@ Tversky loss `Tversky Loss from MONAI`_ with ``sigmoid=true``
4360
.. _Tversky Loss from MONAI: https://docs.monai.io/en/stable/losses.html#tverskyloss
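Only the Tversky row of that table is visible in this hunk; for reference, the MONAI loss classes it points to can be constructed directly as shown below (the other table entries are assumed to follow the same pattern).

::

    from monai.losses import DiceLoss, TverskyLoss

    tversky_loss = TverskyLoss(sigmoid=True)  # the entry shown in the table above
    dice_loss = DiceLoss(sigmoid=True)        # other MONAI losses are built the same way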
4461

4562
Once you are ready, press the Start button to begin training. The module will automatically load your dataset,
46-
perform data augmentation, select a CUDA device if one is present, and train the model.
63+
perform data augmentation if you chose to, select a CUDA device if one is present, and train the model.
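To make that sequence concrete (device selection, validation every N epochs, saving the best weights), here is a heavily condensed and assumed sketch of such a loop; it is not the module's actual implementation, and the network, epoch count and file name are placeholders.

::

    import torch
    from monai.losses import DiceLoss
    from monai.metrics import DiceMetric
    from monai.networks.nets import SegResNet

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = SegResNet(spatial_dims=3, in_channels=1, out_channels=1).to(device)
    loss_function = DiceLoss(sigmoid=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    dice_metric = DiceMetric(include_background=True, reduction="mean")

    # tiny fake dataset so the sketch runs end to end;
    # the real module builds loaders from your image and label folders
    train_loader = val_loader = [
        {
            "image": torch.rand(1, 1, 64, 64, 64),
            "label": (torch.rand(1, 1, 64, 64, 64) > 0.5).float(),
        }
    ]

    best_dice, validation_interval, max_epochs = 0.0, 2, 6
    for epoch in range(max_epochs):
        model.train()
        for batch in train_loader:
            inputs, labels = batch["image"].to(device), batch["label"].to(device)
            optimizer.zero_grad()
            loss = loss_function(model(inputs), labels)
            loss.backward()
            optimizer.step()

        if (epoch + 1) % validation_interval == 0:  # validate every N epochs
            model.eval()
            with torch.no_grad():
                for batch in val_loader:
                    inputs, labels = batch["image"].to(device), batch["label"].to(device)
                    preds = (torch.sigmoid(model(inputs)) > 0.5).float()
                    dice_metric(y_pred=preds, y=labels)
            score = dice_metric.aggregate().item()
            dice_metric.reset()
            if score > best_dice:  # keep only the best weights seen so far
                best_dice = score
                torch.save(model.state_dict(), "best_metric_model.pth")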
4764

4865
.. note::
4966
You can stop the training at any time by clicking on the start button again.
5067

5168
**The training will stop after the next validation interval is performed, to save the latest model should it be better.**
5269

70+
.. note::
71+
You can save the log to record the numerical values of the losses and validation metrics at each step. This log is also autosaved when training completes.
72+
5373
After two validation steps have been performed, the training loss values and validation metrics will be automatically plotted
5474
and shown in napari every time a validation step completes.
75+
This plot is currently saved automatically each time validation is performed; the final version is stored separately in the results folder.
76+
77+
