
Commit 4d83451 (merge, 2 parents: 4affca7 + 816e375)

26 files changed: +417 −159 lines

.gitignore

Lines changed: 1 addition & 1 deletion

@@ -90,7 +90,7 @@ venv/
 
 ########
 #project specific
-#dataset
+#dataset, weights, old logos
 /src/napari_cellseg3d/models/dataset/
 /src/napari_cellseg3d/models/saved_weights/
 /docs/res/logo/old_logo/

docs/conf.py

Lines changed: 1 addition & 1 deletion

@@ -136,7 +136,7 @@
 
 # The name of an image file (relative to this directory) to place at the top
 # of the sidebar.
-html_logo = "res/logo/logo_alpha.png"
+html_logo = "res/logo/logo_diag.png"
 
 # The name of an image file (within the static path) to use as favicon of the
 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32

docs/res/guides/training_module_guide.rst

Lines changed: 21 additions & 15 deletions

@@ -23,9 +23,12 @@ TRAILMAP An emulation of the `TRAILMAP project on GitHub`_ using `3DUne
 .. _3DUnet for Pytorch: https://github.com/wolny/pytorch-3dunet
 
 .. important::
-    The machine learning models used by this program require all images of a dataset to be of the same size.
-    Please ensure that all the images you are loading are of the **same size**, or to use the **"extract patches" (in augmentation tab)** with an appropriately small size
-    to ensure all images being used by the model are of a workable size.
+    | The machine learning models used by this program require all images of a dataset to be of the same size.
+    | Please ensure that all the images you are loading are of the **same size**, or use **"extract patches"** (in the augmentation tab) with an appropriately small size, so that all images used by the model are of a workable size.
+
+.. important::
+    | **All image sizes should be as close to a power of two as possible, ideally a power of two.**
+    | Images are automatically padded; a 64-pixel cube will be used as is, but a 65-pixel cube will be padded up to 128 pixels, resulting in much higher memory use.
 
 The training module is comprised of several tabs.
 
@@ -41,36 +44,39 @@ The training module is comprised of several tabs.
 * Whether to use images "as is" (**requires all images to be of the same size and cubic**) or extract patches.
 
 * If you're extracting patches :
-    * The size of patches to be extracted (ideally, please use a value **close to a power of two**, such as 120 or 60.
-    * The number of samples to extract from each of your image to ensure correct size and perform data augmentation. A larger number will likely mean better performances, but longer training and larger memory usage.
+
+    * The size of patches to be extracted (ideally, use a value **close to a power of two**, such as 120 or 60, to ensure a workable size).
+    * The number of samples to extract from each of your images. A larger number will likely mean better performance, but longer training and larger memory usage.
+
 * Whether to perform data augmentation or not (elastic deformations, intensity shifts, random flipping, etc.). A rule of thumb for augmentation is :
+
     * If you're using the patch extraction method, enable it if you are using more than 10 samples per image, with at least 5 images
     * If you have a large dataset and are not using patch extraction, enable it.
 
 
 3) The third contains training related parameters :
 
-* The model to use for training (see table above)
-* The loss function used for training (see table below)
-* The batch size (larger means quicker training and possibly better performance but increased memory usage)
-* The number of epochs (a possibility is to start with 60 epochs, and decrease or increase depending on performance.)
-* The epoch interval for validation (for example, if set to two, the module will use the validation dataset to evaluate the model with the dice metric every two epochs.)
+* The **model** to use for training (see table above)
+* The **loss function** used for training (see table below)
+* The **batch size** (larger means quicker training and possibly better performance, but increased memory usage)
+* The **number of epochs** (a possibility is to start with 60 epochs, and decrease or increase depending on performance)
+* The **epoch interval** for validation (for example, if set to two, the module will use the validation dataset to evaluate the model with the dice metric every two epochs)
 
-If the dice metric is better on that validation interval, the model weights will be saved in the results folder.
+.. note::
+    If the dice metric is better on a given validation interval, the model weights will be saved in the results folder.
 
 The available loss functions are :
 
-======================== ====================================================
+======================== ================================================================================================
 Function                 Reference
-======================== ====================================================
+======================== ================================================================================================
 Dice loss                `Dice Loss from MONAI`_ with ``sigmoid=true``
 Focal loss               `Focal Loss from MONAI`_
 Dice-Focal loss          `Dice-focal Loss from MONAI`_ with ``sigmoid=true`` and ``lambda_dice = 0.5``
 Generalized Dice loss    `Generalized dice Loss from MONAI`_ with ``sigmoid=true``
 Dice-CE loss             `Dice-CE Loss from MONAI`_ with ``sigmoid=true``
 Tversky loss             `Tversky Loss from MONAI`_ with ``sigmoid=true``
-======================== ====================================================
-
+======================== ================================================================================================
 .. _Dice Loss from MONAI: https://docs.monai.io/en/stable/losses.html#diceloss
 .. _Focal Loss from MONAI: https://docs.monai.io/en/stable/losses.html#focalloss
 .. _Dice-focal Loss from MONAI: https://docs.monai.io/en/stable/losses.html#dicefocalloss
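
The losses in the table above are the standard classes from monai.losses, and the guide states that weights are saved whenever the dice metric improves on a validation interval. Below is a minimal, hypothetical sketch of how those pieces could fit together; it is not the plugin's actual training code, and model, val_loader, and the results path are placeholders:

# A hypothetical sketch, not the plugin's training loop: MONAI losses configured
# as in the table above, plus checkpointing on the validation Dice score.
import torch
from monai.losses import (
    DiceLoss,
    FocalLoss,
    DiceFocalLoss,
    GeneralizedDiceLoss,
    DiceCELoss,
    TverskyLoss,
)
from monai.metrics import DiceMetric

losses = {
    "Dice loss": DiceLoss(sigmoid=True),
    "Focal loss": FocalLoss(),
    "Dice-Focal loss": DiceFocalLoss(sigmoid=True, lambda_dice=0.5),
    "Generalized Dice loss": GeneralizedDiceLoss(sigmoid=True),
    "Dice-CE loss": DiceCELoss(sigmoid=True),
    "Tversky loss": TverskyLoss(sigmoid=True),
}

dice_metric = DiceMetric(include_background=True, reduction="mean")
best_dice = -1.0

def validate_and_checkpoint(model, val_loader, results_path="results/best_model.pth"):
    """Evaluate with the Dice metric and keep the weights only if the score improved."""
    global best_dice
    model.eval()
    with torch.no_grad():
        for val_images, val_labels in val_loader:  # placeholder loader yielding (image, label) batches
            predictions = (torch.sigmoid(model(val_images)) > 0.5).float()
            dice_metric(y_pred=predictions, y=val_labels)
    score = dice_metric.aggregate().item()
    dice_metric.reset()
    if score > best_dice:  # "if the dice metric is better on a given validation interval"
        best_dice = score
        torch.save(model.state_dict(), results_path)  # weights saved in the results folder
    return score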

docs/res/logo/logo_alpha.png

121 KB (binary image)

docs/res/logo/logo_alpha_flat.png

−242 KB (binary file not shown)

docs/res/logo/logo_diag.png

276 KB (binary image)

docs/res/logo/old_logo_alpha.png

−268 KB (binary file not shown)

docs/res/welcome.rst

Lines changed: 1 addition & 0 deletions

@@ -68,6 +68,7 @@ Then go into Plugins > napari-cellseg3d, and choose which tool to use.
 - **Review**: This module allows you to review your labels, from predictions or manual labeling, and correct them if needed. It then saves the status of each file in a csv, for easier monitoring.
 - **Utilities**: This module allows you to use several utilities, e.g. to crop your volumes and labels, compute prediction scores or convert labels
 
+See above for links to detailed guides regarding the usage of the modules.
 
 Acknowledgments & References
 ---------------------------------------------

src/napari_cellseg3d/_tests/test_dock_widget.py

Lines changed: 1 addition & 2 deletions

@@ -7,7 +7,6 @@
 
 
 def test_prepare(make_napari_viewer):
-    path_to_csv = Path(os.path.dirname(os.path.realpath(__file__)) + "/res")
     path_image = Path(
         os.path.dirname(os.path.realpath(__file__)) + "/res/test.tif"
     )
@@ -16,7 +15,7 @@ def test_prepare(make_napari_viewer):
     viewer.add_image(image)
    widget = Datamanager(viewer)
 
-    widget.prepare(path_to_csv, ".tif", "", False, False)
+    widget.prepare(path_image, ".tif", "", False, False)
 
     assert widget.filetype == ".tif"
     assert widget.as_folder == False

src/napari_cellseg3d/_tests/test_utils.py

Lines changed: 7 additions & 0 deletions

@@ -67,6 +67,13 @@ def test_get_padding_dim():
 
     assert pad == [2048, 32, 64]
 
+    tensor = torch.randn(65, 70, 80)
+    size = tensor.size()
+
+    pad = utils.get_padding_dim(size)
+
+    assert pad == [128, 128, 128]
+
 
 def test_normalize_x():
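
The new assertions exercise utils.get_padding_dim: dimensions of 65, 70, and 80 are all padded up to 128, while the earlier case (2048, 32, 64) is left unchanged, i.e. each dimension appears to be rounded up to the nearest power of two, as the training guide warns. A minimal sketch of a helper with that behaviour, assuming this is what get_padding_dim computes (the plugin's real implementation may differ):

import math

def next_power_of_two_padding(size):
    """Round each dimension up to the nearest power of two (sketch of the behaviour
    asserted in the test above, not the plugin's actual utils.get_padding_dim)."""
    return [1 if s <= 1 else 2 ** math.ceil(math.log2(s)) for s in size]

assert next_power_of_two_padding((65, 70, 80)) == [128, 128, 128]
assert next_power_of_two_padding((2048, 32, 64)) == [2048, 32, 64]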
