README.md (3 additions, 2 deletions)
@@ -151,8 +151,9 @@ Distributed under the terms of the [MIT] license.
 
 ## Acknowledgements
 
-This plugin was developed by Cyril Achard, Maxime Vidal, Mackenzie Mathis. This work was funded, in part, from the Wyss Center to the [Mathis Laboratory of Adaptive Motor Control](https://www.mackenziemathislab.org/).
-
+This plugin was developed by Cyril Achard, Maxime Vidal, Mackenzie Mathis.
+This work was funded, in part, from the Wyss Center to the [Mathis Laboratory of Adaptive Motor Control](https://www.mackenziemathislab.org/).
+Please refer to the documentation for full acknowledgements.
 
 ## Plugin base
 
 This [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.
docs/res/guides/cropping_module_guide.rst (3 additions, 3 deletions)
@@ -33,9 +33,9 @@ If you'd like to change the size of the volume, change the parameters as previou
 
 Creating new layers
 ---------------------------------
-To "zoom in" your volume, you can use the "Create new layers" checkbox to make a new layer not controlled by the plugin next
-time you hit Start. This way, you can first select your region of interest by using the tool as described above,
-the enable the option, select the cropped layer, and define a smaller crop size to have easier access to your region of interest.
+To "zoom in" your volume, you can use the "Create new layers" checkbox to make a new cropping layer controlled by the sliders
+next time you hit Start. This way, you can first select your region of interest by using the tool as described above,
+then enable the option, select the cropped region produced before as the input layer, and define a smaller crop size in order to crop within your region of interest.
docs/res/guides/utils_module_guide.rst (9 additions, 1 deletion)
@@ -4,13 +4,21 @@ Label conversion utility guide
 ==================================
 
 This utility will let you convert labels to various different formats.
+
 You will have to specify the results directory for saving; afterwards you can run each action on a folder or on the currently selected layer.
 
 You can :
 
+* Crop 3D volumes :
+  Please refer to :ref:`cropping_module_guide` for a guide on using the cropping utility.
+
 * Convert to instance labels :
-  This will convert 0/1 semantic labels to instance label, with a unique ID for each object using the watershed method.
+  This will convert 0/1 semantic labels to instance labels, with a unique ID for each object.
+  The available methods for this are :
 
+  * Connected components : a simple method that assigns a unique ID to each connected component. It does not work well for touching objects (they will often be fused), but works for anisotropic volumes.
+  * Watershed : a method based on topographic maps. It works well for touching objects and anisotropic volumes, though touching objects may occasionally still be fused.
+  * Voronoi-Otsu : a method based on Voronoi diagrams. It works well for touching objects, but only for isotropic volumes.
 
 * Convert to semantic labels :
   This will convert instance labels with unique IDs per object into 0/1 semantic labels, for example for training.
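As an illustration of the two conversions, here is a minimal sketch using `scipy.ndimage` with connected components only; the plugin's watershed and Voronoi-Otsu methods are not shown, and the arrays are toy data:

```python
import numpy as np
from scipy import ndimage

# 0/1 semantic labels containing two non-touching objects:
semantic = np.zeros((6, 6, 6), dtype=np.uint8)
semantic[0:2, 0:2, 0:2] = 1
semantic[4:6, 4:6, 4:6] = 1

# Semantic -> instance: each connected component gets a unique ID (1, 2, ...).
instance, n_objects = ndimage.label(semantic)

# Instance -> semantic: collapse all IDs back to 0/1, e.g. for training.
semantic_again = (instance > 0).astype(np.uint8)
```

Note that connected components would assign a single ID to two *touching* objects, which is why the watershed and Voronoi-Otsu options exist.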
docs/res/welcome.rst (23 additions, 13 deletions)
@@ -38,22 +38,28 @@ You can install `napari-cellseg3d` via [pip]:
 
 ``pip install napari-cellseg3d``
 
-For local installation, please run:
+For local installation after cloning, please run in the CellSeg3D folder:
 
 ``pip install -e .``
 
 Requirements
 --------------------------------------------
 
+.. note::
+    A **CUDA-capable GPU** is not needed but **very strongly recommended**, especially for training and possibly inference.
+
 .. important::
-    A **CUDA-capable GPU** is not needed but **very strongly recommended**, especially for training.
+    This package requires that you have napari installed with PyQt5 or PySide2 first.
+    If you do not have a Qt backend you can use :
 
-    This package requires you have napari installed first.
+    ``pip install napari-cellseg3d[all]``
+    to install PyQt5 by default.
 
-It also depends on PyTorch and some optional dependencies of MONAI. These come in the pip package above, but if
+It also depends on PyTorch and some optional dependencies of MONAI. These come in the pip package as requirements, but if
 you need further assistance see below.
 
 * For help with PyTorch, please see `PyTorch's website`_ for installation instructions, with or without CUDA depending on your hardware.
+  Depending on your setup, you might wish to install torch first.
 
 * If you get errors from MONAI regarding missing readers, please see `MONAI's optional dependencies`_ page for instructions on getting the readers required by your images.
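Since PyTorch may be installed with or without CUDA support, a quick way to check what will actually be used is sketched below. This is an illustrative snippet, not part of the plugin; `torch` is only imported if it is present:

```python
import importlib.util

def cuda_available() -> bool:
    """Return True only if PyTorch is installed and detects a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False  # PyTorch is not installed at all
    import torch
    return torch.cuda.is_available()

print(cuda_available())
```

If this prints ``False`` on a machine with an NVIDIA GPU, a CPU-only PyTorch build was likely installed; see PyTorch's website for the CUDA-enabled install command.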
@@ -70,14 +76,13 @@ To use the plugin, please run:
 
 Then go into Plugins > napari-cellseg3d, and choose which tool to use:
 
-
 - **Review**: This module allows you to review your labels, from predictions or manual labeling, and correct them if needed. It then saves the status of each file in a csv, for easier monitoring
 - **Inference**: This module allows you to use pre-trained segmentation algorithms on volumes to automatically label cells
 - **Training**: This module allows you to train segmentation algorithms from labeled volumes
 - **Utilities**: This module allows you to use several utilities, e.g. to crop your volumes and labels, compute prediction scores or convert labels
 - **Help/About...** : Quick access to version info, Github page and docs
 
-See above for links to detailed guides regarding the usage of the modules.
+See the documentation for links to detailed guides regarding the usage of the modules.
 
 Acknowledgments & References
 ---------------------------------------------
@@ -90,24 +95,29 @@ We also provide a model that was trained in-house on mesoSPIM nuclei data in col
 
 This plugin mainly uses the following libraries and software:
 
-* `napari website`_
+* `napari`_
 
-* `PyTorch website`_
+* `PyTorch`_
 
-* `MONAI project website`_ (various models used here are credited `on their website`_)
+* `MONAI project`_ (various models used here are credited `on their website`_)
 
+* `pyclEsperanto`_ (for the Voronoi Otsu labeling) by Robert Haase
+
+* A custom re-implementation of the `WNet model`_ by Xia and Kulis [#]_
 
 .. _Mathis Laboratory of Adaptive Motor Control: http://www.mackenziemathislab.org/
 .. _Wyss Center: https://wysscenter.ch/
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
-.. _napari website: https://napari.org/
-.. _PyTorch website: https://pytorch.org/
-.. _MONAI project website: https://monai.io/
+.. _napari: https://napari.org/
+.. _PyTorch: https://pytorch.org/
+.. _MONAI project: https://monai.io/
 .. _on their website: https://docs.monai.io/en/stable/networks.html#nets
 .. [#] Mapping mesoscale axonal projections in the mouse brain using a 3D convolutional network, Friedmann et al., 2020 ( https://pnas.org/cgi/doi/10.1073/pnas.1918465117 )
 .. [#] The mesoSPIM initiative: open-source light-sheet microscopes for imaging cleared tissue, Voigt et al., 2019 ( https://doi.org/10.1038/s41592-019-0554-0 )