## Detect objects

### Built-in cell detection

QuPath has a built-in cell detection feature, available in `Analyze > Cell detection`. A full tutorial is available in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/cell_detection.html).

Briefly, this uses a watershed algorithm to find bright spots and can perform a cell expansion to estimate the full cell shape based on the detected nuclei. Therefore, this works best to segment nuclei but one can expect good performance for cells as well, depending on the imaging and staining conditions.

!!! tip
    In `scripts/qupath-utils/segmentation`, there is `watershedDetectionFilters.groovy` which uses this feature from a script. It further allows you to filter out detected cells based on shape measurements as well as fluorescence intensity in several channels and cell compartments.

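
This feature can also be driven from a script, which is what `watershedDetectionFilters.groovy` builds on. Below is a minimal, hypothetical sketch of the general pattern; the channel name, parameter values and measurement name are illustrative placeholders to adapt to your images, not values taken from that script:

```groovy
// Hypothetical sketch: run QuPath's built-in watershed cell detection from a
// script, then filter the resulting detections. All values are placeholders.
selectAnnotations()  // detection runs within the selected annotations

runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{' +
        '"detectionImage": "DAPI", ' +     // channel used to detect nuclei
        '"requestedPixelSizeMicrons": 0.5, ' +
        '"backgroundRadiusMicrons": 8.0, ' +
        '"sigmaMicrons": 1.5, ' +
        '"minAreaMicrons": 10.0, ' +
        '"maxAreaMicrons": 400.0, ' +
        '"threshold": 100.0, ' +
        '"cellExpansionMicrons": 3.0, ' +  // expand nuclei to estimate cell shape
        '"includeNuclei": true, ' +
        '"smoothBoundaries": true, ' +
        '"makeMeasurements": true}')

// Filter out small detections (exact measurement names depend on your image)
def small = getDetectionObjects().findAll { measurement(it, 'Nucleus: Area') < 20 }
removeObjects(small, true)
```

The same pattern extends to filtering on intensity measurements in any channel or cell compartment, as `watershedDetectionFilters.groovy` does.
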
### Pixel classifier

Another very powerful and versatile way to segment cells is through machine learning. Note the term "machine" and not "deep", as it relies on statistics theory from the 1980s. QuPath provides a user-friendly interface for this, similar to what [ilastik](https://www.ilastik.org/) provides.

The general idea is to train a model to classify every pixel as signal or as background. You can find good resources on how to proceed in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/pixel_classification.html) and some additional tips and tutorials on Michael Nelson's blog ([here](https://www.imagescientist.com/mpx-pixelclassifier) and [here](https://www.imagescientist.com/brightfield-4-pixel-classifier)).

First and foremost, you should use a QuPath project dedicated to the training of the pixel classifier.

1. You should choose some images from different animals, with different imaging conditions (staining efficiency and LED intensity) and in different regions (e.g. with different objects' shapes, sizes, sparsity...). The goal is to get the most diversity of objects you could encounter in your experiments. 10 images are more than enough!
2. Import those images to the new, dedicated QuPath project.
3. Create the classifications you'll need, "Cells: marker+" for example. The "Ignore*" classification is used for the background.
4. Head to `Classify > Pixel classification > Train pixel classifier`, and turn on `Live prediction`.
5. Load all your images in `Load training`.
6. In `Advanced settings`, check `Reweight samples` to help make sure a classification is not over-represented.
7. Modify the different parameters:
    + `Classifier`: typically `RTrees` or `ANN_MLP`. This can be changed dynamically afterwards to see which works best for you.
    + `Resolution`: this is the pixel size used. This is a trade-off between accuracy and speed. If your objects are only composed of a few pixels, you'll want the full resolution; for big objects, reducing the resolution will be faster.
    + `Features`: this is the core of the process -- where you choose the filters. In `Edit`, you'll need to choose:
        - The fluorescence channels
        - The scales, i.e. the size of the filters applied to the image. The bigger the scale, the coarser the filter. Again, this will depend on the size of the objects you want to segment.
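
As a side note, once a classifier is trained and saved in the project, it can also be applied directly from a Groovy script with QuPath's built-in `createAnnotationsFromPixelClassifier`. A minimal sketch, where the classifier name and both area thresholds are placeholders:

```groovy
// Sketch: apply a pixel classifier saved in the current QuPath project.
// 'my_classifier' and both area thresholds are placeholders.
selectAnnotations()  // classify within the selected annotations
createAnnotationsFromPixelClassifier(
        'my_classifier',  // classifier name, as saved in the project
        50.0,             // minimum object area (µm²)
        10.0)             // minimum hole area (µm²)
```

Note that the pipeline described in this guide instead exports probability maps and segments them outside QuPath, as described next.
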

You will first need to export those with the `exportPixelClassifierProbabilities` script.

Then the segmentation script can:

+ find punctual objects as polygons (with a shape) or points (punctual) that can be counted.
+ trace fibers with skeletonization to create lines whose lengths can be measured.

Several parameters have to be specified by the user; see the segmentation script [API reference](api-script-segment.md). This script will generate [GeoJson](tips-formats.md#json-and-geojson-files) files that can be imported back to QuPath with the `importGeojsonFiles.groovy` script.
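
For reference, importing GeoJson objects from a script ultimately relies on QuPath's `PathIO` API. A minimal, hypothetical sketch of the kind of calls involved (the file path is a placeholder; use the provided `importGeojsonFiles.groovy` script for the actual workflow):

```groovy
// Sketch: read objects from a GeoJSON file and add them to the current image.
// The file path is a placeholder.
import qupath.lib.io.PathIO

def objects = PathIO.readObjects(new File('/path/to/objects.geojson'))
addObjects(objects)
fireHierarchyUpdate()  // refresh the object hierarchy
```
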