Commit b870b50

Merge branch 'main' of git@github.com:TeamNCMC/histoquant.git
2 parents bc578e0 + c80abb7 commit b870b50

File tree

1 file changed (+6, -6 lines)


docs/guide-qupath-objects.md

Lines changed: 6 additions & 6 deletions
@@ -48,15 +48,15 @@ Then, choose the following options :
 ## Detect objects

 ### Built-in cell detection

-QuPath has a built-in cell detection feature, available in `Analyze > Cell detection`. You hava a full tutorial in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/cell_detection.html).
+QuPath has a built-in cell detection feature, available in `Analyze > Cell detection`. You have a full tutorial in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/cell_detection.html).

 Briefly, this uses a watershed algorithm to find bright spots and can perform a cell expansion to estimate the full cell shape based on the detected nuclei. It therefore works best for segmenting nuclei, but one can expect good performance on whole cells as well, depending on the imaging and staining conditions.

 !!! tip
     In `scripts/qupath-utils/segmentation`, there is `watershedDetectionFilters.groovy`, which uses this feature from a script. It further allows you to filter out detected cells based on shape measurements as well as fluorescence intensity in several channels and cell compartments.

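For intuition, the watershed-plus-expansion idea can be sketched with scikit-image. This is an illustrative sketch of the general technique, not QuPath's actual implementation; the synthetic image, seeds, and thresholds are assumptions:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.draw import disk
from skimage.feature import peak_local_max
from skimage.filters import gaussian, threshold_otsu
from skimage.segmentation import expand_labels, watershed

# Synthetic channel with two bright "nuclei"
img = np.zeros((64, 64), dtype=float)
for center in [(20, 20), (45, 45)]:
    rr, cc = disk(center, 6)
    img[rr, cc] = 1.0
img = gaussian(img, sigma=1)

# 1. Threshold to find bright regions
mask = img > threshold_otsu(img)

# 2. Watershed on the distance transform, seeded at local maxima,
#    splits touching nuclei into separate labels
distance = ndi.distance_transform_edt(mask)
blobs, _ = ndi.label(mask)
coords = peak_local_max(distance, min_distance=5, labels=blobs)
seeds = np.zeros_like(blobs)
seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)
nuclei = watershed(-distance, seeds, mask=mask)

# 3. "Cell expansion": grow each nucleus label by a few pixels to
#    approximate the full cell outline
cells = expand_labels(nuclei, distance=3)

print(f"{nuclei.max()} nuclei detected")
```

The expansion step is why the detection works even when only nuclei are stained: the cell outline is extrapolated geometrically, not measured.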
 ### Pixel classifier
-Another very powerful and versatile way to segment cells if through machine learning. Note the term "machine" and not "deep" as it relies on statistics theory from the 1980s. QuPath provides an user-friendly interface to that, similar to what [ilastik](https://www.ilastik.org/) provides.
+Another very powerful and versatile way to segment cells is through machine learning. Note the term "machine" and not "deep", as it relies on statistical theory from the 1980s. QuPath provides a user-friendly interface for this, similar to what [ilastik](https://www.ilastik.org/) provides.

 The general idea is to train a model to classify every pixel as signal or background. You can find good resources on how to proceed in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/pixel_classification.html) and some additional tips and tutorials on Michael Nelson's blog ([here](https://www.imagescientist.com/mpx-pixelclassifier) and [here](https://www.imagescientist.com/brightfield-4-pixel-classifier)).
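The underlying idea (a classic random forest labelling each pixel from a handful of filter responses, trained on sparse hand-painted annotations) can be sketched with scikit-learn and scikit-image. Everything below (image, features, labels) is an illustrative assumption, not QuPath's internals:

```python
import numpy as np
from skimage.filters import gaussian, sobel
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic channel: bright square ("signal") on a noisy background
img = rng.normal(0.1, 0.05, (64, 64))
img[20:40, 20:40] += 0.8

# Per-pixel features: raw intensity + smoothed + edge response,
# mimicking the filter features a pixel classifier is trained on
features = np.stack(
    [img, gaussian(img, sigma=2), sobel(img)], axis=-1
).reshape(-1, 3)

# Sparse annotations, as you would paint them in QuPath:
# 0 = unlabelled, 1 = signal class, 2 = background class
labels = np.zeros((64, 64), dtype=int)
labels[25:35, 25:35] = 1
labels[:10, :10] = 2
train = labels.ravel() > 0

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[train], labels.ravel()[train])

# Predict every pixel -> a full classification map
pred = clf.predict(features).reshape(64, 64)
```

Note that only a tiny fraction of pixels is annotated, yet the model classifies the whole image; this is why live prediction on a few painted strokes already gives useful feedback.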

@@ -69,13 +69,13 @@ First and foremost, you should use a QuPath project dedicated to the training of
 1. You should choose some images from different animals, with different imaging conditions (staining efficiency and LED intensity) in different regions (e.g. with different objects' shapes, sizes, sparsity...). The goal is to cover the widest diversity of objects you could encounter in your experiments. 10 images is more than enough!
 2. Import those images into the new, dedicated QuPath project.
-3. Create the classifications you'll need, "Cells: marker+" for example. The "Ignore*" classification is used for the background.
+3. Create the classifications you'll need, "Cells: marker+" for example. The "Ignore*" classification is used for the background.
 4. Head to `Classify > Pixel classification > Train pixel classifier`, and turn on `Live prediction`.
 5. Load all your images in `Load training`.
 6. In `Advanced settings`, check `Reweight samples` to help make sure a classification is not over-represented.
 7. Modify the different parameters:
    + `Classifier`: typically, `RTrees` or `ANN_MLP`. This can be changed dynamically afterwards to see which works best for you.
-   + `Resolution` : this is the pixel size used. This is a trade-off between accuracy and speed. If your objects are only composed of a few pixels, you'll the full resolution, for big objects reducing the resolution will be faster.
+   + `Resolution`: this is the pixel size used. This is a trade-off between accuracy and speed. If your objects are only composed of a few pixels, you'll want the full resolution; for big objects, reducing the resolution will be faster.
    + `Features`: this is the core of the process -- where you choose the filters. In `Edit`, you'll need to choose:
      - The fluorescence channels
      - The scales, i.e. the size of the filters applied to the image. The bigger the scale, the coarser the filter. Again, this will depend on the size of the objects you want to segment.
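The "scales" setting above amounts to computing each feature at several filter sizes. A minimal sketch of such a multiscale feature stack, assuming Gaussian smoothing as the filter (QuPath offers more filter types than this):

```python
import numpy as np
from skimage.filters import gaussian

def multiscale_features(channel, scales=(1, 2, 4, 8)):
    """One smoothed copy of the channel per scale (sigma).

    Larger sigmas respond to coarser structures, which is the
    intuition behind QuPath's 'scales' parameter. Illustrative
    sketch only, not QuPath's exact feature set.
    """
    return np.stack([gaussian(channel, sigma=s) for s in scales], axis=-1)

channel = np.zeros((32, 32))
channel[12:20, 12:20] = 1.0  # an 8-px-wide object

feats = multiscale_features(channel)
print(feats.shape)  # one feature plane per scale
```

Picking scales close to your objects' size keeps the features informative without wasting computation on irrelevant structure.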
@@ -108,7 +108,7 @@ You will first need to export those with the `exportPixelClassifierProbabilities
 Then the segmentation script can:

-+ find punctal objects as polygons (with a shape) or points (punctal) than can be counted.
++ find punctual objects as polygons (with a shape) or points (punctual) that can be counted.
 + trace fibers with skeletonization to create lines whose lengths can be measured.

 Several parameters have to be specified by the user; see the segmentation script [API reference](api-script-segment.md). This script will generate [GeoJson](tips-formats.md#json-and-geojson-files) files that can be imported back into QuPath with the `importGeojsonFiles.groovy` script.
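The fiber-tracing step relies on skeletonization: reducing a thick binary mask to a one-pixel-wide centerline whose pixels can then be counted or vectorized into a measurable line. A sketch with scikit-image (the pixel-count length below is a crude estimate and not the actual script's measurement):

```python
import numpy as np
from skimage.morphology import skeletonize

# Binary mask of a thick horizontal "fiber"
mask = np.zeros((20, 40), dtype=bool)
mask[8:12, 5:35] = True  # 4 px thick, 30 px long

# Skeletonize: collapse the fiber to a 1-px-wide centerline
skel = skeletonize(mask)

# Crude length estimate: count skeleton pixels
# (ignores the sqrt(2) weight of diagonal steps)
length_px = int(skel.sum())
print(length_px)
```

This is why skeleton length, unlike mask area, is insensitive to how thick the staining makes the fiber appear.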
@@ -140,4 +140,4 @@ QuPath extension : [https://github.com/ksugar/qupath-extension-sam](https://gith
 Original repositories : [samapi](https://github.com/ksugar/samapi), [SAM](https://github.com/facebookresearch/segment-anything)
 Reference papers : [doi:10.1101/2023.06.13.544786](https://doi.org/10.1101/2023.06.13.544786), [doi:10.48550/arXiv.2304.02643](https://doi.org/10.48550/arXiv.2304.02643)

-This is more an interactive annotation tool than a fully automatic segmentation algorithm.
+This is more an interactive annotation tool than a fully automatic segmentation algorithm.
