
Commit cc16130

Update compute.py, io.py, fibers_coverage.ipynb, and 5 more files
1 parent 7a2a9c4 commit cc16130


8 files changed: +57 −45 lines changed


README.md

Lines changed: 1 addition & 1 deletion

@@ -27,7 +27,7 @@ Steps 1-3 below need to be performed only once. If Anaconda or conda is already
 ```
 6. (Optional) Download the latest release from [here](https://github.com/TeamNCMC/cuisto/releases/latest) (choose "Source code (zip)) and unzip it on your computer. You can copy the `scripts/` folder to get access to the QuPath and Python scripts. You can check the notebooks in `docs/demo_notebooks` as well !
 
-The `cuisto` will be then available in Python from anywhere as long as the `cuisto-env` conda environment is activated. You can get started by looking and using the [Jupyter notebooks](#using-notebooks).
+The `cuisto` package will be then available in Python from anywhere as long as the `cuisto-env` conda environment is activated. You can get started by looking and using the [Jupyter notebooks](#using-notebooks).
 
 
 For more complete installation instructions, see the [documentation](https://teamncmc.github.io/cuisto/main-getting-started.html#slow-start).

cuisto/compute.py

Lines changed: 15 additions & 5 deletions

@@ -18,11 +18,19 @@ def get_regions_metrics(
     metrics_names: dict,
 ) -> pd.DataFrame:
     """
-    Get a new DataFrame with cumulated axons segments length in each brain regions.
+    Derive metrics from `meas_base_name`.
 
-    This is the quantification per brain regions for fibers-like objects, eg. axons. The
-    returned DataFrame has columns "cum. length µm", "cum. length mm", "density µm^-1",
-    "density mm^-1", "coverage index".
+    The measurements columns of `df_annotations` must be properly formatted, eg :
+    object_type: channel meas_base_name
+
+    Derived metrics include :
+    - raw measurement
+    - areal density
+    - relative raw measurement
+    - relative density
+
+    Supports objects that are counted (polygons or points) and objects whose length is
+    measured (fibers-like).
 
     Parameters
     ----------
@@ -34,7 +42,9 @@ def get_regions_metrics(
     channel_names : dict
         Map between original channel names to something else.
     meas_base_name : str
+        Base measurement name in the input DataFrame used to derive metrics.
     metrics_names : dict
+        Maps hardcoded measurement names to display names.
 
     Returns
     -------
@@ -86,7 +96,7 @@ def get_regions_metrics(
     df_regions["Area mm^2"] = df_regions["Area µm^2"] / 1e6
 
     # prepare metrics
-    if "µm" in meas_base_name:
+    if meas_base_name.endswith("µm"):
         # fibers : convert to mm
         cols_to_convert = pd.Index([col for col in cols_colors if "µm" in col])
         df_regions[cols_to_convert.str.replace("µm", "mm")] = (
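The new `endswith` guard in this hunk changes which measurements get a mm-converted column. Below is a minimal, self-contained sketch of that conversion branch; the DataFrame and its column names are illustrative, not cuisto's real schema:

```python
import pandas as pd

# Toy annotation table; column names are illustrative, not cuisto's actual output
df_regions = pd.DataFrame(
    {"Name": ["RN", "MB"], "marker: length µm": [1500.0, 300.0]}
)
meas_base_name = "length µm"
cols_colors = df_regions.columns.drop("Name")

# Same pattern as the patched line: only convert when the name *ends* with µm,
# so measurements that merely contain "µm" elsewhere are no longer converted
if meas_base_name.endswith("µm"):
    cols_to_convert = pd.Index([col for col in cols_colors if "µm" in col])
    df_regions[cols_to_convert.str.replace("µm", "mm")] = (
        df_regions[cols_to_convert] / 1000  # 1 mm = 1000 µm
    )

print(df_regions["marker: length mm"].tolist())  # [1.5, 0.3]
```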

cuisto/io.py

Lines changed: 2 additions & 2 deletions

@@ -117,8 +117,8 @@ def cat_json_dir(
     """
     Scans a directory for json files and concatenate them in a single DataFrame.
 
-    The json files must be generated with 'pipelineImportExport.groovy" from a QuPath
-    project.
+    The json files must be generated with 'pipelineImportExport.groovy" or
+    'exportFibersAtlasCoordinates.groovy' from a QuPath project.
 
     Parameters
     ----------
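The behaviour this docstring describes — scan a folder, read each JSON file, stack everything into one table — can be sketched as below. The function name, the per-file record layout, and the `pd.json_normalize` call are assumptions for illustration, not `cat_json_dir`'s actual implementation:

```python
import json
from pathlib import Path

import pandas as pd


def cat_json_dir_sketch(directory: str) -> pd.DataFrame:
    """Concatenate every .json file in `directory` into one DataFrame (sketch)."""
    frames = []
    for json_file in sorted(Path(directory).glob("*.json")):
        with open(json_file, encoding="utf-8") as fh:
            records = json.load(fh)  # assumed: each file holds a list of records
        frames.append(pd.json_normalize(records))
    if not frames:
        return pd.DataFrame()  # no json files found
    return pd.concat(frames, ignore_index=True)
```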

docs/demo_notebooks/fibers_coverage.ipynb

Lines changed: 21 additions & 27 deletions
Large diffs are not rendered by default.

docs/guide-prepare-qupath.md

Lines changed: 1 addition & 1 deletion

@@ -49,7 +49,7 @@ Those information are used to perform the quantification in each Annotation with
 While you're free to add any measurements as long as they follow [the requirements](#qupath-requirements), keep in mind that for atlas regions quantification, `cuisto` will only compute, pool and average the following metrics :
 
 - the base measurement itself
-- if "µm" is contained in the measurement name, it will also be converted to mm (\(\div\)1000)
+- if the measurement name finishes with "µm", it will also be converted to mm (\(\div\)1000)
 - the base measurement divided by the region area in µm² (density in something/µm²)
 - the base measurement divided by the region area in mm² (density in something/mm²)
 - the squared base measurement divided by the region area in µm² (could be an index, in weird units...)
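For a single region, the metrics enumerated in this documentation amount to a few lines of arithmetic. A sketch with made-up numbers (the variable names are illustrative):

```python
# Metrics for one region; values and names are made up for illustration
base = 1500.0      # base measurement, e.g. cumulated fiber length in µm
area_um2 = 2.0e6   # region area in µm²

base_mm = base / 1000                  # µm → mm, only if the name ends with "µm"
density_um = base / area_um2           # density in something/µm²
density_mm = base / (area_um2 / 1e6)   # density in something/mm²
index = base**2 / area_um2             # squared measurement / area ("index")

print(base_mm, density_um, density_mm, index)
```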

docs/guide-qupath-objects.md

Lines changed: 15 additions & 7 deletions

@@ -46,7 +46,7 @@ Then, choose the following options :
 : Might be useful to check if the images are read correctly (mostly for CZI files).
 
 ## Detect objects
-To use be able to use `cuisto` directly after exporting QuPath data, there is a number of requirements and limitations regarding the QuPath Annotations and Detections names and classifications. However, the guides below should create objects with properly formatted data. See more about the requirements on [this page](guide-prepare-qupath.md).
+To be able to use `cuisto` directly after exporting QuPath data, there is a number of requirements and limitations regarding the QuPath Annotations and Detections names and classifications. However, the guides below should create objects with properly formatted data. See more about the requirements on [this page](guide-prepare-qupath.md).
 
 ### Built-in cell detection
 
@@ -55,14 +55,14 @@ QuPath has a built-in cell detection feature, available in `Analyze > Cell detec
 Briefly, this uses a watershed algorithm to find bright spots and can perform a cell expansion to estimate the full cell shape based on the detected nuclei. Therefore, this works best to segment nuclei but one can expect good performance for cells as well, depending on the imaging and staining conditions.
 
 !!! tip
-    In `scripts/qupath-utils/segmentation`, there is `watershedDetectionFilters.groovy` which uses this feature from a script. It further allows you to filter out detected cells based on shape measurements as well as fluorescence itensity in several channels and cell compartments.
+    In `scripts/qupath-utils/segmentation`, there is `watershedDetectionFilters.groovy` which uses this feature from a script. It further allows you to filter out detected cells based on shape measurements as well as fluorescence intensity in several channels and cell compartments.
 
 ### Pixel classifier
 Another very powerful and versatile way to segment cells is through machine learning. Note the term "machine" and not "deep" as it relies on statistics theory from the 1980s. QuPath provides an user-friendly interface to do that, similar to what [ilastik](https://www.ilastik.org/) provides.
 
 The general idea is to train a model to classify every pixel as a signal or as background. You can find good resources on how to procede in the [official documentation](https://qupath.readthedocs.io/en/stable/docs/tutorials/pixel_classification.html) and some additionnal tips and tutorials on Michael Neslon's blog ([here](https://www.imagescientist.com/mpx-pixelclassifier) and [here](https://www.imagescientist.com/brightfield-4-pixel-classifier)).
 
-Specifically, you will manually annotate some pixels of objects of interest and background. Then, you will apply some image processing filters (gaussian blur, laplacian...) to reveal specific features in your images (shapes, textures...). Finally, the pixel classifier will fit a model on those pixel values, so that it will be able to predict if a pixel, given the values with the different filters you applied, belongs to an object of interest or to the background. Even better, the pixels are *classified* in arbitrary classes *you* define : it supports any number of classes. In other word, one can train a model to classify pixels in a "background", "marker1", "marker2", "marker3"... classes, depending on their fluorescence color and intensity.
+Specifically, you will manually annotate some pixels of objects of interest and background. Then, you will apply some image processing filters (gaussian blur, laplacian...) to reveal specific features in your images (shapes, textures...). Finally, the pixel classifier will fit a model on those pixel values, so that it will be able to predict if a pixel, given the values with the different filters you applied, belongs to an object of interest or to the background. Even better, the pixels are *classified* in arbitrary classes *you* define : it supports any number of classes. In other word, one can train a model to classify pixels in "background", "marker1", "marker2", "marker3"... classes, depending on their fluorescence color and intensity.
 
 This is done in an intuitive GUI with live predictions to get an instant feedback on the effects of the filters and manual annotations.
 
@@ -77,15 +77,15 @@ First and foremost, you should use a QuPath project dedicated to the training of
 6. In `Advanced settings`, check `Reweight samples` to help make sure a classification is not over-represented.
 7. Modify the different parameters :
     + `Classifier` : typically, `RTrees` or `ANN_MLP`. This can be changed dynamically afterwards to see which works best for you.
-    + `Resolution` : this is the pixel size used. This is a trade-off between accuracy and speed. If your objects are only composed of a few pixels, you'll want the full resolution, for big objects reducing the resolution will be faster.
+    + `Resolution` : this is the pixel size used. This is a trade-off between accuracy and speed. If your objects are only composed of a few pixels, full resolution will be needed, for big objects decreasing the resolution (bigger pixel size) will be faster.
     + `Features` : this is the core of the process -- where you choose the filters. In `Edit`, you'll need to choose :
         - The fluorescence channels
        - The scales, eg. the size of the filters applied to the image. The bigger, the coarser the filter is. Again, this will depend on the size of the objects you want to segment.
        - The features themselves, eg. the filters applied to your images before feeding the pixel values to the model. For starters, you can select them all to see what they look like.
    + `Output` :
        - `Classification` : QuPath will directly classify the pixels. Use that to [create objects directly from the pixel classifier](#built-in-create-objects) within QuPath.
        - `Probability` : this will output an image where each pixel is its probability to belong to each of the classifications. This is useful to [create objects externally](#probability-map-segmentation).
-8. In the bottom-right corner of the pixel classifier window, you can select to display each filters individually. Then in the QuPath main window, hitting ++c++ will switch the view to appreciate what the filter looks like. Identify the ones that makes your objects the most distinct from the background as possible. Switch back to `Show classification` once you begin to make annotations.
+8. In the bottom-right corner of the pixel classifier window, you can select to display each filters individually. Then in the QuPath main window, hitting ++c++ will switch the view to appreciate what the filter looks like. Identify the ones that make your objects the most distinct from the background as possible. Switch back to `Show classification` once you begin to make annotations.
 9. Begin to annotate ! Use the Polyline annotation tool (++v++) to classify **some** pixels belonging to an object and **some** pixels belonging to the background across your images.
 
 !!! tip
@@ -98,8 +98,16 @@ First and foremost, you should use a QuPath project dedicated to the training of
 
 11. Once you're done, give your classifier a name in the text box in the bottom and save it. It will be stored as a [JSON](tips-formats.md#json-and-geojson-files) file in the `classifiers` folder of the QuPath project. This file can be imported in your other QuPath projects.
 
+To import the classifier in the actual QuPath project, head to the `Classify > Pixel classification > Load pixel classifier` menu, three-dotted menu and `Import from file`. Upon import, several actions are available : create objects, measure or classify. Alternatively, the prediction image (where each pixel is the probability for that pixel to belong to each of the classifications) can be segmented externally.
+
 #### Built-in create objects
-Once you imported your model JSON file (`Classify > Pixel classification > Load pixel classifier`, three-dotted menu and `Import from file`), you can create objects out of it, measure the surface occupied by classified pixels in each annotation or classify existing detections based on the prediction at their centroid.
+The `Create objects` action will ask what where the objects should be created. If ABBA is being used, selecting "All annotations" will create objects in *each* annotation, which is not advised : because of the hierarchy, some annotations are *Parent* annotations, thus objects will be created multiple times (eg. detections will be created in "RN", "MBMot", "MB", "grey", "root" *and* "Root"). When using regions organized in a hierarchy, use "Full image" instead. Then some options are to be selected, including :
+
+- New object type : typically detections
+- Minimum object size : objects smaller than this will be discarded,
+- Minimum hole size : holes within a single object smaller than this will be filled,
+- Split objects : multiple detections will be split into multiple objects, otherwise all detections will be a single object (checking this is recommended),
+- Delete existing objects : this will delete *everything*, including annotations.
 
 !!! tip
     In `scripts/qupath-utils/segmentation`, there is a `createDetectionsFromPixelClassifier.groovy` script to batch-process your project.
@@ -127,7 +135,7 @@ Those measurements can then be used in `cuisto`, using "area µm^2" as the "base
 ##### Classify
 This classifies existing detections based on the prediction at their centroid. A pixel classifier classifies every single pixel in your image into the classes it was trained on. Any object has a centroid, eg. a center of mass, which corresponds to a given pixel. The "Classify" button will classify a detection as the classification based on the classification predicted by the classifier of the pixel located at the detection centroid.
 
-A typical use-case would be to create detections, for examples "cells stained in the DsRed channel", with a first pixel classifier (or any other means). Then, I would like to classify those cells as "positive" if they have a staining revealed in the EGFP channel, and as "negative" otherwise. To do this, I would train a second pixel classifier that simply would classify pixels to "Cells: positive" if they have a significant amount of green fluorescence, and "Cells: negative" otherwise. Note that in this case, it does not matter if the pixels do not actually belong to a cell, as it will only be used to classify *existing* detections - we do not use the Ignore\* class. Subsequently, I would import the second pixel classifier and use the "Classify" button.
+A typical use-case would be to create detections, for examples "cells stained in the DsRed channel", with a first pixel classifier (or any other means). Then, the detected cells need to be classified : I want to classify them as "positive" if they have a staining revealed in the EGFP channel, and as "negative" otherwise. To do this, I would train a second pixel classifier that simply classifies pixels to "Cells: positive" if they have a significant amount of green fluorescence, and "Cells: negative" otherwise. Note that in this case, it does not matter if the pixels do not actually belong to a cell, as it will only be used to classify *existing* detections - we do not use the Ignore\* class. Subsequently, I would import the second pixel classifier and use the "Classify" button.
 
 !!! info inline end
     Similar results could be achieved with an *object classifier* instead of a pixel classifier but will not be covered here. You can check the [QuPath tutorial](https://qupath.readthedocs.io/en/stable/docs/tutorials/cell_classification.html#calculate-additional-features) to see how to procede.
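The centroid-lookup logic described in this hunk can be sketched outside QuPath. The prediction array and detection centroids below are hypothetical stand-ins for the classifier output and QuPath detections:

```python
import numpy as np

# Hypothetical prediction image: 0 = "Cells: negative", 1 = "Cells: positive"
classes = ["Cells: negative", "Cells: positive"]
prediction = np.zeros((100, 100), dtype=int)
prediction[40:60, 40:60] = 1  # a patch with significant green fluorescence

# Hypothetical detections, each reduced to its centroid (row, col)
centroids = [(50, 50), (10, 10)]

# "Classify" logic: assign each detection the class predicted at its centroid
labels = [classes[prediction[row, col]] for row, col in centroids]
print(labels)  # ['Cells: positive', 'Cells: negative']
```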

docs/guide-register-abba.md

Lines changed: 1 addition & 1 deletion

@@ -111,7 +111,7 @@ To do so :
 5. Add as many landmarks as needed, when you're done, find the Fiji window called "Big Warp registration" that opened at the beginning and click `OK`.
 
 !!! tip "Important remarks and tips"
-    + A landmark is a location where you said "this location correspond to this one". Therefore, BigWarp is not allowed to move this particular location. Everywhere else, it is free to transform the image without any restrictions, including the borders. Thus, it is a good idea to **delimit the coarse contour of the brain with landmarks** to constrain the registration.
+    + A landmark is a location where you said "this location correspond to this one". Therefore, BigWarp is not allowed to move this particular location. Everywhere else, it is free to transform the image without any restrictions, including the borders. Thus, it is a good idea to **delimit the coarse contour of the brain with landmarks** to constrain the deformations.
     + ++left-button++ without holding ++ctrl++ will place a landmark in the fixed image only, without pair, and BigWarp won't like it. To **delete landmarks**, head to the "Landmarks" window that lists all of them. They highlight in the viewer upon selection. Hit ++del++ to delete one. Alternatively, click on it on the viewer and hit ++del++.
 
 #### From a previous registration

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [project]
 name = "cuisto"
-version = "2025.01.10"
+version = "2025.01.12"
 authors = [{ name = "Guillaume Le Goc", email = "g.legoc@posteo.org" }]
 description = "Quantification of objects in histological slices"
 readme = "README.md"
