Quality control is a collection of **evaluations** based on sets of **metrics**.

`QCEvaluation`s should be generated during pipelines: before raw data upload, during processing, and during analysis by researchers.
The overall `QualityControl`, each `QCEvaluation`, and each `QCMetric` can be evaluated to get an `aind_data_schema.quality_control.State`, which indicates whether the overall QC, evaluation, or metric passes, fails, or is in a pending state waiting for manual annotation.
The state of an evaluation is set automatically to the lowest of its metrics' states: a single failed metric sets the entire evaluation to fail, while a single pending metric (with all other metrics passing) sets the entire evaluation to pending. The optional setting `QCEvaluation.allow_failed_metrics` lets you ignore failures, which can be useful when an evaluation is not critical for quality control.
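
This aggregation rule can be sketched in plain Python. This is an illustrative sketch of the documented behavior only, not the library's implementation; the `SEVERITY` table and `evaluation_status` helper are made-up names:

```python
# Illustrative sketch: an evaluation's state is the "lowest" of its metrics'
# states, treating FAIL as worse than PENDING, and PENDING as worse than PASS.
SEVERITY = {"PASS": 0, "PENDING": 1, "FAIL": 2}

def evaluation_status(metric_statuses, allow_failed_metrics=False):
    """Aggregate a list of metric statuses into one evaluation status."""
    if allow_failed_metrics:
        # Ignore failures entirely when the evaluation is not critical.
        metric_statuses = [s for s in metric_statuses if s != "FAIL"] or ["PASS"]
    # The worst (highest-severity) metric status wins.
    return max(metric_statuses, key=SEVERITY.get)

evaluation_status(["PASS", "PENDING", "PASS"])  # -> "PENDING"
evaluation_status(["PASS", "FAIL", "PENDING"])  # -> "FAIL"
evaluation_status(["PASS", "FAIL"], allow_failed_metrics=True)  # -> "PASS"
```
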
## Details

Each `QCMetric` is a single value or set of values that can be computed or observed.

`QCMetric`s have a `Status`. The `Status` should depend directly on the `QCMetric.value`, either via a simple function ("value > 5") or a qualitative rule ("Field of view includes visual areas"). The `QCMetric.description` field should be used to describe the rule used to set the status. Metrics can be evaluated multiple times, in which case the new status should be appended to the `QCMetric.status_history`.
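
As a concrete example, a quantitative rule like "value > 5" might be applied and recorded as below. The dict-based metric and the `set_status` helper are hypothetical stand-ins for illustration, not the `aind-data-schema` API:

```python
from datetime import datetime, timezone

def set_status(metric, rule=lambda v: v > 5):
    """Apply a simple quantitative rule (here "value > 5") to a metric's
    value and append the resulting status to its history."""
    status = "PASS" if rule(metric["value"]) else "FAIL"
    metric.setdefault("status_history", []).append(
        {"status": status, "timestamp": datetime.now(timezone.utc)}
    )
    return status

metric = {
    "name": "example_metric",          # hypothetical metric
    "value": 7,
    "description": "Pass when value > 5",
}
set_status(metric)  # -> "PASS"; one timestamped entry appended to status_history
```

Re-running `set_status` on the same metric appends another entry rather than overwriting the previous one, mirroring how `QCMetric.status_history` accumulates statuses.
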
**Q: What is a metric reference?**
Metrics should include a `QCMetric.reference`. References are intended to be publicly accessible images, figures, combined figures with multiple panels, or videos that support the metric or provide information necessary for manual annotation of a metric's status.
See the AIND section for specifics about how references are rendered in the QC Portal.
**Q: What are the status options for metrics?**
In our quality control, a metric's status is always `PASS`, `PENDING` (waiting for manual annotation), or `FAIL`.
We enforce this minimal set of states to prevent ambiguity and make it easier to build tools that can interpret the status of a data asset.
## Details for AIND users
Instructions for uploading QC for viewing in the QC portal can be found [here](https://github.com/AllenNeuralDynamics/aind-qc-portal).
### Multi-asset QC
You should follow the preferred/alternate workflows described above.
**Q: I want to be able to store data about each of the evaluated assets in this metric**
Take a look at the `MultiAssetMetric` class in `aind-qc-portal-schema`. It allows you to pass a list of values which will be matched up with the `evaluated_assets` names. You can also include options which will appear as dropdowns or checkboxes.
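
For illustration, the positional matching between the list of values and `evaluated_assets` works like zipping two lists. The asset names and values here are made up:

```python
# Hypothetical asset names and per-asset values; MultiAssetMetric pairs
# them up by position, the way zip() does.
evaluated_assets = ["asset_a", "asset_b", "asset_c"]
values = [0.91, 0.87, 0.95]

per_asset = dict(zip(evaluated_assets, values))
print(per_asset)  # {'asset_a': 0.91, 'asset_b': 0.87, 'asset_c': 0.95}
```
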
"notes": "Moved Y to avoid blood vessel, X to avoid edge. Mouse made some noise during the recording with a sudden shift in signals. Lots of motion. Maybe some implant motion.",
40
-
"primary_targeted_structure": "LGd",
40
+
"primary_targeted_structure": {
41
+
"atlas": "CCFv3",
42
+
"name": "Dorsal part of the lateral geniculate complex",
"notes": "Trouble penetrating. Lots of compression, needed to move probe. Small amount of surface bleeding/bruising. Initial Target: X;10070.3\tY:7476.6",
"notes": "Moved Y to avoid blood vessel, X to avoid edge. Mouse made some noise during the recording with a sudden shift in signals. Lots of motion. Maybe some implant motion.",
175
-
"primary_targeted_structure": "LGd",
185
+
"primary_targeted_structure": {
186
+
"atlas": "CCFv3",
187
+
"name": "Dorsal part of the lateral geniculate complex",
"notes": "Trouble penetrating. Lots of compression, needed to move probe. Small amount of surface bleeding/bruising. Initial Target: X;10070.3\tY:7476.6",
0 commit comments