Compute hierarchical precision score.

Parameters
----------
y_true : np.array of shape (n_samples, n_levels)
    Ground truth (correct) labels.
y_pred : np.array of shape (n_samples, n_levels)
    Predicted labels, as returned by a classifier.
average : {"micro", "macro"}, str, default="micro"
    This parameter determines the type of averaging performed during the computation:

    - `micro`: The precision is computed by summing over all individual instances, :math:`\displaystyle{hP = \frac{\sum_{i=1}^{n}|\alpha_i \cap \beta_i|}{\sum_{i=1}^{n}|\alpha_i|}}`, where :math:`\alpha_i` is the set consisting of the most specific classes predicted for test example :math:`i` and all their ancestor classes, while :math:`\beta_i` is the set containing the true most specific classes of test example :math:`i` and all their ancestors, with summations computed over all test examples (see the sketch below).
    - `macro`: The precision is computed for each instance and then averaged, :math:`\displaystyle{hP = \frac{\sum_{i=1}^{n}hP_{i}}{n}}`, where :math:`hP_{i} = \frac{|\alpha_i \cap \beta_i|}{|\alpha_i|}` is the precision of test example :math:`i`, with :math:`\alpha_i` the set consisting of the most specific classes predicted for test example :math:`i` and all their ancestor classes, and :math:`\beta_i` the set containing the true most specific classes of test example :math:`i` and all their ancestors.

Returns
-------
precision : float
    What proportion of positive identifications was actually correct?
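
To make the micro-averaged definition above concrete, here is a minimal sketch, not the library's implementation, that evaluates :math:`hP` directly from the formula. It assumes each row of ``y_true`` and ``y_pred`` lists the classes on the path from the root down to the most specific class, so a row already contains all ancestors; the helper name ``hierarchical_precision_micro`` and the example labels are made up for illustration.

.. code-block:: python

    import numpy as np

    def hierarchical_precision_micro(y_true, y_pred):
        """Micro-averaged hierarchical precision: sum_i |alpha_i & beta_i| / sum_i |alpha_i|."""
        overlap, predicted = 0, 0
        for true_row, pred_row in zip(y_true, y_pred):
            alpha = set(pred_row)  # most specific predicted classes plus their ancestors
            beta = set(true_row)   # true most specific classes plus their ancestors
            overlap += len(alpha & beta)
            predicted += len(alpha)
        return overlap / predicted

    # Two samples, two hierarchy levels (root -> leaf); the second leaf is predicted wrongly.
    y_true = np.array([["animal", "dog"], ["animal", "cat"]])
    y_pred = np.array([["animal", "dog"], ["animal", "dog"]])
    print(hierarchical_precision_micro(y_true, y_pred))  # (2 + 1) / (2 + 2) = 0.75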
average : {"micro", "macro"}, str, default="micro"
    This parameter determines the type of averaging performed during the computation:

    - `micro`: The recall is computed by summing over all individual instances, :math:`\displaystyle{hR = \frac{\sum_{i=1}^{n}|\alpha_i \cap \beta_i|}{\sum_{i=1}^{n}|\beta_i|}}`, where :math:`\alpha_i` is the set consisting of the most specific classes predicted for test example :math:`i` and all their ancestor classes, while :math:`\beta_i` is the set containing the true most specific classes of test example :math:`i` and all their ancestors, with summations computed over all test examples (see the sketch below).
    - `macro`: The recall is computed for each instance and then averaged, :math:`\displaystyle{hR = \frac{\sum_{i=1}^{n}hR_{i}}{n}}`, where :math:`hR_{i} = \frac{|\alpha_i \cap \beta_i|}{|\beta_i|}` is the recall of test example :math:`i`, with :math:`\alpha_i` the set consisting of the most specific classes predicted for test example :math:`i` and all their ancestor classes, and :math:`\beta_i` the set containing the true most specific classes of test example :math:`i` and all their ancestors.

Returns
-------
recall : float
    What proportion of actual positives was identified correctly?
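
A matching sketch for the micro-averaged hierarchical recall, under the same assumed input convention as the precision example above: only the denominator changes, from the size of each predicted set :math:`\alpha_i` to the size of each true set :math:`\beta_i`.

.. code-block:: python

    import numpy as np

    def hierarchical_recall_micro(y_true, y_pred):
        """Micro-averaged hierarchical recall: sum_i |alpha_i & beta_i| / sum_i |beta_i|."""
        overlap, actual = 0, 0
        for true_row, pred_row in zip(y_true, y_pred):
            alpha = set(pred_row)  # most specific predicted classes plus their ancestors
            beta = set(true_row)   # true most specific classes plus their ancestors
            overlap += len(alpha & beta)
            actual += len(beta)
        return overlap / actual

    y_true = np.array([["animal", "dog"], ["animal", "cat"]])
    y_pred = np.array([["animal", "dog"], ["animal", "dog"]])
    print(hierarchical_recall_micro(y_true, y_pred))  # (2 + 1) / (2 + 2) = 0.75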
average : {"micro", "macro"}, str, default="micro"
    This parameter determines the type of averaging performed during the computation:

    - `micro`: The f-score is computed by summing over all individual instances, :math:`\displaystyle{hF = \frac{2 \times hP \times hR}{hP + hR}}`, where :math:`hP` is the micro-averaged hierarchical precision and :math:`hR` is the micro-averaged hierarchical recall (see the sketch below).
    - `macro`: The f-score is computed for each instance and then averaged, :math:`\displaystyle{hF = \frac{\sum_{i=1}^{n}hF_{i}}{n}}`, where :math:`hF_{i}` is the f-score of test example :math:`i`, computed from its per-example precision and recall over :math:`\alpha_i`, the set consisting of the most specific classes predicted for test example :math:`i` and all their ancestor classes, and :math:`\beta_i`, the set containing the true most specific classes of test example :math:`i` and all their ancestors.
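
The micro-averaged hierarchical f-score is simply the harmonic mean of the two quantities above. The sketch below assumes the hypothetical helpers ``hierarchical_precision_micro`` and ``hierarchical_recall_micro`` from the previous examples are in scope, together with the same ``y_true`` and ``y_pred`` arrays.

.. code-block:: python

    def hierarchical_f_score_micro(y_true, y_pred):
        """Micro-averaged hierarchical f-score: harmonic mean of hP and hR."""
        hp = hierarchical_precision_micro(y_true, y_pred)
        hr = hierarchical_recall_micro(y_true, y_pred)
        return 2 * hp * hr / (hp + hr)

    # With hP = hR = 0.75 from the earlier examples, the harmonic mean is also 0.75.
    print(hierarchical_f_score_micro(y_true, y_pred))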