I'm looking to create per-class precision-recall curves using the MeanAveragePrecision API. Precision is extracted easily enough with something like the following, but is recall missing a dimension? My understanding is that a PR curve is generated by computing precision-recall pairs while varying the confidence threshold over [0.0, 1.0].
```python
p = metrics.compute()["precision"]  # [T, R, K, A, M] with extended_summary=True
p_pairs = p[iou_idx, :, :, 0, 2]    # -> [num_recall_thresholds, num_classes]
r = metrics.compute()["recall"]     # [T, K, A, M] -- no recall-threshold axis
r_pairs = ...                       # the R dimension appears to be missing
```
My best guess right now is that `recall_thresholds` supplies the recall values of the PR pairs, with precision being interpolated at those thresholds rather than recall being computed explicitly.
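That guess matches how COCO-style evaluation (which MeanAveragePrecision follows) works: precision is interpolated at a fixed grid of recall thresholds (by default 101 values from 0.00 to 1.00), so the `recall` tensor has no R axis and holds only the final recall per (IoU, class, area, max-det) cell. The recall-threshold grid itself is the x-axis of the curve. A minimal sketch of the indexing, using NumPy stand-ins for the tensors returned by `compute()` with `extended_summary=True` (shapes follow the COCO convention `[T, R, K, A, M]`; `iou_idx`, `area_idx`, and `maxdet_idx` are illustrative names, not API parameters):

```python
import numpy as np

# Stand-ins for metrics.compute()["precision"] / ["recall"]:
# T = IoU thresholds, R = recall thresholds, K = classes,
# A = area ranges, M = max-detection settings.
T, R, K, A, M = 10, 101, 3, 4, 3
rng = np.random.default_rng(0)
precision = rng.random((T, R, K, A, M))  # interpolated precision values
recall = rng.random((T, K, A, M))        # final recall only -- no R axis

# The fixed recall grid COCO interpolates precision at.
recall_thresholds = np.linspace(0.0, 1.0, R)

iou_idx = 0     # e.g. IoU = 0.50
area_idx = 0    # "all" areas
maxdet_idx = 2  # largest max-detections setting (typically 100)

# Per-class PR pairs: x = recall_thresholds, y = interpolated precision.
p_pairs = precision[iou_idx, :, :, area_idx, maxdet_idx]  # [R, K]

# Recall yields one scalar per class here, not a curve.
final_recall = recall[iou_idx, :, area_idx, maxdet_idx]   # [K]

for cls in range(K):
    x = recall_thresholds   # recall axis of the PR curve for this class
    y = p_pairs[:, cls]     # precision at each recall threshold
    # plt.plot(x, y) would draw the per-class PR curve.
```

So rather than looking for a missing `r_pairs` tensor, plot `p_pairs[:, cls]` against `recall_thresholds` directly; `final_recall` only tells you where each class's curve effectively ends.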