Data Sharing between Metrics #560
Unanswered
cemde asked this question in Classification
Replies: 1 comment
Hi @cemde, sadly we do not have a generic way of sharing data between different metrics. We actually have an old issue tracking this (#143), but we have still not come up with a good solution. That said, you can get what you want by subclassing and computing several quantities from the same accumulated state:

```python
from torchmetrics import CalibrationError
from torchmetrics.functional.classification.calibration_error import _ce_compute
from torchmetrics.utilities.data import dim_zero_cat


class ManyCalibrationMetrics(CalibrationError):
    def compute(self):
        # Concatenate the states accumulated by CalibrationError.update
        confidences = dim_zero_cat(self.confidences)
        accuracies = dim_zero_cat(self.accuracies)
        return {
            "l1": _ce_compute(confidences, accuracies, self.bin_boundaries, norm="l1"),
            "max": _ce_compute(confidences, accuracies, self.bin_boundaries, norm="max"),
            # ... whatever else you need
        }
```

This metric will only keep one copy of the accumulated state (the confidences and accuracies), while computing as many calibration errors from it as you need.
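The state-sharing pattern behind that answer can also be sketched in plain Python, without the torchmetrics machinery. Everything below (the class name, the binning scheme, the `update`/`compute` signatures) is an illustrative assumption, not the library's implementation: `update` accumulates confidences and correctness flags once, and `compute` derives both the l1-norm (expected) and max-norm calibration error from that single state.

```python
class SharedStateCalibration:
    """Sketch of the state-sharing idea: accumulate confidences and
    accuracies once in update(), then derive several calibration
    errors from that single state in compute().
    Illustrative only -- not the torchmetrics implementation."""

    def __init__(self, n_bins=10):
        self.n_bins = n_bins
        self.confidences = []   # predicted max probability per sample
        self.accuracies = []    # 1.0 if the prediction was correct, else 0.0

    def update(self, confidences, correct):
        # confidences: iterable of floats in [0, 1]; correct: iterable of bools
        self.confidences.extend(confidences)
        self.accuracies.extend(1.0 if c else 0.0 for c in correct)

    def compute(self):
        n = len(self.confidences)
        width = 1.0 / self.n_bins
        ce_l1, ce_max = 0.0, 0.0
        for b in range(self.n_bins):
            lo, hi = b * width, (b + 1) * width
            idx = [i for i, c in enumerate(self.confidences) if lo < c <= hi]
            if not idx:
                continue
            avg_conf = sum(self.confidences[i] for i in idx) / len(idx)
            avg_acc = sum(self.accuracies[i] for i in idx) / len(idx)
            gap = abs(avg_acc - avg_conf)
            ce_l1 += gap * len(idx) / n    # bin-weighted gap (l1 / ECE)
            ce_max = max(ce_max, gap)      # worst-bin gap (max norm)
        return {"l1": ce_l1, "max": ce_max}
```

The key point, as in the torchmetrics subclass above, is that the (potentially large) per-sample state exists exactly once, while `compute` remains cheap to extend with additional norms.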
I have multiple metrics that share a lot of the data needed to compute them. For example, I want to calculate the Calibration Error (in the l1 and infinity norms), the Adaptive Calibration Error, and the Statistical Calibration Error. Each of these has advantages and disadvantages, which is why I need to look at all of them. Is there an elegant way to share data from each `update` call and then only compute the last bits individually? That would save a lot of memory.

Alternatively, it might be nice if I could return a dict of metrics instead of a Tensor. For instance, if you have metrics with certain decompositions that you are interested in:
return {'metricname_componentA': a, 'metricname_componentB': b, 'metricname_total': a+b }
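That dict-return idea can be sketched without any torchmetrics specifics. The class and key names below are hypothetical, mirroring the snippet above: `update` accumulates two components, and `compute` returns each component plus their total as a dict.

```python
class DecomposedMetric:
    """Sketch of a metric whose compute() returns a dict of named
    components plus their total, instead of a single scalar.
    Names are illustrative, not a torchmetrics API."""

    def __init__(self):
        self.component_a = 0.0
        self.component_b = 0.0

    def update(self, a, b):
        # Accumulate both components from each batch.
        self.component_a += a
        self.component_b += b

    def compute(self):
        return {
            "metricname_componentA": self.component_a,
            "metricname_componentB": self.component_b,
            "metricname_total": self.component_a + self.component_b,
        }
```

Note that the accepted workaround in this thread already returns a dict from `compute`, so the two ideas combine naturally: one shared state, one dict of derived values.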