Data Sharing between Metrics #560
I have multiple metrics that share a lot of data needed to compute them. For example, I wish to calculate the Calibration Error (in l1 and infinity norm), the Adaptive Calibration Error, and the Statistical Calibration Error. All of these have advantages and disadvantages, which is why I need to look at all of them. Is there an elegant way to share data between these metrics?
Replies: 1 comment
Hi @cemde,

Sadly we do not have a generic way of sharing data between different metrics. We actually have an old issue tracking this (#143), however we have still not come up with any good solution.

In your specific case, maybe the best way forward would be to create a custom metric that subclasses from `CalibrationError` in this way:

```python
from torchmetrics import CalibrationError
from torchmetrics.functional.classification.calibration_error import _ce_compute
from torchmetrics.utilities.data import dim_zero_cat


class ManyCalibrationMetrics(CalibrationError):
    def compute(self):
        # Concatenate the states accumulated over all update() calls once,
        # then reuse them for every norm.
        confidences = dim_zero_cat(self.confidences)
        accuracies = dim_zero_cat(self.accuracies)
        return {
            "l1": _ce_compute(confidences, accuracies, self.bin_boundaries, norm="l1"),
            "max": _ce_compute(confidences, accuracies, self.bin_boundaries, norm="max"),
            # ... whatever else you need
        }
```

This metric will only keep one copy of the underlying state.
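For illustration, here is a minimal, self-contained sketch of the general pattern this subclassing approach relies on (hypothetical class and stand-in formulas, plain Python rather than torchmetrics): accumulate the shared state once in `update()`, then derive all related quantities from that single copy in one `compute()` call.

```python
class SharedStateMetric:
    """Sketch: one set of accumulated state, several derived results."""

    def __init__(self):
        # These lists play the role of torchmetrics states such as
        # self.confidences / self.accuracies in CalibrationError.
        self.confidences = []
        self.accuracies = []

    def update(self, confidences, accuracies):
        # One copy of the data serves every derived metric.
        self.confidences.extend(confidences)
        self.accuracies.extend(accuracies)

    def compute(self):
        # Stand-ins for the different calibration norms; real code would
        # call something like _ce_compute(..., norm="l1" / norm="max").
        n = len(self.confidences)
        mean_conf = sum(self.confidences) / n
        mean_acc = sum(self.accuracies) / n
        return {
            "l1": abs(mean_conf - mean_acc),
            "max": max(abs(c - a) for c, a in zip(self.confidences, self.accuracies)),
        }


m = SharedStateMetric()
m.update([0.9, 0.6], [1.0, 0.0])
result = m.compute()  # both entries computed from the same stored state
```

The point is only the structure: because both results are produced inside one `compute()`, the expensive shared state is stored and concatenated exactly once instead of once per metric.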