🚀 Feature
Currently the BinaryAUROC metric's update step expects a dense, fully labeled matrix at each iteration. In a multi-output training regime involving masked outputs, the existing AUROC metric classes offer no support.
This feature proposes to provide an additional metric, MaskedBinaryAUROC, to support calculating label-specific and aggregate AUROC when each batch step leverages sparse/nested/masked labels.
Motivation
I had to create a custom class for this functionality for a professional project, and I'd like to contribute it. Also, my company would like to become involved in the open source community by contributing code when possible.
Pitch
Create a MaskedBinaryAUROC metric with similar functionality to AUROC, but it will also take a mask at each iteration and only consider unmasked values in the final calculations.
The implementation is straightforward:
- `preds`/`targets`/`mask` states with `List` defaults
- `.update(preds, targets, mask)` appends to the lists
- `.compute()` iterates through each column to calculate a per-label value by leveraging `torchmetrics.functional.binary_auroc` after applying the mask
- return the mean of all column-wise AUROC values (see the sketch after this list)
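A minimal sketch of what this could look like, assuming the `torchmetrics.Metric` base class and the `torchmetrics.functional.binary_auroc` functional; the class name `MaskedBinaryAUROC` and the `(preds, targets, mask)` update signature follow the pitch above, and the rest is illustrative rather than a final implementation:

```python
import torch
from torchmetrics import Metric
from torchmetrics.functional import binary_auroc


class MaskedBinaryAUROC(Metric):
    """Sketch only: per-label binary AUROC computed over unmasked entries."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # List states so masked batches can be accumulated across steps.
        self.add_state("preds", default=[], dist_reduce_fx="cat")
        self.add_state("targets", default=[], dist_reduce_fx="cat")
        self.add_state("mask", default=[], dist_reduce_fx="cat")

    def update(self, preds: torch.Tensor, targets: torch.Tensor, mask: torch.Tensor) -> None:
        # All three tensors are expected to share the shape (batch, num_labels).
        self.preds.append(preds)
        self.targets.append(targets)
        self.mask.append(mask.bool())

    def compute(self) -> torch.Tensor:
        # States may still be lists (single process) or already concatenated tensors.
        preds = torch.cat(self.preds) if isinstance(self.preds, list) else self.preds
        targets = torch.cat(self.targets) if isinstance(self.targets, list) else self.targets
        mask = torch.cat(self.mask) if isinstance(self.mask, list) else self.mask

        per_label = []
        for col in range(preds.shape[1]):
            keep = mask[:, col]
            # Skip labels that have no unmasked samples at all.
            if keep.any():
                per_label.append(binary_auroc(preds[keep, col], targets[keep, col].long()))
        if not per_label:
            return torch.tensor(float("nan"))
        # Aggregate AUROC is the mean of the per-label values.
        return torch.stack(per_label).mean()
```

Usage would mirror the existing classification metrics: call `update(preds, targets, mask)` each batch and `compute()` at the end of the epoch.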
Alternatives
- `SparseBinaryAUROC`: support a sparse tensor representation of the output labels instead of a dense output plus a mask
- Extend the functionality of `BinaryAUROC` instead of making a separate class
Additional context
Related masked extensions could also be made for the AUROC, MulticlassAUROC, and MultilabelAUROC classes.