
@JoeNatan30 — this was a bug in torchmetrics 0.6.x. Fully resolved now.

Current usage (v1.9.0):

from torchmetrics.classification import MulticlassAUROC

auroc = MulticlassAUROC(
    num_classes=NUM_CLASSES,
    average="macro",     # "macro", "weighted", or "none"
    thresholds=None,
)
score = auroc(preds, target)

Important: average="micro" is intentionally not supported for multiclass AUROC — it doesn't have a standard definition for the one-vs-rest case. The scikit-learn docs note this same subtlety.

If you need micro-averaged AUROC, convert to multilabel:

from torchmetrics.classification import MultilabelAUROC
import torch.nn.functional as F

# convert integer class targets to one-hot, then treat as multilabel
target_onehot = F.one_hot(target, num_classes=NUM_CLASSES)
auroc = MultilabelAUROC(num_labels=NUM_CLASSES, average="micro")
score = auroc(preds, target_onehot)
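For intuition about why the one-hot conversion works, here is a plain-Python sketch (no torch): macro averaging computes one-vs-rest AUROC per class and takes the mean, while micro averaging flattens every (sample, class) cell into one big binary problem — which is exactly what one-hot targets plus multilabel micro averaging do. The helper names below are my own, not torchmetrics APIs.

```python
from itertools import product

def binary_auroc(scores, labels):
    # Mann-Whitney formulation: fraction of (positive, negative) pairs
    # ranked correctly, counting ties as half a win
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

def macro_auroc(probs, target, num_classes):
    # one-vs-rest AUROC per class, then unweighted mean
    per_class = [binary_auroc([row[c] for row in probs],
                              [1 if t == c else 0 for t in target])
                 for c in range(num_classes)]
    return sum(per_class) / num_classes

def micro_auroc(probs, target, num_classes):
    # flatten: every (sample, class) cell becomes one binary decision,
    # equivalent to one-hot targets + multilabel micro averaging
    scores = [row[c] for row in probs for c in range(num_classes)]
    labels = [1 if t == c else 0 for t in target for c in range(num_classes)]
    return binary_auroc(scores, labels)
```

Note the two can disagree: micro weights every cell equally, so classes with confident scores dominate, while macro weights every class equally regardless of support.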

Answer selected by Borda