Compute AUPRC? #698
-
Currently I could only use
Replies: 3 comments 4 replies
-
You can use the
-
Consider using Average Precision instead. They are almost the same (not exactly, but almost).
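To make the "almost the same" point concrete, here is a small sketch using scikit-learn (an assumption on my part; the thread itself doesn't mention sklearn): `average_precision_score` computes a step-wise sum, while integrating the precision-recall curve with `auc` uses trapezoidal interpolation, so the two values are close but not identical in general.

```python
# Sketch: Average Precision vs. trapezoidal AUPRC on the same predictions.
from sklearn.metrics import average_precision_score, precision_recall_curve, auc

target = [0, 0, 1, 1]
preds = [0.1, 0.4, 0.35, 0.8]

# Step-wise sum over positives: precision weighted by recall increments.
ap = average_precision_score(target, preds)

# Trapezoidal area under the precision-recall curve.
precision, recall, _ = precision_recall_curve(target, preds)
auprc = auc(recall, precision)

print(ap, auprc)  # ap ~ 0.833, auprc ~ 0.792: close, but not equal
```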
-
Building on @SkafteNicki's and @el-hult's suggestions — as of TorchMetrics v1.9.0, you don't need to manually compute AUPRC anymore. The `AveragePrecision` metric is mathematically equivalent to AUPRC (area under the precision-recall curve) and is a first-class citizen:

```python
import torch
from torchmetrics.classification import BinaryAveragePrecision

# Binary case
metric = BinaryAveragePrecision(thresholds=None)  # exact computation
preds = torch.tensor([0.1, 0.4, 0.35, 0.8])
target = torch.tensor([0, 0, 1, 1])
metric(preds, target)  # tensor(0.8333)
```

For multiclass:

```python
from torchmetrics.classification import MulticlassAveragePrecision

metric = MulticlassAveragePrecision(num_classes=5, average="macro", thresholds=None)
```

For multilabel:

```python
from torchmetrics.classification import MultilabelAveragePrecision

metric = MultilabelAveragePrecision(num_labels=3, average="macro", thresholds=None)
```

Performance tip: if memory is a concern (large datasets), pass an integer to `thresholds` to get a constant-memory binned computation instead of the exact one. The old `AveragePrecision` class (with a `task` argument) still works and dispatches to these task-specific metrics.

Docs: AveragePrecision
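To illustrate what an integer `thresholds` buys you, here is a rough pure-NumPy sketch (my own illustration of the binned idea, not TorchMetrics source code): precision and recall are evaluated on a fixed threshold grid, so memory is proportional to the number of thresholds rather than the dataset size.

```python
# Sketch of binned average precision on a fixed threshold grid.
import numpy as np

def binned_average_precision(preds, target, n_thresholds=100):
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    # Predicted-positive mask for every (threshold, sample) pair.
    pred_pos = preds[None, :] >= thresholds[:, None]
    tp = (pred_pos & (target[None, :] == 1)).sum(axis=1)
    fp = (pred_pos & (target[None, :] == 0)).sum(axis=1)
    precision = np.where(tp + fp > 0, tp / np.maximum(tp + fp, 1), 1.0)
    recall = tp / target.sum()
    # Sum precision weighted by recall decrements along the grid.
    return float(np.sum(precision[:-1] * -np.diff(recall)))

preds = np.array([0.1, 0.4, 0.35, 0.8])
target = np.array([0, 0, 1, 1])
print(binned_average_precision(preds, target))  # close to the exact 0.8333
```

With enough bins the approximation is tight; the trade-off is a small discretization error in exchange for bounded memory.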