
Building on @SkafteNicki's and @el-hult's suggestions: as of TorchMetrics v1.9.0 you no longer need to compute AUPRC manually. The AveragePrecision metric is mathematically equivalent to AUPRC (the area under the precision-recall curve) and is available as a first-class metric:

import torch
from torchmetrics.classification import BinaryAveragePrecision

# Binary case: thresholds=None gives the exact (non-binned) computation
metric = BinaryAveragePrecision(thresholds=None)
preds = torch.tensor([0.1, 0.4, 0.35, 0.8])
target = torch.tensor([0, 0, 1, 1])
metric(preds, target)  # tensor(0.8333)

For multiclass:

import torch
from torchmetrics.classification import MulticlassAveragePrecision

metric = MulticlassAveragePrecision(num_classes=5, average="macro", thresholds=None)

Answer selected by Borda