For multilabel metrics we assume that you provide data as [B, L], where B is the batch size and L is the number of labels (num_classes in your code). Your tensors are therefore missing the batch dimension, which is 1 in your case. A simple unsqueeze should be enough:

import torch

pred = torch.tensor([0, 0, 0, 0]).unsqueeze(0)    # shape [1, 4]
target = torch.tensor([0, 1, 0, 1]).unsqueeze(0)  # shape [1, 4]
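To see why the batch dimension matters, here is a minimal sketch in plain PyTorch (not the metric library's own implementation) of the per-label agreement a multilabel accuracy would compute over the [B, L] tensors above:

```python
import torch

# Hypothetical single sample with L = 4 labels; unsqueeze(0) adds B = 1.
pred = torch.tensor([0, 0, 0, 0]).unsqueeze(0)    # shape [1, 4]
target = torch.tensor([0, 1, 0, 1]).unsqueeze(0)  # shape [1, 4]

assert pred.shape == (1, 4)  # [B, L] layout the metric expects

# Agreement per label, averaged over the batch dimension:
per_label = (pred == target).float().mean(dim=0)
print(per_label)         # tensor([1., 0., 1., 0.])
print(per_label.mean())  # tensor(0.5000)
```

Without the `unsqueeze(0)`, the 1-D tensors would be ambiguous: a metric could not tell whether the 4 entries are 4 labels of one sample or 4 samples of one label.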

Answer selected by Borda