Doubt with the accuracy function #789
Unanswered
chechoreyes asked this question in Q&A
Replies: 2 comments
-
Put these commands below after the test_preds = torch.softmax... statement and analyze the outcomes.
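For example, a minimal, hypothetical sketch of that kind of check (assuming the test_preds and y_blob_test tensors from the notebook already exist) might be:

# Hypothetical inspection commands -- names assumed from the surrounding discussion
print(test_preds[:10])                          # first 10 predicted class indices
print(y_blob_test[:10])                         # first 10 ground-truth labels
print(test_preds.shape, y_blob_test.shape)      # shapes should match
print(torch.eq(y_blob_test, test_preds)[:10])   # element-wise True/False comparison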
-
Hi @chechoreyes, Yes, you're correct. We compare the predicted labels to the truth labels (e.g. with torch.eq()), count how many of them match, and then divide by the number of samples to get accuracy as a percentage. As in the function below:

# Calculate accuracy (a classification metric)
def accuracy_fn(y_true, y_pred):
    """Calculates accuracy between truth labels and predictions.

    Args:
        y_true (torch.Tensor): Truth labels for predictions.
        y_pred (torch.Tensor): Predictions to be compared to truth labels.

    Returns:
        [torch.float]: Accuracy value between y_true and y_pred, e.g. 78.45
    """
    correct = torch.eq(y_true, y_pred).sum().item()
    acc = (correct / len(y_pred)) * 100
    return acc

Perhaps you could try entering two example tensors into the function above and see what happens?
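For example, a minimal sketch with two made-up label tensors:

import torch

y_true = torch.tensor([0, 1, 2, 2, 1])   # example ground-truth class labels
y_pred = torch.tensor([0, 1, 2, 0, 1])   # example predicted class labels (e.g. from argmax)

print(accuracy_fn(y_true, y_pred))        # 4 out of 5 match -> 80.0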
-
Video 88
Time: 13:39:04
Hi, I have a question:
Why in this part:
In the accuracy function, you compare y_blob_test (or train) and test_preds (or train), but I don't understand. First, y_blob_test is a tensor with the labels, and y_pred is the result of torch.softmax(y_logits, dim=1).argmax(dim=1), which are the predicted highest-probability indices. So are you comparing two different things, or does PyTorch assume the value of that index? Thanks!
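(A minimal sketch with made-up logits, showing that the argmax of the softmax output is a tensor of class indices in the same format as the labels, so the two are directly comparable:)

import torch

# Hypothetical logits for 3 samples and 4 classes
y_logits = torch.tensor([[ 2.0, 0.1, -1.0, 0.3],
                         [-0.5, 1.5,  0.2, 0.1],
                         [ 0.0, 0.2,  3.0, 0.1]])

y_pred = torch.softmax(y_logits, dim=1).argmax(dim=1)
print(y_pred)                            # tensor([0, 1, 2]) -> predicted class indices

y_blob_test = torch.tensor([0, 1, 1])    # made-up truth labels
print(torch.eq(y_blob_test, y_pred))     # tensor([ True,  True, False])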