evaluation script gives wrong accuracy #7

Description

@ovidiunitu

While testing my solution, I noticed some odd behavior (see the picture below).

[screenshot: the evaluation output for this question, showing my generated answer and the ground-truth answers]
As you can see, my generated answer is 'none'.
According to the evaluation metric, the accuracy should be 30%, because exactly one of the ground-truth answers is the same as mine.
I think this is happening because of the preprocessing done before evaluation. In vqaEval.py, line 42, the answer 'none' is replaced with '0'. Because there is no '0' among the ground-truth answers, the accuracy comes out as 0.00%. If I remove 'none': '0' from the manualMap dictionary, I get the right accuracy for this question (30%).

If it helps, the id of this question is 411188011, and the name of the picture is COCO_val2014_000000411188.jpg

Can you look into this further? I hope I didn't miss anything.
