Unable to reproduce the result as paper reported #8

@greatwallet

Description

Hello, thank you for your inspiring work! I tried to re-run the code on all of the datasets, but the results were not as promising as those reported in the paper.

For CUB and DeepFashion, I did not modify any of the code, but the metrics on CUB were poor.

[screenshot: CUB evaluation metrics]

For Pascal-Part, I modified the training hyper-parameters according to your supplementary file. For fairness of evaluation, I also trained a foreground segmenter (a two-branch DeepLabV2-ResNet50) and used its predicted masks at evaluation time. I trained on Car, Cat, and Horse. The results on Horse were OK, but those on Cat and Car were really not good.

[screenshot: Pascal-Part results on Car, Cat, and Horse]
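For reference, the evaluation protocol I used can be sketched roughly as below: part predictions outside the predicted foreground mask are treated as background before computing per-part IoU. Function and variable names here are my own illustration, not the repo's actual API.

```python
import numpy as np

def part_iou_with_fg_mask(pred_parts, gt_parts, fg_mask, num_parts):
    """Mean per-part IoU; predictions outside the predicted
    foreground mask are reset to background (label 0)."""
    pred = np.where(fg_mask, pred_parts, 0)
    ious = []
    for k in range(1, num_parts + 1):  # label 0 is background
        p = pred == k
        g = gt_parts == k
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # part absent in both prediction and GT
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```

Please let me know if this masking step differs from the protocol used in the paper, since that alone could shift the numbers.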

Could you please provide an analysis of why the re-run results on CUB were not good? Also, for Pascal-Part, are there any training details left out of the paper that would explain why I could not reproduce the results? (BTW, could you provide the supervised masks for Pascal-Part?)
