relatively large performance gap on ScanObjectNN #13

@auniquesun

Description

@MohamedAfham Recently, I have run all the experiments in the codebase at least 3 times, to make sure no explicit exceptions occurred during my runs.

Some of the results are very encouraging: they are comparable with those reported in the paper, and sometimes even higher, e.g. the reproduced results on ModelNet. But some are not.

Specifically, for the downstream few-shot classification task on ScanObjectNN, the performance gap is relatively large (see the aggregation sketch after this list), e.g.:

  1. for 5 way, 10 shot, I got 72.5 ± 8.33,
  2. for 5 way, 20 shot, I got 82.5 ± 5.06,
  3. for 10 way, 10 shot, I got 59.4 ± 3.95,
  4. for 10 way, 20 shot, I got 67.8 ± 4.41
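For clarity on how I computed these numbers, here is a minimal sketch of the aggregation, assuming the protocol samples several independent N-way K-shot episodes and reports mean ± std over the per-episode accuracies. The episode count and the listed accuracy values are made-up placeholders for illustration, not my actual measurements:

```python
# Minimal sketch: summarize several independent N-way K-shot episodes
# as mean ± std over the per-episode accuracies (in percent).
import numpy as np

def summarize(accuracies):
    """Return (mean, std) of per-episode accuracies, in percent."""
    accs = np.asarray(accuracies, dtype=np.float64)
    return accs.mean(), accs.std()

# Placeholder accuracies from 10 hypothetical 5-way 10-shot episodes.
episode_accs = [70.0, 81.0, 66.0, 75.0, 79.0, 62.0, 73.0, 85.0, 68.0, 77.0]
mean, std = summarize(episode_accs)
print(f"5 way, 10 shot: {mean:.1f} ± {std:.2f}")
```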

For the downstream linear SVM classification task on ScanObjectNN, the reproduced accuracy is 75.73%. All experiments use the DGCNN backbone and the default settings, except for the batch size.
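
For context, my linear SVM run follows the usual frozen-feature protocol; below is a minimal sketch under my assumptions about the pipeline. `model`, `train_loader`, `test_loader`, and the SVM `C` value are hypothetical placeholders, not the repo's actual objects or settings:

```python
# Minimal sketch of the frozen-feature linear SVM evaluation.
import torch
from sklearn.svm import LinearSVC

@torch.no_grad()
def extract_features(model, loader, device="cuda"):
    # Assumes the backbone maps a batch of point clouds to one global
    # feature vector per cloud; freeze it and collect features + labels.
    model.eval()
    feats, labels = [], []
    for points, y in loader:
        feats.append(model(points.to(device)).cpu())
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

X_train, y_train = extract_features(model, train_loader)
X_test, y_test = extract_features(model, test_loader)

svm = LinearSVC(C=0.01)  # hypothetical C; the paper's setting may differ
svm.fit(X_train, y_train)
print("ScanObjectNN linear SVM accuracy:", svm.score(X_test, y_test))
```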

In short, all of my results on ScanObjectNN fall behind the performances reported in the paper, by a large margin.

At this point, I wonder whether there are any precautions to take when experimenting on ScanObjectNN, and what the possible reasons might be. Could you provide some suggestions? Thank you.
