@GunelLuo
The UniNet implementation in Anomalib is based on the reference codebase at https://github.com/pangdatangtt/UniNet.
There are a couple of reasons for the discrepancy.

  1. In Anomalib, we collect benchmark numbers for all models with the same fixed seed. Papers generally report either the average or the best of multiple runs, which can account for part of the difference (see the first sketch after this list).
  2. The reference codebase uses dataset- and category-specific parameters in some cases, whereas Anomalib uses one set of defaults (see the second sketch after this list). For instance: https://github.com/pangdatangtt/UniNet/blob/0c211bacca7070b4b46a6841bb03b47800e25fd2/main.py#L122
    https://github.com/pangdatangtt/UniNet/blob/0c211bacca7070b4b46a6841bb03b47800e25fd2/train_unsupervisedAD.py#L44
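
For point 1, here is a minimal sketch of the protocol difference, not Anomalib's actual benchmark code. `train_and_evaluate` is a hypothetical stand-in for a full train/test cycle:

```python
# Sketch: report a metric as the mean over several seeds instead of a single
# fixed-seed run. `train_and_evaluate` is a hypothetical helper, not part of
# Anomalib's benchmarking code.
from statistics import mean, stdev

from lightning.pytorch import seed_everything


def train_and_evaluate(seed: int) -> float:
    """Hypothetical: train UniNet with the given seed and return image AUROC."""
    seed_everything(seed, workers=True)
    # ... train and test the model here ...
    return 0.0  # placeholder score


seeds = [0, 42, 1234]
scores = [train_and_evaluate(s) for s in seeds]
print(f"AUROC over {len(seeds)} seeds: {mean(scores):.4f} +/- {stdev(scores):.4f}")
```

A single-seed run can land anywhere inside that spread, so it will not necessarily match a paper's averaged or best-of-N number.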

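For point 2, the gist is that the reference code tunes some hyperparameters per dataset/category, while Anomalib applies shared defaults. The sketch below is illustrative only; the parameter names and values are made up, not copied from the UniNet repository (see the linked lines for the actual values):

```python
# Illustrative only: per-category overrides merged onto shared defaults.
DEFAULTS = {"learning_rate": 1e-3, "temperature": 2.0}

CATEGORY_OVERRIDES = {
    "transistor": {"learning_rate": 5e-4},  # hypothetical per-category tweak
    "screw": {"temperature": 0.5},          # hypothetical per-category tweak
}


def params_for(category: str) -> dict:
    """Merge per-category overrides onto the shared defaults."""
    return {**DEFAULTS, **CATEGORY_OVERRIDES.get(category, {})}


print(params_for("transistor"))  # -> {'learning_rate': 0.0005, 'temperature': 2.0}
print(params_for("bottle"))      # -> shared defaults, no override
```

Running every category with the shared defaults, as Anomalib's benchmark does, can therefore trail the per-category-tuned numbers reported by the reference codebase.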