Difference in UniNet MVTec I-AUROC between anomalib benchmarking and official paper #2879
-
First of all, thank you for the great work on anomalib and for including UniNet in the benchmarking. I noticed that all of the UniNet scores on the MVTec AD dataset in the anomalib benchmarking results are lower than those reported in the official UniNet paper. Could you please share the differences in the training/inference setup that might explain the gap? Thanks in advance!
-
Hi @GunelLuo,
The implementation in Anomalib is based on the codebase here: https://github.com/pangdatangtt/UniNet
There are a couple of reasons for the discrepancy.
One of them can be seen in the official training script:
https://github.com/pangdatangtt/UniNet/blob/0c211bacca7070b4b46a6841bb03b47800e25fd2/train_unsupervisedAD.py#L44
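
For reference, here is a minimal sketch of how the anomalib-side numbers could be reproduced so you can compare the setups yourself. This is not the exact benchmarking configuration; the class names (`UniNet`, `MVTecAD`, `Engine`) are assumed to match a recent anomalib release and may need adjusting for your installed version.

```python
# Minimal sketch, not the exact benchmarking config: train and test UniNet
# on a single MVTec AD category with anomalib defaults.
# Assumption: class names follow a recent anomalib release.
from anomalib.data import MVTecAD
from anomalib.engine import Engine
from anomalib.models import UniNet

# Datamodule for one category; assumes the MVTec AD dataset is available locally.
datamodule = MVTecAD(category="bottle")

model = UniNet()   # default hyperparameters; the benchmark may override some of these
engine = Engine()  # default trainer settings

engine.fit(model=model, datamodule=datamodule)
engine.test(model=model, datamodule=datamodule)  # reports image-level AUROC among other metrics
```

Running this per category and comparing the resulting image-level AUROC against the paper's numbers should make it easier to isolate which setup differences matter.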