Impact of an imbalanced validation dataset on threshold calculation #2174
Unanswered
lathashree01 asked this question in Q&A
Hi,
I am running a trial anomaly detection experiment with PatchCore and I am seeing a performance drop compared to the previous iteration. The model is unchanged, but the data is now quite imbalanced: I have more than 2x as many normal examples as anomalous ones.
Since I am using the F1-based threshold calculation (#1896), could this imbalance be causing the performance drop? Shouldn't F1 be more robust to imbalance than other metrics and help with this problem?
In a real-world setting we usually have many more normal images than anomalous ones, so how do we get around this bottleneck? Or is the imbalance not really the cause of the problem? (A small synthetic sketch of the effect I am asking about is included below.)
Thanks for the help.
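For concreteness, here is a minimal synthetic sketch of the kind of shift I mean. This is not anomalib's actual F1AdaptiveThreshold implementation; the score distributions and the sklearn-based helper below are illustrative assumptions only.

```python
# Minimal sketch (assumed setup, not anomalib's code): how the F1-optimal
# threshold can move when only the number of normal validation images changes.
import numpy as np
from sklearn.metrics import precision_recall_curve


def f1_optimal_threshold(scores: np.ndarray, labels: np.ndarray) -> float:
    """Return the score threshold that maximises F1 (label 1 = anomalous)."""
    precision, recall, thresholds = precision_recall_curve(labels, scores)
    # precision/recall have one more entry than thresholds; drop it to align.
    f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
    return float(thresholds[np.argmax(f1)])


rng = np.random.default_rng(0)
anomalous = rng.normal(0.70, 0.15, 100)        # hypothetical scores of anomalous images

for name, n_normal in [("balanced (1:1)", 100), ("imbalanced (3:1)", 300)]:
    normal = rng.normal(0.30, 0.10, n_normal)  # hypothetical scores of normal images
    scores = np.concatenate([normal, anomalous])
    labels = np.concatenate([np.zeros(n_normal), np.ones(len(anomalous))])
    print(f"{name}: F1-optimal threshold = {f1_optimal_threshold(scores, labels):.3f}")
```

With more normal samples from the same distribution, any fixed threshold produces more false positives while true positives stay the same, so precision drops and the F1-maximising threshold tends to move upward; a shift like that could change the reported metrics between iterations even with an unchanged model.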
Replies: 1 comment · 1 reply
-
What do you mean by "... performance drop than the previous iteration."?
1 reply