Weird inference results #2118
-
Hi, PADiM only uses good images for training, so the model should predict the same raw scores in both cases. However, images in the test set are used for normalization, which is why you see different results when you have more images with defects. The normalization uses the threshold value that provides the best F1 score. This can lead to results that are not easily explainable, which I assume is what happened here. A threshold that gives the optimal F1 over the entire test set can mean that some cases (like this 018.png) get false positives, but overall the segmentation and labels should be better.
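For intuition, here is a minimal NumPy sketch of this kind of min-max normalization (it mirrors the scheme anomalib applies, but uses made-up numbers and is not the library's exact code):

```python
import numpy as np

def normalize_min_max(scores, threshold, stats_min, stats_max):
    # Shift raw anomaly scores so the adaptive threshold lands at 0.5,
    # scale by the min/max observed on the validation/test set,
    # and clip the result to [0, 1].
    normalized = (scores - threshold) / (stats_max - stats_min) + 0.5
    return np.clip(normalized, 0.0, 1.0)

# A raw score exactly at the threshold normalizes to 0.5; scores above
# it drift toward 1.0 depending on the observed score range.
print(normalize_min_max(np.array([13.2, 20.0]), threshold=13.2,
                        stats_min=7.0, stats_max=120.0))  # [0.5, ~0.56]
```

Because the threshold is picked to maximize F1 over the whole test set, adding or removing defective images shifts these statistics and therefore the normalized scores you see.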
-
This is not an issue; it's how the thresholding and normalization are handled. Image-level (anomaly score) and pixel-level (anomaly map) predictions undergo separate normalization and thresholding. I am not sure how this would behave if the min and max were combined (the max should actually be the same already, but not the min). Most works do this separately, but if you want, you can set manual thresholds.
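To illustrate how separate image-level and pixel-level statistics make pred_score and anomaly_map.max() diverge, here is a self-contained toy example (all numbers are hypothetical):

```python
def norm(score, thr, lo, hi):
    # Min-max normalize so the threshold maps to 0.5, clipped to [0, 1].
    return min(max((score - thr) / (hi - lo) + 0.5, 0.0), 1.0)

# Hypothetical validation statistics: image level vs. pixel level.
image_thr, image_min, image_max = 13.2, 7.0, 120.0
pixel_thr, pixel_min, pixel_max = 9.5, 0.1, 11.0

pred_score = norm(16.4, image_thr, image_min, image_max)  # raw score barely above threshold
map_max = norm(16.0, pixel_thr, pixel_min, pixel_max)     # hottest pixel beyond validation max

print(round(pred_score, 3))  # 0.528 -> "barely anomalous" at image level
print(map_max)               # 1.0 after clipping -> the map still lights up
```

So an image can sit just above the image-level threshold while its hottest pixels saturate the pixel-level map.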
-
Hi, I'm experiencing some strange inference results and would like to understand whether I'm doing something wrong or missing something.
I'm using anomalib 1.2.0dev and have two different setups using the metal_nut dataset from MVTec, with PADiM in both:
First part results
Every inference has a high value, even for the good images, which leads to an anomaly map that is almost entirely green.

Second part results
Training the model with only 1 bad image gives these results:
Example of good
Example of bad
Questions
Why is there so much difference between these two experiments?
Also, in the first part, the difference between pred_score and anomaly_map.max() leads to a problem where the image is classified as good but the segmentation result still shows something. Is this intended? In the second part, the pred_score of 0.528 means that the model isn't sure, but the anomaly map max is at 1.0.
Code used
TRAINING
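(The training snippet didn't survive the export. As an assumed reconstruction only, a typical PADiM run on metal_nut with the anomalib 1.x API looks roughly like this; paths are placeholders:)

```python
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import Padim

# Placeholder dataset root; point it at your local MVTec copy.
datamodule = MVTec(root="./datasets/MVTec", category="metal_nut")
model = Padim()

engine = Engine()
engine.fit(model=model, datamodule=datamodule)
```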
INFERENCE
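(Likewise, the inference snippet is missing. A hedged sketch of single-image prediction with the same 1.x API, assuming a PredictDataset and a saved checkpoint; exact names may differ across 1.x versions:)

```python
from anomalib.data import PredictDataset

# Placeholder paths; adjust to your image and checkpoint locations.
dataset = PredictDataset(path="path/to/test_image.png")
predictions = engine.predict(
    model=model,
    dataset=dataset,
    ckpt_path="path/to/checkpoint.ckpt",
)
```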