Filter out the major and minor defects from prediction mask #767
-
Hi, thanks for your question. I agree that it is currently not straightforward to change the sensitivity of your models. One way to achieve this would be to train your models with a manual threshold, whose value you could then vary to change the sensitivity of your model. Currently this is a bit cumbersome, as the manual threshold can only be set during training (though a PR has just been submitted which would allow you to set a different threshold value during inference). To change the threshold value, you would set the manual threshold in your model's config file (the exact key depends on the version you are using).

The tricky part will be to determine a good threshold value. Since the threshold is applied before the predicted anomaly scores are normalized, the range of possible values is unbounded and may vary widely between different models. A good approach could be to first train the model with adaptive thresholding enabled, and then inspect the computed adaptive threshold value to get an idea. You could then decrease or increase that value by trial and error to make the model more or less sensitive.

Hope that helps. If anything is still unclear, please do not hesitate to reach out!
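In the meantime, you can emulate a manual threshold as a post-processing step. Below is a minimal sketch in plain NumPy (not the library's API; the file name, the adaptive threshold value of 7.3, and the 0.8 scaling factor are all made-up placeholders):

```python
import numpy as np

# Hypothetical raw (unnormalized) anomaly map from a trained model, shape (H, W).
anomaly_map = np.load("anomaly_map.npy")  # placeholder path

# Start from the adaptive threshold computed during training (read it from your
# trained model), then scale it by trial and error: lowering the threshold makes
# the model more sensitive, raising it makes it less sensitive.
adaptive_threshold = 7.3                   # placeholder value
manual_threshold = 0.8 * adaptive_threshold

# Binarize: pixels scoring at or above the threshold are flagged as anomalous.
pred_mask = (anomaly_map >= manual_threshold).astype(np.uint8)
```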
-
I am interested to know how to filter major and minor defects out of the prediction mask for test images. A few versions back, the anomaly module had a function in the `base.py` script, named `def _superimpose_segmentation_mask(self, meta_data: dict, anomaly_map: np.ndarray, image: np.ndarray)`, for superimposing the segmentation mask on top of images; the script was located at `deploy -> inferencers -> base.py` and has been renamed to `base_inferencers.py` in the current version.

For the normalized output prediction, it was easier back then to filter the prediction mask through `compute_mask()` on the anomaly map; the threshold is 0.5 by default, and changing that value in the previous module made it easy to filter defects out of the prediction mask. How can I call the same function in the current module, say when training and testing a custom dataset with a PatchCore-based model? Or which function should I call to filter the prediction mask scores and show only the results after filtering (see the sketch below for the kind of post-processing I mean)? Would it be possible to get a feature for filtering major and minor defects from the prediction mask in the config file?
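For reference, this is roughly the kind of post-processing I have in mind (just a sketch, not code from the library; the 0.5 default and the 500-pixel area cut-off are arbitrary example values):

```python
import cv2
import numpy as np

def compute_mask(anomaly_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a normalized anomaly map (values in [0, 1]).

    Lowering the threshold keeps more (smaller/fainter) defects,
    raising it keeps only the strongest ones.
    """
    mask = np.zeros_like(anomaly_map, dtype=np.uint8)
    mask[anomaly_map >= threshold] = 1
    return mask

def split_major_minor(mask: np.ndarray, min_major_area: int = 500):
    """Split a binary mask into major/minor defects by connected-component area."""
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    major = np.zeros_like(mask)
    minor = np.zeros_like(mask)
    for label in range(1, num_labels):  # label 0 is the background
        area = stats[label, cv2.CC_STAT_AREA]
        target = major if area >= min_major_area else minor
        target[labels == label] = 1
    return major, minor
```

Being able to drive the threshold and the area cut-off from the config file would cover both the sensitivity and the major/minor filtering use cases.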
Any feedback would be appreciated, thanks in advance.