Different inference results between OpenVINO and Torch #2260
Unanswered
FedericoDeBona asked this question in Q&A
I've trained a Padim model on a custom dataset, then exported it in both Torch and OpenVINO formats. The problem is that I get different inference results:

[Inference result screenshots: Torch vs. OpenVINO]

Anomalib version: 1.2.0dev
OS: Ubuntu

The same problem occurs with anomalib 1.1.1 installed with pip. When using Torch, the prediction score is always better: on good images the score is much lower than with OpenVINO. There is not much difference between Torch on GPU and Torch on CPU. I would like to use OpenVINO on CPU, but as it stands it is not usable. Any tips on this?

Code used to train:
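The training snippet itself is not shown above; as a reference point, here is a minimal sketch of a Padim training and export run with the anomalib 1.x API. The Folder datamodule arguments, dataset layout, and paths are assumptions, not the author's actual configuration.

```python
# Minimal sketch (not the author's original script): train Padim and export it
# to both Torch and OpenVINO with the anomalib 1.x API. The dataset layout,
# Folder arguments, and all paths below are assumptions.
from anomalib.data import Folder
from anomalib.deploy import ExportType
from anomalib.engine import Engine
from anomalib.models import Padim

# Assumed custom dataset layout: <root>/good for normal images, <root>/defect for anomalous ones.
datamodule = Folder(
    name="custom",
    root="./datasets/custom",
    normal_dir="good",
    abnormal_dir="defect",
)

model = Padim()
engine = Engine()
engine.fit(model=model, datamodule=datamodule)

# Export the trained model in both formats used for the comparison above.
engine.export(model=model, export_type=ExportType.TORCH)
engine.export(model=model, export_type=ExportType.OPENVINO)
```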
Replies: 1 comment

As said in #2136, if I manually resize the images to 256x256 before inference, it works fine. So is the automatic resize missing from the OpenVINO path while Torch has it?
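For reference, a sketch of that workaround is shown below, assuming the anomalib 1.x OpenVINOInferencer and TorchInferencer APIs; the model paths, image path, and the 256x256 input size are assumptions.

```python
# Sketch of the manual-resize workaround described above (paths and sizes are assumptions):
# resize the input to 256x256 yourself before calling the OpenVINO inferencer,
# then compare the score against the Torch inferencer on the same image.
import cv2

from anomalib.deploy import OpenVINOInferencer, TorchInferencer

image = cv2.imread("test.png")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
resized = cv2.resize(image, (256, 256))  # manual resize that reportedly fixes the OpenVINO scores

ov_inferencer = OpenVINOInferencer(path="weights/openvino/model.xml", device="CPU")  # assumed export path
ov_prediction = ov_inferencer.predict(resized)

torch_inferencer = TorchInferencer(path="weights/torch/model.pt", device="cpu")  # assumed export path
torch_prediction = torch_inferencer.predict(image)

print("OpenVINO score:", ov_prediction.pred_score)
print("Torch score:   ", torch_prediction.pred_score)
```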