-
You could use TorchInferencer with a GPU device option; that would run inference on an NVIDIA GPU. Alternatively, you could use the new Engine API, more specifically the Engine.predict method. OpenVINO is optimized for Intel hardware.
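A minimal sketch of the TorchInferencer route, assuming a model exported with anomalib's torch export (the paths and image file are placeholders):

```python
from anomalib.deploy import TorchInferencer

# Load the exported .pt model onto the CUDA device.
inferencer = TorchInferencer(
    path="results/weights/torch/model.pt",  # placeholder path to the exported model
    device="cuda",
)

# predict() accepts an image path, a numpy array, or a torch tensor.
predictions = inferencer.predict(image="frame.png")
print(predictions.pred_score, predictions.pred_label)
```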
-
As Samet already pointed out, you could use the TorchInferencer; there are some examples in the docs. However, 1000 FPS is a really high requirement. Even the fastest unsupervised model available inside Anomalib (and, I think, in general), EfficientAD, reaches around 500 FPS on an RTX A6000. There are some supervised methods that are probably even faster, but probably not near the 1000 mark. So if you need speed, go with EfficientAD, but I'm not sure it can get to 1000 FPS without using an optimized format, like OpenVINO on an Arc GPU, or TensorRT, though I can't be sure of that as I'm not experienced in that regard.
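If you go that route, a rough sketch of training EfficientAD and exporting it to OpenVINO via the Engine API might look like this (the dataset, category, and paths are placeholder assumptions, not from the original thread):

```python
from anomalib.data import MVTec
from anomalib.deploy import ExportType
from anomalib.engine import Engine
from anomalib.models import EfficientAd

# EfficientAD expects a training batch size of 1.
datamodule = MVTec(category="bottle", train_batch_size=1)
model = EfficientAd()

engine = Engine()
engine.fit(model=model, datamodule=datamodule)

# Export the trained model to OpenVINO IR for optimized inference.
engine.export(model=model, export_type=ExportType.OPENVINO)
```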
-
@blaz-r Hi, I am interested in this as well. |
-
I've trained a PaDiM model with a wide_resnet50_2 backbone using the anomalib package (Python). The recall and precision are quite good, so I want to implement it for a demo.
Currently I'm using this code:
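(The original snippet is not shown on this page; presumably it is something along these lines, using anomalib's OpenVINOInferencer on CPU. This is a reconstruction with placeholder paths, not the exact code.)

```python
from anomalib.deploy import OpenVINOInferencer

# Load the exported OpenVINO model on the CPU (paths are placeholders).
inferencer = OpenVINOInferencer(
    path="results/weights/openvino/model.xml",
    device="CPU",
)

# Run inference on a single frame.
predictions = inferencer.predict(image="frame.png")
```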
This works perfectly but only reaches around 40 FPS. When I try to use it with "gpu", the build fails (as I have an NVIDIA GPU).
My application requires 1000-2000 FPS (which I've been able to achieve with CNNs).
How can I do inference on the GPU?
Thanks!
My training process outputs multiple model file types: