Slower inference/testing with EfficientAD than promised. #1183
-
Hey, I read the EfficientAD paper and was blown away by this network's inference speed.
-
Hi @Palettenbrett, just a note that the link you provided is not the official source code. The author of that repo, @nelson1425, is also one of the contributors to the EfficientAD implementation in anomalib. The main contributor is @alexriedel1. I'll let them reply to your question in their free time.
-
Hi @Palettenbrett, to test the inference speed of EfficientAD in anomalib you should just measure the timing for model inference in eval mode. You could do something like this for example:
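The original code snippet from this reply was lost. A minimal self-contained sketch of what such a timing measurement could look like (the model and input shape below are placeholders, not the actual EfficientAD model):

```python
import time

import torch

# Placeholder model standing in for the EfficientAD model in eval mode.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
dummy = torch.randn(1, 3, 256, 256)  # dummy input; use your real image size

with torch.inference_mode():
    # Warm-up iterations so one-time initialization doesn't skew the timing.
    for _ in range(5):
        model(dummy)

    n_runs = 20
    start = time.perf_counter()
    for _ in range(n_runs):
        model(dummy)
    elapsed = time.perf_counter() - start

print(f"mean latency: {elapsed / n_runs * 1000:.2f} ms")
```

Note that when timing on a GPU you would additionally need `torch.cuda.synchronize()` before reading the clock, since CUDA kernels run asynchronously.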
-
I am using the command to train the efficient_ad model on a custom dataset.
Training completes and creates the *.ckpt file. However, when I run TorchInferencer on it,
it fails with the following error:
File C:\anomalib\src\anomalib\deploy\inferencers\torch_inferencer.py:43, in TorchInferencer.__init__(self, path, device)
File C:\anomalib\src\anomalib\deploy\inferencers\torch_inferencer.py:88, in TorchInferencer.load_model(self, path)
KeyError: 'model'
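A sketch of why this error occurs, assuming the checkpoint is a standard Lightning training checkpoint (the file path and checkpoint contents below are illustrative, not the actual files from this report): a training *.ckpt stores a "state_dict" key, while TorchInferencer's model loading looks up a "model" key, which a raw training checkpoint does not contain.

```python
import os
import tempfile

import torch

# Illustrative stand-in for a Lightning training checkpoint: it holds a
# "state_dict" (and training metadata), but no "model" entry.
checkpoint = {"state_dict": {"layer.weight": torch.zeros(1)}, "epoch": 10}
path = os.path.join(tempfile.gettempdir(), "dummy.ckpt")
torch.save(checkpoint, path)

loaded = torch.load(path)
# Looking up loaded["model"] here would raise KeyError: 'model',
# which is exactly the failure seen in TorchInferencer.load_model.
print("model" in loaded)
```

In other words, TorchInferencer expects a model exported for deployment rather than the raw Lightning training checkpoint, so the checkpoint needs to be exported to torch format first.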