Different inference output of trainer.predict() and model(image) #599
-
I trained a Padim model for anomaly detection and want to use a custom script for inference.
I thought that both options should return the same output, but I get different ones.
The anomaly maps for the same image are identical with both options. Why does the second option not return the same key/value pairs? Can someone explain this behaviour to me? Am I doing something wrong?

import numpy as np
import albumentations as A
from albumentations.pytorch import ToTensorV2
from PIL import Image
from pytorch_lightning import Trainer
from torch.utils.data import DataLoader
# import paths may differ between anomalib versions
from anomalib.data.inference import InferenceDataset
from anomalib.models.padim.lightning_model import PadimLightning

# load the trained Padim lightning model from its checkpoint
model = PadimLightning.load_from_checkpoint("model.ckpt")
model.eval()

# 1st option: use trainer.predict()
trainer = Trainer()
dataset = InferenceDataset("rgb_17.png", image_size=(300, 300))
dataloader = DataLoader(dataset)
output = trainer.predict(model=model, dataloaders=dataloader)
# output contains the following keys: image, image_path, anomaly_maps, pred_scores, pred_labels, pred_masks

# 2nd option: call model(input) directly
image = Image.open("rgb_17.png").convert("RGB")
image_array = np.asarray(image)
transforms = A.Compose(
    [
        A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
        ToTensorV2(),
    ]
)
image_trans = transforms(image=image_array)
output2 = model(image_trans["image"].unsqueeze(0))
# output2 only contains the anomaly map
-
@tominator95 The lightning model sits on top of the torch model. The output of the torch model is the anomaly map. In the base anomaly lightning model (https://github.com/openvinotoolkit/anomalib/blob/d80d37db79bee58ff69fe856fac8579f7c6340ac/anomalib/models/components/base/anomaly_module.py#L82) we add the masks, labels, scores, etc. to the outputs of the torch model. This is because when we export the torch model for deployment (ONNX, OpenVINO, etc.), the anomaly map generation can be frozen into the model graph. The other computations can always be done on top of these outputs. Our tools/inference/torch_inference.py script (https://github.com/openvinotoolkit/anomalib/blob/d80d37db7…) is an example of this. Let me know if this clears things up.
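For anyone reading along: a minimal sketch of how the extra fields (pred_scores, pred_labels, pred_masks) could be reconstructed on top of the raw model(image) output, assuming that output is a single anomaly-map tensor and using placeholder thresholds. This is not the exact post-processing the lightning module applies (which also normalizes scores and uses thresholds learned during validation):

import torch

# raw torch-model output from the question's 2nd option: an anomaly map, e.g. shape [1, 1, H, W]
anomaly_map = output2

# image-level score: maximum pixel anomaly value (assumption; anomalib also normalizes scores)
pred_score = torch.amax(anomaly_map)

# placeholder thresholds -- in practice use the thresholds learned during validation
image_threshold = 0.5
pixel_threshold = 0.5

pred_label = pred_score > image_threshold                               # True -> anomalous image
pred_mask = (anomaly_map > pixel_threshold).squeeze().to(torch.uint8)   # binary segmentation mask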
-
Hello! I used onnxruntime to run inference with the ONNX model and got an anomaly map of shape [1, 1, 256, 256] as output, but I want to know whether the current image is an anomaly or a normal sample. What should I do? Is there code to accomplish this?
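A minimal sketch of one way to turn that anomaly map into an image-level decision, assuming a single-input ONNX model and a hand-picked threshold; in practice the threshold should be the one learned during validation (anomalib typically exports these values alongside the model), not a guess:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

# placeholder input; use your preprocessed image (resized, normalized, NCHW float32) here
input_tensor = np.random.rand(1, 3, 256, 256).astype(np.float32)

outputs = session.run(None, {input_name: input_tensor})
anomaly_map = outputs[0]  # shape [1, 1, 256, 256]

# image-level anomaly score: maximum pixel value of the anomaly map (assumption)
pred_score = float(anomaly_map.max())

# placeholder threshold -- replace with the image threshold learned during validation
threshold = 0.5
is_anomalous = pred_score > threshold
print(f"score={pred_score:.4f}, anomalous={is_anomalous}")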