Hi, thanks for pointing this out.

The short answer is: You need to pass an eval_transform to the datamodule before training, which defines how the images should be transformed during inference. In line with the training transforms in your example, the following would be sufficient:

from torchvision import transforms

transforms.Compose([
    transforms.Resize(size=(292, 292)),
    transforms.CenterCrop(size=(256, 256)),
])

Note that it would be better to also normalize the input images (both train and eval) to ImageNet statistics, because Padim's pre-trained backbone was trained on ImageNet:

transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

When only the train_transform is specified, Anomalib will re-…

Replies: 3 comments

Answer selected by samet-akcay