Choosing the right default batch size for PatchCore based on GPU memory configuration #1251
-
If I want to use the PatchCore algorithm with a model based on a custom data format, it chooses batch size 2 by default for training and testing. If I would like to increase the testing batch size to 4, which part of the config file needs to be updated so that 100% of the test dataset is used with a custom batch size based on GPU memory usage? I tried to use batch size 4 in the config file. Do I have to change any script if I want to use only the GPU for testing? I mean the value of ... Thanks in advance!
Replies: 2 comments
-
-
Let us know if you have any other follow-up questions. Thanks!
@udasinnayan, apologies for my late reply to this. To change the batch size of the validation, test, and predict dataloaders, you need to configure the `eval_batch_size` parameter in the config file: https://github.com/openvinotoolkit/anomalib/blob/cec86bfb1174c7c1cedd39e8b900384e41553b41/src/anomalib/models/patchcore/config.yaml#L8
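For example, a minimal sketch of the relevant `dataset` section of the config file; apart from `train_batch_size` and `eval_batch_size`, the surrounding keys are illustrative and may differ between anomalib versions:

```yaml
dataset:
  # ... other dataset options (name, path, task, etc.) ...
  train_batch_size: 2   # batch size used during training
  eval_batch_size: 4    # batch size for validation, test, and predict dataloaders
```

Since PatchCore is memory-bank based, increasing `eval_batch_size` mainly trades GPU memory for testing throughput; the full test set is still used either way.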
Regarding the `limit_<split>_batches` parameters in `Trainer`, they are intended more for debugging purposes: they subsample the corresponding dataloader. For more information, you could refer to this link: https://lightning.ai/docs/pytorch/stable/common/trainer.html#limit-train-batches
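To illustrate the difference between the two knobs: `eval_batch_size` controls how many samples go into each batch, while `limit_test_batches` caps how many batches are actually run. The sketch below is plain arithmetic (not anomalib or Lightning code) approximating how a float limit subsamples the test dataloader:

```python
import math

def num_test_batches(dataset_size, batch_size, limit_test_batches=1.0):
    """Approximate how many test batches would run.

    A float limit is treated as a fraction of the available batches;
    an int limit is an absolute cap (mirroring Lightning's convention).
    """
    # Total batches in the full dataloader (last batch may be partial).
    total = math.ceil(dataset_size / batch_size)
    if isinstance(limit_test_batches, float):
        return max(1, int(total * limit_test_batches))
    return min(total, limit_test_batches)

# 100 test images with eval_batch_size=4 -> 25 batches over the full dataset
print(num_test_batches(100, 4))        # 25
# limit_test_batches=0.5 subsamples to roughly half the batches (debugging)
print(num_test_batches(100, 4, 0.5))   # 12
```

So to test on 100% of the dataset, leave `limit_test_batches` at its default of `1.0` and adjust only `eval_batch_size`.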