GPU out of memory using PatchCore and PaDiM #2752
-
Hi,
Replies: 6 comments 1 reply
-
Hi @LorenzoF6,
-
Hi, the code is very simple:

```python
eng_patchcore = Engine(default_root_dir="risultati")
md_patchcore = Patchcore(evaluator=ev_SUOLE)
datamodule_SUOLE_train = Folder(
    name="SUOLE",
    root=dataset_path_SUOLE,
    normal_dir="train/good",
    normal_test_dir="test/good",
    abnormal_dir="test/0",
    mask_dir="mask/0",
    train_batch_size=2,
    num_workers=1,
)
datamodule_SUOLE_train.setup()
eng_patchcore.fit(model=md_patchcore, datamodule=datamodule_SUOLE_train)
```

I'm running in a notebook inside a Docker container with 8 GB of GPU memory.
-
@LorenzoF6, batch size is not effective for avoiding the OOM issue when training Patchcore. The reason for the OOM is that Patchcore is a memory-bank-based model: it collects all the extracted features into a memory bank. The larger your dataset, the bigger the memory bank, and hence the more susceptible it is to OOM. The following are the parameters you could play with to reduce the dimension of the embeddings you are extracting and to apply dimensionality reduction:

anomalib/src/anomalib/models/image/patchcore/lightning_model.py Lines 138 to 142 in ee5d986

Since this is not a bug, I'm moving this to a Q&A. Note, however, that optimizing Patchcore for a more memory-efficient memory bank approach is on our roadmap.
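To make the scaling concrete, here is a rough back-of-the-envelope sketch of why the memory bank, not the batch size, drives the OOM. The function name and the patch/embedding numbers below are illustrative assumptions, not anomalib's API:

```python
# Illustrative only: estimate the size of a Patchcore-style memory bank.
# patches_per_image and embedding_dim are made-up example values.
def memory_bank_size_mb(num_images, patches_per_image=784,
                        embedding_dim=384, sampling_ratio=0.1):
    # Every training image contributes patch embeddings to the bank;
    # coreset subsampling keeps only a fraction of them.
    total_patches = num_images * patches_per_image
    kept = int(total_patches * sampling_ratio)
    # float32 embeddings: 4 bytes per value
    return kept * embedding_dim * 4 / 1024**2

print(memory_bank_size_mb(1000))  # full dataset, 10% coreset -> ~114.8 MB
print(memory_bank_size_mb(250))   # subsampled dataset        -> ~28.7 MB
```

The bank grows linearly with the number of training images, which is why shrinking the dataset or lowering the coreset sampling ratio helps where a smaller batch size does not.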
-
Thanks. So, given my limited GPU resources, and since I think tweaking those parameters won't change much if I want to use my entire dataset, a possible strategy would be to subsample the dataset down to 200-300 images and then test, correct?
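For reference, the subsampling I have in mind would be done on disk before building the Folder datamodule, something like the sketch below (the helper name `subsample_images` is hypothetical, not part of anomalib):

```python
import random
import shutil
from pathlib import Path

def subsample_images(src_dir, dst_dir, n_keep, seed=0):
    """Copy a random subset of n_keep images from src_dir into dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    # Sort first so the shuffle is reproducible for a fixed seed.
    images = sorted(p for p in src.iterdir() if p.is_file())
    random.Random(seed).shuffle(images)
    for p in images[:n_keep]:
        shutil.copy2(p, dst / p.name)
    return min(n_keep, len(images))
```

The reduced directory can then be passed as `root`/`normal_dir` to the datamodule in place of the full dataset.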
-
Hi,
We recently did some memory optimisation for Patchcore:
#2813
This should reduce the chance of OOM significantly.