CUDA out of memory error while trying to train PatchCore with GPU #2689
Replies: 2 comments
-
To fix the out of memory error, you can reduce the batch size like this:

```python
datamodule = Folder(
    name=...,
    root=...,
    normal_dir="good",
    abnormal_dir="defected",
    train_batch_size=32,
    eval_batch_size=32,
)
```

Also, if you want to export models:

```python
from anomalib.deploy import ExportType

engine.export(model, export_type=ExportType.OPENVINO)
engine.export(model, export_type=ExportType.TORCH)
engine.export(model, export_type=ExportType.ONNX)
```
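Beyond picking a fixed smaller batch size, a common pattern is to retry with a halved batch size whenever an out-of-memory error occurs. The sketch below is generic, not part of anomalib's API: `run_training` stands in for your training call, and `MemoryError` stands in for the framework's OOM exception (with PyTorch on GPU you would catch `torch.cuda.OutOfMemoryError` instead).

```python
# Generic retry pattern: halve the batch size until training fits in memory.
# `run_training` is a placeholder for your own training function; swap
# MemoryError for torch.cuda.OutOfMemoryError when training on GPU.

def train_with_fallback(run_training, batch_size=32, min_batch_size=1):
    """Call run_training(batch_size), halving the batch size on OOM."""
    while batch_size >= min_batch_size:
        try:
            return run_training(batch_size)
        except MemoryError:
            batch_size //= 2  # retry with half the batch size
    raise RuntimeError("Out of memory even at the minimum batch size")
```

Each retry restarts training from scratch, so this is best used as a quick way to find a batch size that fits, after which you can hard-code it in the datamodule.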
-
Hello, with PatchCore, the required memory depends on the number of training images and their resolution. To avoid this error, you can reduce the number of training images or train on CPU instead.
For a quick overview, you can start with the documentation. For a deeper understanding of how PatchCore works, I recommend the original paper.
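To see why the memory footprint scales with the number of training images, note that PatchCore stores a memory bank of patch embeddings extracted from every training image, then coreset-subsamples it. The back-of-the-envelope estimate below uses assumed values loosely based on the paper's setup (a 28×28 patch grid at 224×224 input, 1536-dim embeddings from a WideResNet-50 backbone, float32 storage, 10% coreset ratio); your actual resolution, backbone, and coreset ratio will change the numbers.

```python
def patchcore_memory_bank_mb(num_images, patches_per_image=28 * 28,
                             embed_dim=1536, coreset_ratio=0.1,
                             bytes_per_float=4):
    """Rough size in MB of PatchCore's coreset memory bank.

    All defaults are illustrative assumptions, not anomalib's exact
    configuration: adjust them to match your backbone and image size.
    """
    total_patches = num_images * patches_per_image
    kept = int(total_patches * coreset_ratio)  # patches kept after coreset subsampling
    return kept * embed_dim * bytes_per_float / 2**20
```

Because the estimate is linear in `num_images`, halving the training set roughly halves the memory bank, which is why reducing the number of training images (or the input resolution, which shrinks `patches_per_image`) helps with this error.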
-
I have tried to train PatchCore with GPU but got this error. I wonder how to edit parameters like batch size to reduce it? Where should I read about setting config parameters for training and prediction?

This is the terminal output when the problem occurs:
```
INFO:anomalib.models.image.patchcore.lightning_model:Applying core-set subsampling to get the embedding.
Selecting Coreset Indices.: 100%|███████████████████████████████████████████████████████████████████████████| 22892/22892 [02:28<00:00, 153.75it/s]
Backend qtagg is interactive backend.
Turning interactive mode on. | 0/5 [00:00<?, ?it/s]
```
This is my code