CUDA out of memory when running python tools/train.py on GPU #434
-
Running
python tools/train.py --model_config_path anomalib/models/patchcore/config_cloth.yaml --model patchcore
fails with:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 7.79 GiB total capacity; 5.42 GiB already allocated; 23.50 MiB free; 5.43 GiB reserved in total by PyTorch)
-
try:
-
Still not solved.
-
Running into the same issue even though the train batch size is 1 and the coreset sampling ratio is 0.01 while using PatchCore.
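For reference, those values live in the model config YAML passed on the command line. A rough sketch of the relevant fields is below; the exact names and layout differ between anomalib versions, so treat it as an illustration rather than the actual config_cloth.yaml:

dataset:
  train_batch_size: 1            # images per step; already at the minimum here
  test_batch_size: 1
  image_size: 224                # smaller input images also reduce memory use

model:
  name: patchcore
  backbone: wide_resnet50_2
  coreset_sampling_ratio: 0.01   # fraction of patch features kept in the memory bank

Note that a smaller batch size mainly shrinks the feature-extraction pass; PatchCore still accumulates patch features from every training image before coreset subsampling, which is usually where the memory runs out.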
-
Train batch size 1 means 100% of the data; try batch size 0.1.
-
The default value was 32, so when you say to change the batch size to 0.1, it doesn't make sense. Batch size is usually the number of images that go into the GPU.
-
Ah, I'm talking about the hyperparameter limit_train_batches. |
This will decrease the number of training samples and thus reduce memory consumption. But anyway, I guess 3 GB of RAM is just a bit too small to train PatchCore in a reasonable way with lots of data.
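In anomalib configs the PyTorch Lightning Trainer arguments are exposed under a trainer: section, so limit_train_batches can be set there (or passed to the Trainer directly in code). A minimal sketch, assuming a 0.3.x-style config; field names may differ in other versions:

trainer:
  max_epochs: 1                # PatchCore needs only a single pass over the data
  limit_train_batches: 0.1     # use 10% of the training batches, i.e. fewer images in the memory bank

A float value for limit_train_batches is interpreted by Lightning as a fraction of the training batches per epoch; an integer would instead mean an absolute number of batches.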