Hi, I am trying to train a DeepLab semantic segmentation model in detectron2, but I keep getting CUDA out-of-memory errors. My images are 1024x1024, which is fairly standard for object detection.
My machine has two RTX 2080 Ti GPUs, each with 11 GB of memory.
I don't think 1024x1024 is large, so before I start reducing the image size I wanted to ask whether anything else could be generating these CUDA out-of-memory errors.
The configuration is the same as in detectron2/projects/DeepLab/ except for:
```python
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 1
```
There is only one class, as I am trying to segment foreground from background.
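For context, here is a minimal sketch of the setup described above. The import path and yaml filename follow detectron2's DeepLab project layout, but check them against your checkout; the `IMS_PER_BATCH = 2` override is the one mentioned in the edit below.

```python
# Hedged sketch, assuming detectron2's DeepLab project is on the path.
from detectron2.config import get_cfg
from detectron2.projects.deeplab import add_deeplab_config

cfg = get_cfg()
add_deeplab_config(cfg)
# Yaml path is an assumption based on the project's Cityscapes configs.
cfg.merge_from_file(
    "projects/DeepLab/configs/Cityscapes-SemanticSegmentation/"
    "deeplab_v3_plus_R_103_os16_mg124_poly_90k_bs16.yaml"
)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1     # single foreground class
cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 1
cfg.SOLVER.IMS_PER_BATCH = 2            # smaller batch to fit 2x11 GB
```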
EDIT: actually, with `cfg.SOLVER.IMS_PER_BATCH = 2` I get a different error message:

```
ValueError: pool_kernel_size must be divisible by the shape of inputs. Input size: torch.Size([32, 32]) pool_kernel_size: (32, 64)
```
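The constraint behind that ValueError can be sketched as follows. The ASPP image pooling needs its kernel to tile the feature map exactly; it is an assumption on my part that the default `(32, 64)` kernel comes from Cityscapes crops of 512x1024 at output stride 16 (512/16 = 32, 1024/16 = 64), which would suggest adjusting `cfg.INPUT.CROP.SIZE` for square 1024x1024 inputs rather than shrinking the images.

```python
# Sketch of the divisibility check the error enforces; the numbers
# below are taken directly from the error message above.
def divides_exactly(input_size, pool_kernel_size):
    """True if the pooling kernel tiles the feature map exactly."""
    return all(i % k == 0 for i, k in zip(input_size, pool_kernel_size))

print(divides_exactly((32, 32), (32, 64)))  # the failing case: False
print(divides_exactly((32, 32), (32, 32)))  # a kernel that fits: True
```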