MONAI sliding_window_inference causes "run out of shared memory" issue #8073
dongdongtong asked this question in Q&A · Unanswered
Hi there, I have been using MONAI for a while and I am running into the tough problem described in the title. I am not sure whether it is a potential bug or just expected behavior of MONAI.
My environment is a Docker container with 4 GB of shared memory and a large amount of RAM (~700 GB). My task is hemorrhage segmentation on head NCCT. My data-loading pipeline is very common (LoadImage -> EnsureChannelFirst -> Orientation to RAS -> ResizeWithPadOrCropd -> ScaleIntensity to [0, 1]). The final image size is [416, 416, 32] in [W, H, D] layout.
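In code, the preprocessing looks roughly like this (a minimal sketch; the dictionary keys and exact transform arguments are assumptions on my part, not necessarily the real config):

```python
# Minimal sketch of the preprocessing described above (keys and arguments are assumptions).
from monai.transforms import (
    Compose,
    LoadImaged,
    EnsureChannelFirstd,
    Orientationd,
    ResizeWithPadOrCropd,
    ScaleIntensityd,
)

preprocess = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    Orientationd(keys=["image", "label"], axcodes="RAS"),
    ResizeWithPadOrCropd(keys=["image", "label"], spatial_size=(416, 416, 32)),
    ScaleIntensityd(keys=["image"], minv=0.0, maxv=1.0),
])
```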
In this setting, I can only set the inference batch size to 2; any increase causes the shared-memory error. However, increasing the num_workers of the DataLoader is fine. I searched the internet and found that appending tensors to a Python list can cause a memory leak, which may be related to my problem. For some reasons, I cannot increase the shared memory size.
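For reference, the inference loader is set up roughly like this (the Dataset class and loader arguments are representative, not the exact ones):

```python
# Rough sketch of the inference DataLoader; batch_size=2 is the largest value that runs
# without the shared-memory error, while a larger num_workers alone is fine.
from monai.data import Dataset, DataLoader

val_ds = Dataset(data=val_files, transform=preprocess)  # val_files: list of {"image": ..., "label": ...} dicts
val_loader = DataLoader(val_ds, batch_size=2, num_workers=4, pin_memory=True)
```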
My sliding window settings are as follows:

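Roughly, the call has this shape (roi_size, sw_batch_size, overlap, and mode below are placeholders rather than the exact values; model and device are assumed to be defined elsewhere):

```python
# Representative sliding_window_inference call (parameter values are placeholders).
import torch
from monai.inferers import sliding_window_inference

model.eval()
with torch.no_grad():
    for batch in val_loader:
        images = batch["image"].to(device)
        outputs = sliding_window_inference(
            inputs=images,
            roi_size=(416, 416, 32),  # placeholder ROI size
            sw_batch_size=4,          # placeholder sliding-window batch size
            predictor=model,
            overlap=0.25,
            mode="gaussian",
        )
```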