Sliding window inference for large 3D volumes? How to fit into memory? #3497
-
Hi, I want to run sliding window inference on big 3D microscopy images. My volumes have sizes of approximately the following (there is a bit of variance):
Obviously I cannot fit such a large image on the GPU at once, so I thought sliding window inference would be a good idea. Here is my current attempt, inspired by the 3D spleen tutorial:
I set the `batch_size` to 1 for these experiments. The network is trained on
The code crashes with:
From my naive understanding, sliding window inference is meant exactly for fitting such huge cases into memory? But as far as I can tell, the code is bound to crash as soon as the whole input tensor is sent to the CUDA device. How should I proceed? Obviously, I could slice my images manually into "digestible" patches with overlap, save the network outputs, and then reassemble them based on my slicing grid. But such a sliding window inference of sliding window inferences seems counterintuitive to me. Thanks for your help!
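For context, a setup in the spirit of the 3D spleen tutorial might look roughly like the sketch below; the UNet configuration, volume shape, and `roi_size` are placeholders rather than the original code, and everything is sent to the one CUDA device:

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet

device = torch.device("cuda")

# Placeholder network; the actual architecture/weights are not shown in this thread.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
).to(device)
model.eval()

# Placeholder for one large microscopy volume, shape (batch, channel, D, H, W).
volume = torch.rand(1, 1, 512, 1024, 1024)

with torch.no_grad():
    # Sending the full volume to the GPU means both the input and the stitched
    # output buffers live in GPU memory -- this is where the OOM happens.
    prediction = sliding_window_inference(
        inputs=volume.to(device),
        roi_size=(96, 96, 96),
        sw_batch_size=1,
        predictor=model,
    )
```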
Replies: 2 comments 3 replies
-
Yes, the `SlidingWindowInferer` provides the options `sw_device` and `device`. To address the memory issue I think you can do something like `inputs.to('cpu')`, `model.to('cuda')`, and `SlidingWindowInferer(..., sw_device='cuda', device='cpu', ...)`, so that only the `model(window_data)` part runs on the CUDA device; the rest stays on the CPU.
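To make that concrete, here is a minimal, runnable sketch of the suggested configuration; the UNet, `roi_size`, and tensor shape are illustrative assumptions, and only the `sw_device`/`device` arguments reflect the advice above:

```python
import torch
from monai.inferers import SlidingWindowInferer
from monai.networks.nets import UNet

# Placeholder model; only its device placement matters for this example.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
).to("cuda")
model.eval()

inferer = SlidingWindowInferer(
    roi_size=(96, 96, 96),
    sw_batch_size=1,
    overlap=0.25,
    sw_device="cuda",  # each window is moved to the GPU just for the forward pass
    device="cpu",      # the stitched full-size output is accumulated in host RAM
)

# The full volume stays on the CPU; only (96, 96, 96) windows ever reach the GPU.
volume = torch.rand(1, 1, 512, 1024, 1024)

with torch.no_grad():
    prediction = inferer(volume, model)  # returned tensor lives on the CPU
```

With this split, peak GPU memory depends only on the window size and `sw_batch_size`, while host RAM has to hold the input volume plus the aggregated output maps.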
-
Thanks for the quick response. Okay, if I understand you correctly, I should initialize it like this?
And remove the
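The snippet and the removed call aren't shown above; assuming the question is about dropping an explicit move of the inputs to the GPU, the change would presumably be along these lines (hypothetical reconstruction, not the original code):

```python
inferer = SlidingWindowInferer(
    roi_size=(96, 96, 96),
    sw_batch_size=1,
    sw_device="cuda",
    device="cpu",
)

# inputs = inputs.to("cuda")  # <- presumably the line to remove; keep inputs on the CPU
prediction = inferer(inputs, model)
```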