-
I am trying to run inference with a 3D UNet model on a very large image. Because the image is so large, I need to use sliding-window inference, but I am not sure how best to use this function: even after reducing the sliding-window patch size to a very small one, I still get an out-of-memory error. (I am using an A100 GPU with 40 GB of memory.) Here is my code to reproduce the issue. Is there anything I am doing wrong?
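(For readers unfamiliar with the technique: sliding-window inference runs the model on overlapping patches and stitches the per-patch predictions back into a full-volume output, averaging where patches overlap. A minimal pure-NumPy sketch, with a dummy `predict` standing in for the UNet; all names here are illustrative, not the original repro code:)

```python
import numpy as np

def sliding_window_inference_3d(volume, patch_size, step, predict):
    """Run `predict` on overlapping patches and average the overlaps.

    volume: (D, H, W) array; predict: patch -> per-voxel score, same shape.
    """
    out = np.zeros_like(volume, dtype=np.float32)     # stitched output buffer
    counts = np.zeros_like(volume, dtype=np.float32)  # how often each voxel was covered
    D, H, W = volume.shape
    pd, ph, pw = patch_size
    for z in range(0, max(D - pd, 0) + 1, step):
        for y in range(0, max(H - ph, 0) + 1, step):
            for x in range(0, max(W - pw, 0) + 1, step):
                patch = volume[z:z + pd, y:y + ph, x:x + pw]
                out[z:z + pd, y:y + ph, x:x + pw] += predict(patch)
                counts[z:z + pd, y:y + ph, x:x + pw] += 1
    return out / np.maximum(counts, 1)  # average overlapping predictions

# Toy usage: identity "model" on a 16^3 volume with 8^3 patches, stride 4.
vol = np.random.rand(16, 16, 16).astype(np.float32)
result = sliding_window_inference_3d(vol, (8, 8, 8), 4, lambda p: p)
```

Note that `out` and `counts` span the *full* volume: shrinking the patch size reduces the per-patch cost but not the size of these buffers.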
The error I got was as follows:
Any suggestions for a workaround? Many thanks!
-
Okay ... just found the solution: setting
device=torch.device('cpu')
will do the job.
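(Some context on why this helps, as I understand it: in MONAI's `sliding_window_inference`, the `device` argument controls where the *stitched full-volume output* is accumulated, while the patches themselves still run on the GPU. That output buffer, plus the overlap/importance map, scales with the whole volume and not with the patch size, which is why shrinking the patch alone does not avoid the OOM. A back-of-envelope with illustrative numbers, not the actual volume from this thread:)

```python
# Size of the full-volume buffers that sliding-window inference must hold,
# independent of the sliding-window patch size (illustrative numbers).
voxels = 1024 * 1024 * 1024   # a hypothetical 1024^3 volume
num_classes = 4               # per-voxel class scores in the output
bytes_per_float = 4           # float32

output_bytes = voxels * num_classes * bytes_per_float  # stitched predictions
count_bytes = voxels * bytes_per_float                 # overlap/importance map

print(f"output buffer: {output_bytes / 2**30:.1f} GiB")  # 16.0 GiB
print(f"count map:     {count_bytes / 2**30:.1f} GiB")   # 4.0 GiB
```

Keeping ~20 GiB of stitching buffers on a 40 GB A100, on top of the model weights and per-patch activations, plausibly tips it over; moving them to host RAM with `device=torch.device('cpu')` leaves the GPU holding only the per-patch work.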