Replies: 1 comment 1 reply
-
Perhaps you can consider using a lower precision on CPU, for example:
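A minimal sketch of one way to do that, assuming the full volume is kept on CPU in float16 and only each window is sent to the GPU; the network, `roi_size`, and file path below are illustrative placeholders, not values from this thread:

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet
from monai.transforms import EnsureChannelFirst, LoadImage

# Hypothetical network and test volume, standing in for the ones described in the question.
model = UNet(
    spatial_dims=3, in_channels=1, out_channels=14,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
).to("cuda").eval()
image = LoadImage(image_only=True)("case_0001.nii.gz")   # placeholder path
image = EnsureChannelFirst()(image).unsqueeze(0)          # add channel and batch dims

# Keep the large CPU-side volume in half precision so the aggregation buffers are smaller.
image = image.to(dtype=torch.float16)

with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    pred = sliding_window_inference(
        inputs=image,             # full volume stays on CPU
        roi_size=(96, 96, 96),    # illustrative patch size
        sw_batch_size=4,
        predictor=model,
        overlap=0.25,
        sw_device="cuda",         # each window is computed on the GPU
        device="cpu",             # stitched output buffers stay on CPU
    )
```

Since there is plenty of GPU memory here, another option is to pass `device="cuda"` so the stitched output is accumulated on the GPU instead of in CPU RAM.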
-
Hello, MONAI team!
Thanks for the convenience your software has provided so far!
I am developing an algorithm for abdominal organ segmentation on a large dataset, running inside Docker. However, I run into a memory error when I infer a test volume, a 512x512x491 CT volume in NIfTI format, with sliding_window_inference. I have enough GPU memory (48 GB, A6000), but only 28 GB of CPU memory, which is the limit set by the competition organizer. So I want to make full use of the GPU while using as little CPU memory as possible.
I have put both the data and the model onto CUDA with the following code
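(roughly, a minimal sketch with a placeholder network and file path standing in for the real ones):

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet
from monai.transforms import EnsureChannelFirst, LoadImage

# Placeholder network and volume; the real model and path differ.
model = UNet(
    spatial_dims=3, in_channels=1, out_channels=14,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
).to("cuda").eval()
image = LoadImage(image_only=True)("case_0001.nii.gz")
image = EnsureChannelFirst()(image).unsqueeze(0).to("cuda")   # move the whole volume to the GPU

with torch.inference_mode():
    pred = sliding_window_inference(
        inputs=image,
        roi_size=(96, 96, 96),    # illustrative patch size
        sw_batch_size=4,
        predictor=model,
        overlap=0.25,
    )
```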
But the peak CPU memory usage is still more than 28 GB, while GPU memory usage is only around 10 GB.
What should I do next? Looking forward to your help!