Replies: 1 comment 1 reply
-
Sure. The sliding window method can stitch the output on the CPU while the model runs on the GPU. In this case you can set:
model = model.to(device="cuda:0")
val_inputs = val_inputs.to(device="cpu")
val_outputs = sliding_window_inference(val_inputs, cfg.roi_size, cfg.sw_batch_size, model, device="cpu", sw_device="cuda:0")
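A self-contained sketch of that suggestion (the UNet configuration and random input here are placeholders, not taken from this thread; it assumes a recent MONAI whose sliding_window_inference accepts the sw_device and device arguments):

import torch
from monai.networks.nets import UNet
from monai.inferers import sliding_window_inference

# Placeholder network; swap in your trained 3D UNet.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=16,                      # 15 foreground classes + background
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
).to("cuda:0")
model.eval()

val_inputs = torch.rand(1, 1, 512, 512, 256)   # full volume stays on the CPU

with torch.no_grad():
    val_outputs = sliding_window_inference(
        val_inputs,
        roi_size=(160, 160, 64),
        sw_batch_size=1,
        predictor=model,
        sw_device="cuda:0",   # each patch is moved here for the forward pass
        device="cpu",         # stitched output and count buffers stay on the CPU
    )
print(val_outputs.shape)      # torch.Size([1, 16, 512, 512, 256])

Keeping the stitching buffers on the CPU trades some host-device transfer time for a much smaller GPU footprint, since only one 160x160x64 patch lives on the GPU at a time.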
-
The size of my input is 512x512x256, and I trained my 3D UNet with random crops of 160x160x64. My aim is to segment 15 objects, so my output is 16x160x160x64. I find that I can train my network with low GPU memory (4 GB), but when I run inference with sliding window inference, the GPU memory rises to 24 GB and sometimes exceeds the total memory, which confuses me a lot. Part of my code is below:
if cfg.val_amp is True:
    with autocast():
        val_outputs = sliding_window_inference(val_inputs, cfg.roi_size, cfg.sw_batch_size, model)
else:
    val_outputs = sliding_window_inference(val_inputs, cfg.roi_size, cfg.sw_batch_size, model)
where cfg.roi_size = (160, 160, 64) and cfg.sw_batch_size = 1.
Is there any solution to reduce the GPU memory usage?
Thanks in advance.
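For a rough sense of why the memory balloons, here is an illustrative estimate (assuming float32 buffers and MONAI's default of keeping the stitched output on the same device as the input):

# Illustrative estimate of the full-volume buffers that sliding_window_inference
# accumulates on the stitching device for a 16-channel 512x512x256 output.
out_channels = 16
vol_voxels = 512 * 512 * 256
bytes_per_voxel = 4                                         # float32

output_gib = out_channels * vol_voxels * bytes_per_voxel / 1024**3
count_map_gib = vol_voxels * bytes_per_voxel / 1024**3
print(f"stitched output buffer: ~{output_gib:.1f} GiB")     # ~4.0 GiB
print(f"count map:              ~{count_map_gib:.2f} GiB")  # ~0.25 GiB

On top of these come the per-patch UNet activations and any full-volume copies made by post-processing, which is why inference on the whole 512x512x256 volume can dwarf the 4 GB needed to train on 160x160x64 crops; moving the stitching buffers to the CPU, as suggested above, is the usual way to cap the GPU footprint.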