Hi all,

I am using two UNets and comparing their GPU memory usage when I just run `output = model(input)`. The first UNet is a custom UNet (`UNetCustom`) with about 1.3 million trainable parameters; its architecture comes from the following paper: https://ieeexplore.ieee.org/document/8581448. The second UNet is the MONAI UNet (`from monai.networks.nets import UNet`), which has about 19.3 million trainable parameters.
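For reference, a minimal sketch of how a 3D MONAI UNet might be instantiated; the `channels`, `strides`, and `num_res_units` values below are illustrative assumptions, not the configuration from the original script:

```python
import torch
from monai.networks.nets import UNet

# Illustrative configuration only; the actual channels/strides being compared
# are assumptions here. Note that strides=(2, 2, 2, 2) makes the very first
# convolution downsample the input volume.
unet_monai = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=1,
    channels=(32, 64, 128, 256, 512),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)

n_params = sum(p.numel() for p in unet_monai.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params / 1e6:.1f} M")
```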
I am using an NVIDIA V100 GPU with 16 GiB of memory. Even though `unet_monai` has many more parameters than `unet_custom`, a forward pass with an input tensor of size `[1, 1, 256, 256, 256]` takes close to 15.5 GiB with `unet_custom` but only 9-10 GiB with `unet_monai`. I am currently just testing this custom UNet and would like it to handle inputs up to size `[1, 1, 256, 256, 256]`.
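A minimal sketch of how such a measurement can be made with PyTorch's peak-memory counters (the helper name and the random input are assumptions, not from the original script):

```python
import torch

def peak_forward_memory_gib(model: torch.nn.Module, input_shape) -> float:
    """Run one forward pass on the GPU and return peak allocated memory in GiB."""
    device = torch.device("cuda")
    model = model.to(device)
    x = torch.randn(*input_shape, device=device)

    torch.cuda.reset_peak_memory_stats(device)
    output = model(x)  # forward only; activations are kept for a later backward
    torch.cuda.synchronize(device)
    return torch.cuda.max_memory_allocated(device) / 2**30

# e.g. peak_forward_memory_gib(unet_custom, (1, 1, 256, 256, 256))
```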
What's the difference between the MONAI implementation of UNet and my custom implementation? Does anyone have any insights into where this difference in memory usage could be coming from?

Thanks in advance!
Replies: 1 comment

Hi @ahxmeds, thanks for your interest here. You will find that your model's "Forward/backward pass size" is much larger than MONAI's. The main reason is that MONAI downsamples in the first conv, so almost all of its activations live at reduced resolution. Hope it helps, thanks!
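The quoted "Forward/backward pass size" matches the line printed by the `torchinfo` package's `summary`; a sketch (assuming `torchinfo` is installed, and reusing `unet_custom` and `unet_monai` from above) to compare the two models:

```python
from torchinfo import summary

# The "Forward/backward pass size (MB)" line in each summary estimates
# activation memory, which dominates here rather than the parameter count.
for name, model in [("unet_custom", unet_custom), ("unet_monai", unet_monai)]:
    print(name)
    print(summary(model, input_size=(1, 1, 256, 256, 256), depth=2, verbose=0))
```

For intuition: a single float32 feature map with 32 channels at the full 256x256x256 resolution occupies 32 * 256^3 * 4 B = 2 GiB, while 1.3 million parameters take only about 5 MB, so a network that keeps even a few layers at full resolution can use far more memory than a much larger network that downsamples immediately.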