Hi @YerePhy, thank you for providing more information. I tried to reproduce your question:

```python
import monai
import torch

net = monai.networks.nets.DynUNet(
    2, in_channels=3, out_channels=3,
    kernel_size=[3, 3, 3, 3], strides=[1, 2, 2, 2],
    upsample_kernel_size=[2, 2, 2, 2], filters=[64, 64, 64, 64],
    deep_supervision=True, deep_supr_num=2)
rand = torch.randn((1, 3, 72, 72))
print('the net outputs shape:', net(rand).shape)
# the net outputs shape: torch.Size([1, 3, 3, 72, 72])
```

Your problem seems to be that the dimensions are different. As the DynUNet docs say, when deep supervision is enabled the forward pass in training mode stacks the main output and the supervision heads along dimension 1, so each head needs to be unbound and passed to the loss separately.
Maybe you can take a look at the following code (it is rough, but it shows the idea):

```python
rand = torch.randn((1, 3, 72, 72))
net_rand = net(rand)
print('the net outputs shape:', net_rand.shape)
rand_unbinds = torch.unbind(net_rand, 1)
print('the net unbinds shape:', [rand_unbind.shape for rand_unbind in rand_unbinds])
criterion = monai.losses.DiceLoss()
# torch.stack keeps the per-head losses on the autograd graph
# (torch.FloatTensor([...]) would detach them and break backprop)
loss = torch.mean(torch.stack(
    [criterion(rand_unbind, torch.zeros_like(rand_unbind)) for rand_unbind in rand_unbinds]))
print('loss value:', loss)
```

Hope this is useful to you.
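As a side note, in practice deep-supervision losses are often combined with decreasing weights rather than a plain mean. Here is a minimal pure-PyTorch sketch (the `deep_supervision_loss` helper and the `1/2**i` weighting are my own illustration, a common convention rather than anything MONAI mandates; MSE stands in for the Dice loss so the snippet needs only `torch`):

```python
import torch

def deep_supervision_loss(stacked_out, target, loss_fn):
    """Combine per-head losses from a stacked deep-supervision output.

    stacked_out: (B, heads, C, H, W); head 0 is assumed to be the
    full-resolution output. The 1/2**i weights (renormalized to sum
    to 1) are a common convention, not a MONAI requirement.
    """
    heads = torch.unbind(stacked_out, dim=1)           # tuple of (B, C, H, W) tensors
    weights = torch.tensor([0.5 ** i for i in range(len(heads))])
    weights = weights / weights.sum()                  # renormalize
    losses = torch.stack([loss_fn(h, target) for h in heads])
    return (weights * losses).sum()                    # stays on the autograd graph

if __name__ == "__main__":
    out = torch.randn(1, 3, 3, 72, 72, requires_grad=True)
    target = torch.zeros(1, 3, 72, 72)
    loss = deep_supervision_loss(out, target, torch.nn.functional.mse_loss)
    loss.backward()                                    # gradients flow back to `out`
    print('weighted loss:', loss.item())
```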
Solution:

monai.networks.nets.DynUNet behaves differently in evaluation mode (i.e. after calling .eval()): in that mode it does not output the supervision heads. So be careful, because you will get different shapes in training_step and validation_step (where .eval() is applied under the hood).
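To see why the shapes differ between training_step and validation_step without loading the full network, here is a toy stand-in module (my own hypothetical `ToyDeepSupervised`, not DynUNet itself) that mimics the same output contract: heads stacked along dim 1 in training mode, a single tensor in eval mode:

```python
import torch
import torch.nn as nn

class ToyDeepSupervised(nn.Module):
    """Toy stand-in mimicking DynUNet's deep-supervision output contract."""

    def __init__(self, channels=3, heads=3):
        super().__init__()
        self.heads = heads
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = self.conv(x)
        if self.training:
            # training mode: main output + supervision heads stacked on dim 1
            return torch.stack([out] * self.heads, dim=1)
        # eval mode: only the main full-resolution output
        return out

net = ToyDeepSupervised()
x = torch.randn(1, 3, 72, 72)
net.train()
print(net(x).shape)   # torch.Size([1, 3, 3, 72, 72])
net.eval()
print(net(x).shape)   # torch.Size([1, 3, 72, 72])
```

This is exactly the shape mismatch you hit if a shared `step` function assumes one layout: the unbind-and-sum loss logic belongs only in `training_step`, while `validation_step` can feed the output to the loss directly.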