Segmentation of one dataset with 4 different CNN models (UNet, DynUNet, SegResNet, and VNet) #6707
-
Hello, I'm a novice in AI and image segmentation. I defined my model as:

```python
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=3,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
    norm=Norm.BATCH,
).to(device)
loss_function = DiceLoss(include_background=False, to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-4)
```

and used this tutorial here for running the trainer. So far, the results with UNet have been satisfactory! However, I also wanted to train a different network model on the same dataset with the same dataloader, starting with DynUNet. To do that, I changed the model to:

```python
model = DynUNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=3,
    kernel_size=(3, 3, 3),
    strides=(2, 2, 2),
    upsample_kernel_size=(2, 2, 2),
    deep_supervision=False,
    filters=(2, 2, 2),
    dropout=0.2,
    norm_name="instance",
    act_name="leakyrelu",
    deep_supr_num=1,
    res_block=True,
    trans_bias=True,
)
loss_function = DiceLoss(include_background=False, to_onehot_y=True, sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-4)
```

But now I get this error while running the same trainer code:

```
AssertionError: ground truth has different shape (torch.Size([2, 2, 256, 256, 256])) from input (torch.Size([2, 2, 128, 128, 128]))
```
Thanks in advance!
-
Hi @Kiarashdnsh, thanks for your interest here.
In your case, the network downsamples the image one more time than it upsamples, so the output shape no longer matches the label's. You can simply update the strides to `(1, 2, 2)`.
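To see why the shapes end up mismatched, here is a small sketch of the shape arithmetic (the helper function is mine for illustration, not part of the MONAI API). It assumes DynUNet's behavior that `strides[0]` is applied by the input block and only the remaining strides are reversed on the decoder path:

```python
def dynunet_output_size(input_size, strides):
    """Spatial output size of a DynUNet-style encoder/decoder.

    strides[0] is applied by the input block and is never undone by an
    upsampling block; only strides[1:] are reversed on the decoder path.
    """
    size = input_size
    for s in strides:        # encoder: downsample by every stride
        size //= s
    for s in strides[1:]:    # decoder: upsample reverses all but the first
        size *= s
    return size

print(dynunet_output_size(256, (2, 2, 2)))  # 128 -> mismatches the 256 label
print(dynunet_output_size(256, (1, 2, 2)))  # 256 -> matches the label
```

With `strides=(2, 2, 2)` the input is downsampled three times but upsampled only twice, which reproduces the 256-vs-128 mismatch in your error message; a leading stride of 1 removes the extra downsampling.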
And for more details about how to set the parameters of `DynUNet`, you can refer to `MONAI/monai/networks/nets/dynunet.py`, lines 70 to 85 (commit 17c1e3a).
Hope it can help you, thanks!
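Putting the suggestion together, a sketch of the corrected configuration could look like the following. Only `strides` changes relative to the question (plus `upsample_kernel_size`, which per the MONAI docs should equal `strides[1:]` — shortening it here is my assumption); everything else is kept as posted:

```python
# Corrected DynUNet configuration sketch (assumes MONAI's DynUNet).
corrected_kwargs = dict(
    spatial_dims=3,
    in_channels=1,
    out_channels=3,
    kernel_size=(3, 3, 3),
    strides=(1, 2, 2),            # leading 1: the input block no longer downsamples
    upsample_kernel_size=(2, 2),  # one entry per stride after the first (strides[1:])
    deep_supervision=False,
    filters=(2, 2, 2),
    dropout=0.2,
    norm_name="instance",
    act_name="leakyrelu",
    deep_supr_num=1,
    res_block=True,
    trans_bias=True,
)
# model = DynUNet(**corrected_kwargs)  # requires: from monai.networks.nets import DynUNet
print(corrected_kwargs["strides"])
```

With this setup the decoder restores the full input resolution, so the network output and the 256³ labels should have the same spatial shape.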