Replies: 1 comment
-
Hi @ahxmeds, you can use … Hope it can help you, thanks!
-
Hi all,

Is there a way to set up multimodal training in a dictionary-based format where I am also able to perform transformations such as random cropping of images (for example `RandCropByPosNegLabeld`)? I have PET (PT) and CT images with binary ground-truth masks (GT). I want to concatenate the cropped portions (of `spatial_size`, say `(96, 96, 96)`) of PT and CT along `axis=0` (the channel dimension), so the final input to the network must have shape `(B, 2, 96, 96, 96)`, where `B` is the batch size. From the tutorials I understand how to concatenate two images along the channel dimension, and I also understand how to perform random cropping, but I can't seem to find any way to do both together.

I am setting up my dictionary of filenames like this:
And after that, I am defining the `train_transform` in this way:

This technically gives `num_samples=4` cropped regions for each (CT, PT, GT) triplet. Is there a way to concatenate every individual cropped CT and PT portion from this list of four regions within the definition of `train_transforms` itself? I would really appreciate it if someone could give me some pointers in the right direction. Thank you so much!