Are both the images and masks/segs transformed identically in the 2d_segmentation tutorials? #3960
-
I am currently experimenting with the torch code on 2D segmentation (from the Project-MONAI/tutorials repository). There is one general part of this code that confuses me a bit, though.
So according to the code, the sequence of transform operations executed on the training images is defined independently from the sequence of transforms executed on the segmentation masks of those images. But doesn't it logically make sense for every training image and its corresponding segmentation mask to be transformed identically before being fed into the UNet, or is that actually what the program already does at …? Has this maybe also something to do with the notorious …? Someone please help me out on this issue, I'd appreciate any response! Thanks in advance!
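For context, the part I mean is roughly the following (simplified from the tutorial, so the exact transform lists and parameters may differ; `images` and `segs` are just placeholder lists of matching file paths):

```python
import glob

from monai.data import ArrayDataset, DataLoader
from monai.transforms import (
    Compose,
    EnsureChannelFirst,
    LoadImage,
    RandRotate90,
    RandSpatialCrop,
    ScaleIntensity,
)

# placeholder path lists for corresponding image/mask files
images = sorted(glob.glob("./data/images/*.png"))
segs = sorted(glob.glob("./data/segs/*.png"))

# one transform chain for the images ...
train_imtrans = Compose([
    LoadImage(image_only=True),
    EnsureChannelFirst(),
    ScaleIntensity(),
    RandSpatialCrop((96, 96), random_size=False),
    RandRotate90(prob=0.5, spatial_axes=(0, 1)),
])
# ... and a separate, independently defined chain for the segmentation masks
train_segtrans = Compose([
    LoadImage(image_only=True),
    EnsureChannelFirst(),
    RandSpatialCrop((96, 96), random_size=False),
    RandRotate90(prob=0.5, spatial_axes=(0, 1)),
])

train_ds = ArrayDataset(images, train_imtrans, segs, train_segtrans)
train_loader = DataLoader(train_ds, batch_size=4, shuffle=True)
```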
Replies: 1 comment
-
Hi @GamblerEd ,

Yes, for all `Randomizable` transforms, the same seed will be used (see here).

Yes, let me take the transform `RandShiftIntensity` for instance. When trying to load data from `ArrayDataset`, the `__getitem__` function will assign the same seed for all transforms (as shown in the link posted above). The line `transform.set_random_state(seed=self._seed)` is used to set `self.R = np.random.RandomState(_seed)`.
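To make that concrete, the idea behind that seeding step can be sketched like this (a simplified illustration, not the actual MONAI source; `PairedArrayDataset` is a made-up name):

```python
import numpy as np

from monai.transforms import Randomizable

MAX_SEED = np.iinfo(np.uint32).max + 1


class PairedArrayDataset:
    """Simplified sketch of the seeding idea in ArrayDataset.__getitem__."""

    def __init__(self, images, img_transform, segs, seg_transform):
        self.images, self.segs = images, segs
        self.img_transform, self.seg_transform = img_transform, seg_transform
        self.R = np.random.RandomState()

    def __getitem__(self, index):
        # draw one fresh seed per sample ...
        seed = self.R.randint(MAX_SEED, dtype="uint32")
        # ... and hand the *same* seed to both transform chains, so the random
        # draws inside them (crop locations, rotations, ...) line up
        for transform in (self.img_transform, self.seg_transform):
            if isinstance(transform, Randomizable):
                transform.set_random_state(seed=seed)
        return self.img_transform(self.images[index]), self.seg_transform(self.segs[index])
```

With the same seed set on both `Compose` chains, the random transforms they contain (taken in order) make the same random draws for a given sample, which is why the random crops and rotations applied to an image and its mask end up matching.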
If we check the source code of `RandShiftIntensity`, we will see its random place is to randomly s…

What we need to check here is to see if …

You can also run the following code to confirm if two …
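For instance, a minimal check along these lines (my own illustrative snippet; the `offsets` value, seed, and array shape are arbitrary) shows that two separately constructed `RandShiftIntensity` instances given the same seed draw the same random offset:

```python
import numpy as np

from monai.transforms import RandShiftIntensity

data = np.arange(12, dtype=np.float32).reshape(1, 3, 4)  # channel-first dummy image

# two independent instances; prob=1.0 so the shift is always applied
t1 = RandShiftIntensity(offsets=10, prob=1.0)
t2 = RandShiftIntensity(offsets=10, prob=1.0)

# seed both identically, as ArrayDataset.__getitem__ does for its transforms
t1.set_random_state(seed=123)
t2.set_random_state(seed=123)

out1, out2 = t1(data), t2(data)
print(np.allclose(np.asarray(out1), np.asarray(out2)))  # expected: True -> same offset drawn
```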