Select only 2D slices that contain label from 3D volume #5948
-
Hi,
I am trying to select only the 2D slices that contain a label from a 3D volume, but I encounter the following error:

So I'm guessing I'm doing something wrong. Does anyone have any suggestions on how to achieve this? Thanks!
Replies: 2 comments 18 replies
-
Hi @valecorbetta, I noticed that you used num_samples in both RandCropByPosNegLabeld and RandSpatialCropSamplesd. After your transforms, it will create a nested list that contains 32*96 samples for each image. IMO, you could simply remove RandSpatialCropSamplesd and change the roi_size in RandSpatialCropd to (96, 96, 1). Hope it can help you, thanks!
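The 32*96 figure comes from chaining two sampling transforms, each of which fans one input out into num_samples outputs. A toy plain-Python sketch of that multiplication (the two functions below are hypothetical stand-ins, not MONAI APIs; the counts 32 and 96 are taken from the discussion):

```python
# Hypothetical stand-ins for RandCropByPosNegLabeld / RandSpatialCropSamplesd:
# each "transform" fans one input out into num_samples outputs.
def rand_crop_pos_neg(image, num_samples=32):
    return [f"{image}-crop{i}" for i in range(num_samples)]

def rand_spatial_crop_samples(sample, num_samples=96):
    return [f"{sample}-sub{j}" for j in range(num_samples)]

# Chaining the two fans out multiplicatively: 32 * 96 samples per image.
samples = [s for crop in rand_crop_pos_neg("img")
           for s in rand_spatial_crop_samples(crop)]
print(len(samples))  # 3072
```

Dropping the second sampler and enlarging the crop size, as suggested above, keeps the count at the num_samples of the single remaining transform.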
-
Hi, I am also trying to select only slices that have non-empty labels. When trying the code you proposed, it works perfectly well (i.e. only non-empty label slices are selected):

from monai.transforms import (
    Compose, EnsureChannelFirstd, LoadImaged,
    RandCropByPosNegLabeld, ScaleIntensityRanged, Spacingd,
)

train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
        ScaleIntensityRanged(keys=["image"], a_min=0, a_max=1, b_min=0.0, b_max=1.0, clip=True),
        Spacingd(keys=["image", "label"], pixdim=(0.05, 0.05, 0.05), mode=("bilinear", "nearest")),
        RandCropByPosNegLabeld(
            keys=["image", "label"],
            image_key="image",
            label_key="label",
            spatial_size=(200, 200, 1),
            pos=1,
            neg=0,
            num_samples=1,
        ),
    ]
)

However, when attempting the same thing using a different loading method, as per this tutorial, the non-empty-slice selection no longer works:
import torch
from monai.data import CacheDataset, DataLoader, GridPatchDataset, PatchIterd, ShuffleBuffer
from monai.transforms import (
    Compose, EnsureChannelFirstd, EnsureTyped, LoadImaged,
    RandCropByPosNegLabeld, ScaleIntensityd, SqueezeDimd,
)

# Volume-level transforms for both image and segmentation
train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
        ScaleIntensityd(keys="image"),
        EnsureTyped(keys=["image", "label"]),
    ]
)

# 3D dataset with preprocessing transforms
volume_ds = CacheDataset(data=train_files, transform=train_transforms)

# Volume-to-patch processing: dynamic first two dimensions, one slice at a time
patch_func = PatchIterd(
    keys=["image", "label"], patch_size=(None, None, 1), start_pos=(0, 0, 0)
)
patch_transform = Compose(
    [
        SqueezeDimd(keys=["image", "label"], dim=-1),
        RandCropByPosNegLabeld(
            keys=["image", "label"],
            label_key="label",
            spatial_size=(200, 200),
            pos=1,
            neg=0,
            num_samples=1,
        ),
    ]
)
patch_ds = GridPatchDataset(
    data=volume_ds, patch_iter=patch_func, transform=patch_transform, with_coordinates=False
)
shuffle_ds = ShuffleBuffer(patch_ds, buffer_size=30, seed=0)
train_loader = DataLoader(
    shuffle_ds,  # pass the shuffled dataset, not patch_ds, so shuffling takes effect
    batch_size=config['train_batch_size'],
    num_workers=config['num_workers'],
    pin_memory=torch.cuda.is_available(),
)

I cannot wrap my head around what I am doing wrong in the 2nd scenario. Also, a related question: what are the benefits of going with the second, patch-based approach over the first one?
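For reference, the selection both pipelines are trying to achieve (keeping only slices whose label is non-empty) can be sketched without MONAI at all. A minimal sketch, assuming (H, W, D) NumPy arrays with the slice axis last; the data and array names here are made up:

```python
import numpy as np

# Synthetic volume and label: foreground only on slices 3 and 7 (made-up data).
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 10)).astype(np.float32)
label = np.zeros((64, 64, 10), dtype=np.uint8)
label[20:40, 20:40, 3] = 1
label[10:30, 10:30, 7] = 1

# Keep only the slice indices along the last axis whose label is non-empty.
keep = [z for z in range(label.shape[-1]) if label[..., z].any()]
image_slices = [volume[..., z] for z in keep]
label_slices = [label[..., z] for z in keep]
print(keep)  # [3, 7]
```

Whichever loading route is used, the 2D slices that survive should correspond to this index set.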