Replies: 4 comments
-
Hi @Sebagam, could you please share your full test program for further analysis? Thanks.
-
The input to our pre-processing transforms is
-
It's [batch size, channels, dim1, dim2, dim3], right?
-
Nifti images don't have a batch component; they have the shape given by their header, which you can check with:

    loader = LoadImage()
    for filename in list_of_filenames:  # train, validation and test data
        im, meta = loader(filename)
        print(meta["dim"])

It's important to know the variation in shape of your data in order to choose your transforms accordingly. Cf. the info on NIfTI dimensions: https://nifti.nimh.nih.gov/nifti-1/documentation/nifti1fields/nifti1fields_pages/dim.html/document_view
-
Hi all,
I'm using two different NIfTI file datasets and I apply the same transform to both:
Compose([AsChannelFirst(), Resize(spatial_size = [250, 250, 170]), ScaleIntensity(), EnsureType()])
For one group of files I get this tensor:
<class 'torch.Tensor'> torch.Size([2, 1, 250, 250, 170]) tensor([1, 1])
While for the other, I get:
<class 'torch.Tensor'> torch.Size([1, 160, 250, 250, 170]) tensor([1])
What is the issue with the output from the second group, and how can I fix it?
Thanks
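For reference, a self-contained sketch of the pipeline above applied to a single file, assuming a MONAI version that still provides AsChannelFirst and using a placeholder filename; printing the shape before and after the transform shows where the channel count comes from:

    from monai.transforms import (
        AsChannelFirst,
        Compose,
        EnsureType,
        LoadImage,
        Resize,
        ScaleIntensity,
    )

    transform = Compose([
        AsChannelFirst(),
        Resize(spatial_size=[250, 250, 170]),
        ScaleIntensity(),
        EnsureType(),
    ])

    loader = LoadImage()                  # returns (image, metadata)
    im, meta = loader("example.nii.gz")   # placeholder path
    print("header dim:", meta["dim"], "loaded shape:", im.shape)

    # AsChannelFirst moves the last axis into the channel position, so the size
    # of the file's last axis (e.g. 1 vs 160) becomes the channel count before
    # the DataLoader adds the batch dimension.
    out = transform(im)
    print("transformed shape:", out.shape)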