Error trying to run inference on numpy data without ever converting to nifti #6564
-
Hi, I have a trained RetinaNet3D MONAI model and am trying to set up an inference pipeline in which I never actually have to create a nifti file. I start with my data in a 3D numpy array, so in theory this should be possible. In fact, I am able to get the correct image tensor shape and values by doing this:
where data_zoom is my starting 3D numpy array. I then run this for inference:
but unfortunately get this error:
Note that my code works and runs inference properly if I use the nifti file and DataLoader module instead, as is done in the tutorial. The following code works properly:
Fascinatingly, the input tensors are equivalent despite one causing the error and the other working. For both of the following:
I get the following:
and this comparison returns True:
One difference I do see between these tensors is that the type of data[0] is torch.Tensor and the type of inference_inputs[0] is monai.data.meta_tensor.MetaTensor. Does anyone know what is causing this error, or if there is a better way to go directly from a numpy array to inference without having to convert to nifti format in between? Thanks!
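The original snippets didn't survive in this thread, so here is a minimal, hypothetical sketch of the kind of nifti-free preprocessing being described: go straight from a numpy volume to a batched float tensor. The array `data_zoom` is named in the post; its shape, dtype, and every step below are assumptions for illustration, not the poster's actual code.

```python
import numpy as np
import torch

# Hypothetical stand-in for the poster's 3D volume (shape and dtype assumed).
data_zoom = np.random.randint(0, 255, size=(64, 64, 64), dtype=np.uint8)

# Convert numpy -> torch, cast to float32, then add channel and batch dims.
image = torch.from_numpy(data_zoom).float()  # (64, 64, 64), float32
image = image.unsqueeze(0).unsqueeze(0)      # (1, 1, 64, 64, 64)

print(image.shape, image.dtype)
```

No NiftiReader or DataLoader is involved; the tensor can be fed to the model directly (possibly wrapped in a list, depending on the detector's expected input format).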
Replies: 3 comments
-
Hi @AceMcAwesome77, I didn't read through all of it, but my assumption is that a missing batch dimension is the cause of your error. To check, you can simply add a dimension at the front of the data.
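For anyone else hitting a shape mismatch like this, adding a leading batch dimension is a one-liner. The tensor below is a dummy stand-in, not the poster's data:

```python
import torch

volume = torch.zeros(1, 64, 64, 64)  # (channel, D, H, W) -- no batch dim yet
batched = volume.unsqueeze(0)        # (1, 1, 64, 64, 64) -- batch dim added

print(batched.shape)
```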
-
Thanks for the reply - but if that were the case, how could the tensors between my working code and broken code be equal? See the last part about the torch.equal check.
-
I just solved this, actually: the problem was that the tensor in data[0] was dtype uint8 and the working tensor was dtype float32. Even though they are different datatypes, the tensors were showing up as equal because they had the same values. My problem was solved by adding this simple .float() call:
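The original snippet didn't come through, so here is a minimal reconstruction of the kind of cast described. The tensor here is a stand-in for data[0], not the poster's data:

```python
import torch

# uint8 tensor standing in for the problematic data[0] from the post.
raw = torch.arange(8, dtype=torch.uint8)

# The fix described above: cast to float32 before running inference.
fixed = raw.float()

print(raw.dtype, fixed.dtype)  # torch.uint8 torch.float32
```

The values are unchanged by the cast; only the dtype differs, which is why value-wise comparisons can look identical while the model still rejects the uint8 input.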