Thank you for this excellent library. I have been using MONAI for both 2D and 3D medical image segmentation, and both tasks have been successful. For one project I performed 2D segmentation where the original image size was 1164×873. To fit the images into the network, I used the `Resized` transform to resize them to 896×896, then trained the model and predicted the test images successfully. Now I need to resize the predicted masks back to their original shape for further processing steps. Is there an inverse transform for 2D images, similar to `transforms.Invertd` for 3D volumes, that would let me map the predictions back onto the original images? Alternatively, is there another way to restore the images to their original shape? Note that I tried OpenCV's `resize` function, but unfortunately the prediction and input images do not overlap correctly. I would appreciate your help and time. Here is my transform function for test data:
Here is the prediction and the saving of the predicted masks:
Replies: 1 comment
Hi @sepidhk, I think you can do the same thing as you do with 3D data. Most transforms make no distinction between 2D and 3D data. For more details, you could refer to this tutorial, which uses a 2D dataset:
https://github.com/Project-MONAI/tutorials/blob/main/modules/inverse_transforms_and_test_time_augmentations.ipynb
Hope it can help you, thanks!
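To add a bit of intuition: `Invertd` works by replaying the recorded transform metadata in reverse, so for a chain containing only `Resized` the inverse amounts to resizing back to the original spatial size with an appropriate interpolation mode (nearest for label masks, which is where a default `cv2.resize` on a mask can introduce misalignment). A minimal, dependency-free sketch of that round trip, using the sizes from the question (873×1164 original, 896×896 network input) and a hypothetical `resize_nearest` helper:

```python
import numpy as np

def resize_nearest(img, out_shape):
    """Nearest-neighbour resize for a 2D array (illustrative, no external deps)."""
    in_h, in_w = img.shape
    out_h, out_w = out_shape
    # Map each output row/column index back to a source index.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows[:, None], cols[None, :]]

# A toy binary mask at the original size from the question (H x W = 873 x 1164).
orig = np.zeros((873, 1164), dtype=np.uint8)
orig[100:300, 200:500] = 1

net_input = resize_nearest(orig, (896, 896))        # forward, like Resized
restored = resize_nearest(net_input, orig.shape)    # inverse: back to original size

assert restored.shape == orig.shape
```

In practice you would not hand-roll this: `Invertd` does the equivalent automatically by reading the original shape from the metadata that the forward transforms record, which is why it composes correctly even when the chain contains more than a single resize.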