Dear MONAI team,

First of all, thanks for your wonderful work and support! I am currently working on a PET/CT dataset and am observing a dramatic change in inference/training time when I add random transforms to my pipeline.

Details about my configuration
The PET and CT volumes are registered and have an average size of (400, 400, 128), and there are around 400 training volumes. I am using an A100 NVIDIA GPU.

My training transforms pipeline is as follows:
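A minimal sketch of such a pipeline, using dictionary transforms over hypothetical "pet", "ct", and "label" keys; the probabilities and ranges below are illustrative, not the exact values:

```python
import numpy as np
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    RandAffined,
    RandFlipd,
    RandRotated,
)

keys = ["pet", "ct", "label"]  # hypothetical keys, for illustration only
train_transforms = Compose([
    LoadImaged(keys=keys),
    EnsureChannelFirstd(keys=keys),
    # Deterministic transforms above this point can be cached by CacheDataset;
    # the random transforms below run on the CPU for every sample, every epoch.
    RandFlipd(keys=keys, prob=0.5, spatial_axis=0),
    RandRotated(keys=keys, prob=0.5, range_x=np.pi / 12, keep_size=True),
    RandAffined(
        keys=keys,
        prob=0.5,
        translate_range=(10, 10, 5),
        scale_range=(0.1, 0.1, 0.1),
    ),
])
```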
Using these transforms leads to a large time overhead: around 45-75 minutes for a single epoch. However, removing all RandFlip, RandAffine, and RandRotate transforms brings the epoch time down to only 6 minutes. Additionally, I am using a CacheDataset to load the data and a standard Inferer (no sliding window applied), roughly as in the sketch below.
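A minimal sketch of this loading setup, assuming the train_transforms above and placeholder file paths; cache_rate, batch size, and worker counts are illustrative:

```python
from monai.data import CacheDataset, DataLoader

# Hypothetical data list; each entry maps the transform keys to file paths.
train_files = [
    {"pet": "p000_pet.nii.gz", "ct": "p000_ct.nii.gz", "label": "p000_seg.nii.gz"},
    # ... one dict per training volume
]

train_ds = CacheDataset(
    data=train_files,
    transform=train_transforms,
    cache_rate=1.0,  # cache the deterministic part of the pipeline in RAM
    num_workers=4,
)
# Note: CacheDataset only caches results up to the first random transform,
# so the Rand* transforms still execute per sample on every epoch.
train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=4)
```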
I was wondering what the best practices are when using random transforms. Am I using the wrong Dataset class? Is the order of my transforms suboptimal? Or are such inference/training times normal for transforms this heavy and this amount of data? I would gladly read more about this, or would be happy if anyone can point me in the right direction!
Best regards,
Zdravko