Thank you very much for your contributions! :)
I'm implementing MALA's network in this pipeline. It saves memory by using convolutions without padding, so it can afford a larger input size during training (for example, [64, 268, 268] with batch size 4 on a single GPU).
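For context, here is a minimal sketch of what "convolution without padding" means (not MALA's actual architecture): with `padding=0`, every spatial dimension shrinks by `kernel_size - 1` at each layer, so the input crop must be larger than the output and no memory is spent on padded borders.

```python
# Minimal sketch: a single valid (padding=0) 3D convolution, just to show
# how a [64, 268, 268] crop shrinks at every unpadded layer.
import torch
import torch.nn as nn

conv = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=0)
x = torch.randn(1, 1, 64, 268, 268)   # (batch, channel, D, H, W)
print(conv(x).shape)                  # torch.Size([1, 8, 62, 266, 266])
```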
However, data loading becomes prohibitively slow at this input size: about 90% of the training time is spent on data loading. I found that this is caused by SMOOTH, the post-processing applied to the label after augmentation.
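I haven't dug into the exact implementation of SMOOTH, but if it amounts to a per-crop filtering pass over the label volume, a rough timing like the one below (a stand-in using `scipy.ndimage.gaussian_filter`, not the pipeline's actual code) shows how such a step can dominate per-sample loading time at this crop size.

```python
# Rough timing sketch: assumes SMOOTH is roughly a per-crop filter over the
# label volume; swap in the pipeline's actual post-processing to measure it.
import time
import numpy as np
from scipy.ndimage import gaussian_filter

label = (np.random.rand(64, 268, 268) > 0.5).astype(np.float32)

start = time.time()
smoothed = gaussian_filter(label, sigma=2)   # stand-in for the SMOOTH step
print(f"one crop: {time.time() - start:.3f}s")
```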
I wonder if you are aware of this. Would discarding SMOOTH affect training much?
Merry Christmas :)