
Very slow label smoothing with large input size #93

@Levishery

Description

Thank you very much for your contributions! :)

I'm implementing MALA's network in this pipeline. It saves memory by using convolutions without padding, and can therefore afford a larger input size during training (for example, [64, 268, 268] with batch size 4 on a single GPU); see the sketch below.
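
For concreteness, here is a minimal sketch (not MALA's actual architecture) of how unpadded 3D convolutions work: only the "valid" interior of the volume is computed, so no padded border activations are stored.

```python
import torch
import torch.nn as nn

# Each unpadded 3x3x3 convolution trims 2 voxels per spatial dimension.
valid_block = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=0),  # [64, 268, 268] -> [62, 266, 266]
    nn.Conv3d(8, 8, kernel_size=3, padding=0),  # [62, 266, 266] -> [60, 264, 264]
)

x = torch.randn(4, 1, 64, 268, 268)  # batch size 4, as above
print(valid_block(x).shape)          # torch.Size([4, 8, 60, 264, 264])
```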

However, data loading becomes unaffordable at this input size: about 90% of training time is spent in the data loader. I traced this to SMOOTH, the post-processing of the label that runs after augmentation.
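
As a rough illustration of why this gets expensive (hypothetical smoothing logic, not the pipeline's actual SMOOTH code): if each segment id gets its own binary mask and its own full-volume filtering pass, the cost grows with num_segments × volume_size, which blows up at [64, 268, 268].

```python
import time
import numpy as np
from scipy.ndimage import gaussian_filter

# Fake segmentation label at the issue's input size (synthetic data).
label = np.random.randint(0, 50, size=(64, 268, 268), dtype=np.int64)

start = time.perf_counter()
smoothed = np.zeros_like(label)
for seg_id in np.unique(label):
    if seg_id == 0:  # treat 0 as background
        continue
    # One full-volume Gaussian pass per segment id: O(num_segments * volume).
    mask = gaussian_filter((label == seg_id).astype(np.float32), sigma=1.0)
    smoothed[mask > 0.5] = seg_id
print(f"smoothing one sample took {time.perf_counter() - start:.2f}s")
```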

I wonder if you are aware of this. Would discarding SMOOTH affect training much?

Merry Christmas :)
