Slow training on a single GPU #25

Description

@guanamusic

Huge thanks for the implementation! I have a question about the single-GPU training time you mentioned.
I ran the same training procedure with batch size 96 on an RTX 2080 Ti, but it took much longer than the time you reported (12 hrs to reach ~10k training iterations).
I have no idea what is causing this. Could you describe your training environment precisely?
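To compare setups more precisely than wall-clock estimates allow, it may help to report measured throughput (iterations per second). A minimal sketch using only the standard library, where `train_step` is a hypothetical stand-in for one real training iteration:

```python
import time

def train_step():
    # Hypothetical stand-in for one real training iteration
    # (forward pass, backward pass, optimizer step).
    pass

def measure_throughput(n_iters=100):
    """Time n_iters calls to train_step and return iterations per second."""
    start = time.perf_counter()
    for _ in range(n_iters):
        train_step()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

if __name__ == "__main__":
    print(f"{measure_throughput():.1f} iterations/sec")
```

Reporting this number for both machines would make it clear whether the gap comes from per-iteration cost or from something else (e.g. data loading stalls).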

My working environment is listed below.
Docker environment with
CUDA 10.1
cuDNN v7
ubuntu 18.04
python 3.8
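When comparing environments it can also help to print the exact interpreter and OS versions programmatically rather than from memory. A stdlib-only sketch (framework-specific details such as CUDA/cuDNN versions would come from the training framework, e.g. `torch.version.cuda` if this repo uses PyTorch, which is an assumption here):

```python
import platform

# Report interpreter and OS details that matter when comparing training runs.
# CUDA/cuDNN versions would normally be queried from the framework itself
# (assumed, not shown, since the repo's framework is not stated here).
print("python :", platform.python_version())
print("os     :", platform.platform())
```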
