
Training too slow and not using the full GPU; what's the training time? #2

@shubhank008

Description


Using the default parameters and all 300 categories, training feels quite slow, even though I am running it on an AWS EC2 p2.xlarge instance with an Nvidia K80 GPU.

It is using only about 360 MB of GPU memory and seems stuck at that figure; usage never moves above or below it (checked via the nvidia-smi command).
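For reference, this is the quick sanity check I would run to rule out the model silently falling back to the CPU. It is only a sketch and assumes the project trains with TensorFlow, which I have not confirmed:

```python
# Minimal sketch, assuming a TensorFlow backend (an assumption, not confirmed
# for this repo). Confirms the installed build has CUDA support and that the
# K80 is actually visible to the framework.
import tensorflow as tf

print(tf.test.is_built_with_cuda())   # False would mean a CPU-only build was installed
print(tf.test.gpu_device_name())      # empty string would mean no GPU device is visible
```

If the build is CPU-only or no GPU device shows up, the ~360 MB of memory seen in nvidia-smi could just be another process, and the training itself would be running entirely on the CPU.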

I measured the time between iterations at 5-7 seconds; multiplying that by the total number of iterations and 20 epochs gives an estimate of more than 150 days of training.
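The back-of-envelope calculation I used is below. The iterations-per-epoch value is a placeholder (hypothetical, it depends on the dataset size and batch size in the training script); only the 5-7 seconds per iteration and the 20 epochs are my actual observations:

```python
# Rough estimate of total training time from observed per-iteration speed.
seconds_per_iteration = 6        # observed: 5-7 s between iterations
iterations_per_epoch = 108_000   # hypothetical placeholder; read the real value from the training log
epochs = 20                      # default number of epochs

total_seconds = seconds_per_iteration * iterations_per_epoch * epochs
print(f"estimated training time: {total_seconds / 86_400:.0f} days")
# with these example numbers: ~150 days
```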

(MobaXterm screenshots attached: MobaXterm_XVT7ysE4H5, MobaXterm_QtHda7g7pV)
