Using the default parameters and all 300 categories, training feels quite slow even though I am using an AWS EC2 p2.xlarge instance with an NVIDIA K80 GPU.
It is using only about 360 MB of GPU memory, and that number appears stuck: it never goes above or below it (checked via the `nvidia-smi` command).
I measured the time between iterations and it is 5 to 7 seconds; extrapolating from that per-iteration time, the total number of iterations, and 20 epochs, the estimated training time comes out to more than 150 days.
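For reference, this is roughly how I did the back-of-envelope extrapolation. It is only a sketch: `iters_per_epoch` is a hypothetical placeholder here, and the per-iteration time is the midpoint of what I measured, so plug in your own numbers.

```python
# Rough estimate of total training time from per-iteration timing.
# iters_per_epoch is a hypothetical value (dataset size / batch size);
# sec_per_iter is the midpoint of the 5-7 s I measured per iteration.
sec_per_iter = 6
iters_per_epoch = 100_000  # placeholder, not the actual value
epochs = 20

total_seconds = sec_per_iter * iters_per_epoch * epochs
print(f"Estimated training time: {total_seconds / 86400:.0f} days")
```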

