improve demo/mnist dataProvider speed #713
Conversation
Hi @reyoung
With GPU enabled and the VGG model, the dataProvider was not the bottleneck. The time of TrainBatch is almost equal to that of forwardBackward, and the time of PyDP2.getNextBatchInternal is hidden by the buffer, so caching in RAM does not help for this model. The following stats are from the second 100 batches. With CPU only, the data provider will not be the bottleneck either: the CPU is slower than the GPU for the VGG model, so it stays busy handling forwardBackward. Maybe you should reduce the model complexity if you need more speed.
But I also agree with @reyoung: numpy would be better for handling the input data than handwritten code. :-)
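To illustrate the numpy suggestion above: the raw MNIST idx files can be parsed with a single vectorized read instead of a handwritten per-pixel loop. This is a hypothetical sketch (the function name and normalization are assumptions, not the code in this PR):

```python
import numpy as np

def read_mnist_images(path):
    # MNIST idx3 image format: four big-endian int32 header fields
    # (magic, count, rows, cols), then raw uint8 pixel bytes.
    # Parsing with numpy avoids a Python-level per-pixel loop.
    with open(path, 'rb') as f:
        magic, n, rows, cols = np.frombuffer(f.read(16), dtype='>i4')
        assert magic == 2051, 'not an idx3 image file'
        pixels = np.frombuffer(f.read(), dtype=np.uint8)
    # Flatten each image and scale pixel values to [0, 1].
    return pixels.reshape(n, rows * cols).astype(np.float32) / 255.0
```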
@backyes |
force-pushed from b0945d1 to ff4e046
@reyoung I have used pre-commit to format the code. Please have a look and let me know if there are other problems. Thanks~ |
In response to issue #688, this change rewrites the dataProvider in the MNIST demo to speed up training.
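The idea behind the speedup can be sketched as follows: parse the idx files into numpy arrays once, keep them in a module-level cache, and have the per-epoch generator yield samples from RAM instead of re-reading and re-parsing the files. The names here are illustrative only, not PaddlePaddle's actual dataprovider API:

```python
import numpy as np

# Module-level cache so the files are parsed only on the first pass.
_CACHE = {}

def load_once(images_path, labels_path):
    # Parse both idx files once and keep the resulting arrays in RAM.
    if images_path not in _CACHE:
        with open(images_path, 'rb') as f:
            _, n, rows, cols = np.frombuffer(f.read(16), dtype='>i4')
            pixels = np.frombuffer(f.read(), dtype=np.uint8)
        images = pixels.reshape(n, rows * cols).astype(np.float32) / 255.0
        with open(labels_path, 'rb') as f:
            f.read(8)  # skip the idx1 label header (magic + count)
            labels = np.frombuffer(f.read(), dtype=np.uint8)
        _CACHE[images_path] = (images, labels)
    return _CACHE[images_path]

def process(images_path, labels_path):
    # Per-epoch generator: every pass after the first is served from RAM.
    images, labels = load_once(images_path, labels_path)
    for img, label in zip(images, labels):
        yield img, int(label)
```

A second epoch calls `load_once` again but hits the cache, so the file I/O and parsing cost is paid only once per training run.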