I didn't get a reasonable speed-up when applying depthwise convolution to VGG16 #8832
Replies: 7 comments
- Can you give me some suggestions? I'm stuck on this. @piiswrong @crazy-cat
- Which version of MXNet did you use?
- mxnet.__version__ = '0.11.0' @edmBernard
- Depthwise convolution support was merged on 16 August (#7393).
- I remember that I installed MXNet after 16 August, and I calculated the theoretical mult-adds as follows:
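The mult-adds calculation referenced above was not preserved in the thread. Below is a hedged reconstruction of that kind of arithmetic for a hypothetical VGG-style 3x3 layer (256 input/output channels, 28x28 output map), not the poster's actual figures:

```python
# Theoretical mult-adds: standard conv vs. depthwise-separable conv.
# Layer shape is a hypothetical VGG-style example, not the poster's numbers.

def standard_conv_multadds(h, w, c_in, c_out, k):
    # every output pixel sums over k*k*c_in inputs, for each of c_out filters
    return h * w * c_out * c_in * k * k

def depthwise_separable_multadds(h, w, c_in, c_out, k):
    # a k*k depthwise conv per input channel, then a 1x1 pointwise conv
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

std = standard_conv_multadds(28, 28, 256, 256, 3)
dws = depthwise_separable_multadds(28, 28, 256, 256, 3)
print(std // dws)  # → 8, consistent with the "about 8x in theory" claim
```

This shows the reduction is roughly `1/c_out + 1/k²` of the original cost, so the ~8x figure is about what theory predicts for a 3x3 layer; the thread's question is why the measured speed doesn't follow.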
- Proposed labels: "Performance", "CUDA", "Question"
- Hi @lawrencewxj @edmBernard, the following paper conducted an experiment on this (see Table 2). Recent cuDNN versions have been improving the performance of group convolutions, so it would be good to verify that MXNet makes use of these recent optimizations. Best,
Description
In theory it should give a speed-up of about 8x, but in practice it actually runs slightly slower. Is there something wrong in my code? @piiswrong @crazy-cat
The original VGG convolution:
After applying depthwise convolution:
The original VGG convolution speed:
After applying depthwise convolution speed:
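The poster's code blocks were not preserved in this thread. As a point of reference, here is a minimal pure-Python sketch (not the poster's MXNet code) of what a depthwise convolution computes: each input channel is filtered by its own kernel, with no mixing across channels. In MXNet terms this corresponds to setting `num_group` equal to the number of channels.

```python
# Hedged illustration of depthwise convolution semantics, in pure Python.
# A standard conv sums over ALL input channels per filter; a depthwise conv
# applies one kernel per channel, independently (MXNet: num_group == channels).

def depthwise_conv2d(x, kernels):
    """x: [C][H][W] input, kernels: [C][k][k]; valid padding, stride 1."""
    c = len(x)
    k = len(kernels[0])
    h_out = len(x[0]) - k + 1
    w_out = len(x[0][0]) - k + 1
    out = []
    for ch in range(c):  # each channel is convolved with its own kernel only
        plane = [[sum(x[ch][i + di][j + dj] * kernels[ch][di][dj]
                      for di in range(k) for dj in range(k))
                  for j in range(w_out)]
                 for i in range(h_out)]
        out.append(plane)
    return out

# 2 channels of a 3x3 input, one 2x2 kernel per channel
x = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
     [[9, 8, 7], [6, 5, 4], [3, 2, 1]]]
k = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]
y = depthwise_conv2d(x, k)
print(y[0])  # → [[6, 8], [12, 14]], channel 0 filtered by kernel 0 only
```

The fewer mult-adds this structure needs only translate into wall-clock speed if the kernel implementation (here, cuDNN on the K80) has an efficient grouped-convolution path, which is what the comments above suggest checking.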
Hardware:
K80
Software:
CUDA 8.0 + cuDNN 5.1