GlobalAveragePoolGrad node does not appear to be assigned an op_name #6855
-
I have been adding training support for the DNNL endpoint and am currently working on the pooling operators. I ran into an issue with GlobalAveragePoolGrad. The DNNL endpoint relies on knowing the name of a node to make decisions about it, and so far every node we have worked on has been named; for example, MaxPool for the inference op and MaxPoolGrad for the gradient training op. When trying to add GlobalAveragePoolGrad, however, the op_name appears to be left empty, leaving me unable to properly identify the node. Looking at the implementation of the gradient builder (gradient_builder.cc lines 1213-1245), it looks like GlobalAveragePool may still be a work in progress.
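To illustrate the failure mode, here is a minimal standalone C++ sketch of name-based dispatch of the kind described above. The `Node` struct and `IsSupportedByDnnlSketch` function are stand-ins invented for illustration, not onnxruntime's actual classes; in onnxruntime the op name would come from the node's `OpType()` accessor.

```cpp
#include <iostream>
#include <string>

// Hypothetical stand-in for a graph node; not onnxruntime's Node class.
struct Node {
  std::string op_type;
};

// Dispatch purely on the op name, as described above for the ops the
// DNNL endpoint already handles (MaxPool, MaxPoolGrad, ...).
bool IsSupportedByDnnlSketch(const Node& node) {
  return node.op_type == "MaxPool" ||
         node.op_type == "MaxPoolGrad" ||
         node.op_type == "GlobalAveragePoolGrad";
}

int main() {
  // If the gradient builder leaves the op name empty, the check above
  // has nothing to match against and the node cannot be claimed.
  Node grad_node{""};
  std::cout << std::boolalpha
            << IsSupportedByDnnlSketch(grad_node) << '\n';  // prints false
}
```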
-
I think not all gradient operators correspond to a single macro-op. In some cases the gradient is implemented as a combination of individual ops. For example, for MatMul there is no MatMulGrad; it is just lowered to MatMul + Transpose.
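Concretely (this is standard matrix calculus, not a quote from the gradient builder): for $Y = AB$ with upstream gradient $\partial L / \partial Y$, the input gradients are

$$
\frac{\partial L}{\partial A} = \frac{\partial L}{\partial Y}\, B^{\top}, \qquad
\frac{\partial L}{\partial B} = A^{\top}\, \frac{\partial L}{\partial Y},
$$

so the gradient subgraph needs only MatMul and Transpose nodes and no dedicated MatMulGrad kernel.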
-
The GlobalAveragePool GradientBuilder is indeed an incomplete implementation. Are you planning to write an op and kernel for GlobalAveragePoolGrad?
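For reference, the math itself is simple: global average pooling averages each channel's H×W elements, so its gradient distributes the upstream gradient uniformly across those positions. Below is a minimal standalone C++ sketch of such a kernel for NCHW float tensors; the function signature and flat buffer layout are assumptions for illustration, not onnxruntime's kernel API.

```cpp
#include <cstdint>

// dY has shape [N, C, 1, 1] (gradient of the pooled output);
// dX has shape [N, C, H, W]. Since the forward op computes the mean
// over H*W elements per channel, the backward pass is
// dX[n][c][h][w] = dY[n][c] / (H * W).
void GlobalAveragePoolGradSketch(const float* dY, float* dX,
                                 int64_t N, int64_t C,
                                 int64_t H, int64_t W) {
  const float scale = 1.0f / static_cast<float>(H * W);
  for (int64_t n = 0; n < N; ++n) {
    for (int64_t c = 0; c < C; ++c) {
      const float g = dY[n * C + c] * scale;   // per-channel gradient
      float* out = dX + (n * C + c) * H * W;   // start of this channel's plane
      for (int64_t i = 0; i < H * W; ++i) {
        out[i] = g;  // spread the gradient uniformly over the plane
      }
    }
  }
}
```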