Some doubts about backpropagation in PointNet++ while solving it manually. #8634
Unanswered
utkarsh0902311047 asked this question in Q&A
Replies: 2 comments 2 replies
-
Sorry, I have a hard time mapping your comments to the code blocks in the example. Which input has shape …
2 replies
-
Please help with some idea or solution.
0 replies
-
In the PointNet++ model there are three SetAbstraction layers (each containing three convolution, BatchNorm, and ReLU layers), and in the last two the number of input channels is increased by 3 because the point coordinates are concatenated to the features.
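For concreteness, here is a minimal sketch of the shared MLP inside the third SetAbstraction layer. The widths 259 → 256 → 512 → 1024 follow a common PointNet++ SSG configuration and are an assumption, not taken from the question:

```python
import torch
import torch.nn as nn

# Hypothetical shared MLP of the third SetAbstraction layer.
# 259 input channels = 256 features + 3 concatenated coordinates.
mlp = nn.Sequential(
    nn.Conv2d(259, 256, kernel_size=1), nn.BatchNorm2d(256), nn.ReLU(),
    nn.Conv2d(256, 512, kernel_size=1), nn.BatchNorm2d(512), nn.ReLU(),
    nn.Conv2d(512, 1024, kernel_size=1), nn.BatchNorm2d(1024), nn.ReLU(),
)

x = torch.randn(8, 259, 128, 1)  # (BatchSize, 259, 128 points, 1)
print(mlp(x).shape)              # torch.Size([8, 1024, 128, 1])
```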
I am trying to do backpropagation manually to understand how it actually trains.
I am stuck at the first convolutional layer of the third SetAbstraction layer:
- The gradient arriving from backpropagation has shape (BatchSize, 256, 128, 1).
- The input to this convolutional layer is the output of the second SetAbstraction layer after the max operation and after the channels are increased by 3, so it has shape (BatchSize, 259, 128, 1).
- The weights of this convolutional layer have shape (256, 259, 1, 1).

When I compute the weight gradients of this convolutional layer, they come out correctly with shape (256, 259, 1, 1), and the input gradients come out with shape (BatchSize, 259, 128, 1). However, the output of the third ReLU of the second SetAbstraction layer has shape (BatchSize, 256, 64, 128), and the max operation reduces it to (BatchSize, 256, 128). How should I carry the gradient I calculated back through the max operation and then the ReLU operation, given that its shape is (BatchSize, 259, 128, 1)?
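For what it's worth, these shapes can be checked by writing the 1×1 convolution's backward pass as two matrix products; the tensor names and random values below are purely illustrative:

```python
import torch

B = 8                                # any batch size
x = torch.randn(B, 259, 128, 1)      # input to the first conv of the third SetAbstraction
W = torch.randn(256, 259, 1, 1)      # conv weights
g = torch.randn(B, 256, 128, 1)      # gradient arriving from backpropagation

# A 1x1 convolution is a per-point linear map, so its backward pass is
# two matrix products. Squeeze the singleton dimensions for clarity.
x2 = x.squeeze(-1)                   # (B, 259, 128)
W2 = W.squeeze(-1).squeeze(-1)       # (256, 259)
g2 = g.squeeze(-1)                   # (B, 256, 128)

grad_W = torch.einsum('bon,bin->oi', g2, x2)   # (256, 259): matches the weight shape
grad_x = torch.einsum('oi,bon->bin', W2, g2)   # (B, 259, 128): matches the input shape

print(grad_W.shape, grad_x.shape)
```

Note that the input gradient necessarily has the shape of the convolution's input, (BatchSize, 259, 128, 1): the 259 channels only exist after the coordinates are concatenated, so this gradient cannot directly match the (BatchSize, 256, 64, 128) tensor of the previous layer.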
Please help me with this step. Thank you.
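For reference, here is one way the gradient could be routed backward, sketched under the assumption that the 3 coordinate channels are concatenated in front of the 256 feature channels (as in common PointNet++ implementations; if your code concatenates them last, slice `[:, :256]` and `[:, 256:]` instead): first undo the concatenation, then scatter through the max, then mask through the ReLU.

```python
import torch

B = 8
relu_out = torch.randn(B, 256, 64, 128).clamp(min=0)  # stand-in for the third ReLU output of the second SetAbstraction
pooled, argmax = relu_out.max(dim=2)                   # (B, 256, 128): max over the 64 samples per group

grad_in = torch.randn(B, 259, 128, 1)                  # gradient w.r.t. the conv input, as computed above

# 1. Undo the concatenation (coordinate channels assumed first).
grad_xyz  = grad_in[:, :3].squeeze(-1)                 # (B, 3, 128): belongs to the coordinates
grad_feat = grad_in[:, 3:].squeeze(-1)                 # (B, 256, 128): flows back through the max

# 2. Backward through max: only the argmax position inside each group of
#    64 samples receives gradient; every other position gets zero.
grad_relu = torch.zeros_like(relu_out)                 # (B, 256, 64, 128)
grad_relu.scatter_(2, argmax.unsqueeze(2), grad_feat.unsqueeze(2))

# 3. Backward through ReLU: zero the gradient wherever the activation was zero.
grad_pre_relu = grad_relu * (relu_out > 0).float()
print(grad_pre_relu.shape)                             # torch.Size([8, 256, 64, 128])
```

The max operation is only an index selection, so its backward pass routes each gradient entry to the sample that attained the maximum, which restores the (BatchSize, 256, 64, 128) shape of the pre-pooling tensor.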