In the PointNet layer, how does the message passing actually work? #8169
Unanswered
utkarsh0902311047 asked this question in Q&A
Replies: 1 comment
-
Yes, both GCN and PointNet refer to what we now call message passing layers: both take the features of a node's neighborhood and aggregate them in a permutation-invariant fashion to update the central node. In the GCN case, you can actually formulate this as a sparse matrix multiplication. In the PointNet layer, message passing is more complex, since we utilize MLPs to transform the features of neighbors depending on the central node representation. So both can be categorized as message passing layers, but they differ in how the message is defined and how messages are aggregated.
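This contrast can be sketched in NumPy on a toy graph. The GCN branch is literally a (normalized) adjacency matrix multiplication; the PointNet branch applies an MLP to each neighbor message and max-aggregates. The message layout `[x_j, x_j - x_i]` and the random weights are illustrative assumptions, not the exact formulation of any particular library layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, 2-dim features, directed edges as (src, dst) pairs.
x = rng.standard_normal((4, 2))
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]

# --- GCN-style layer: aggregation is a (sparse) matrix multiplication ---
A = np.zeros((4, 4))
for s, d in edges:
    A[d, s] = 1.0
A += np.eye(4)                            # self-loops
A_hat = A / A.sum(axis=1, keepdims=True)  # simple mean normalization
W = rng.standard_normal((2, 2))
gcn_out = np.maximum(A_hat @ x @ W, 0.0)  # ReLU(A_hat X W)

# --- PointNet-style layer: MLP per message, then max aggregation ---
def mlp(h, W1, b1):
    return np.maximum(h @ W1 + b1, 0.0)

W1 = rng.standard_normal((4, 3))
b1 = np.zeros(3)

pointnet_out = np.full((4, 3), -np.inf)
for s, d in edges:
    # Message depends on both neighbor and central node: MLP([x_j, x_j - x_i]).
    msg = mlp(np.concatenate([x[s], x[s] - x[d]]), W1, b1)
    pointnet_out[d] = np.maximum(pointnet_out[d], msg)  # 'max' aggregation
```

Note that the GCN update is a fixed linear mixing of neighbor features (one shared `W`), while the PointNet-style update lets the MLP respond nonlinearly to each neighbor/central-node pair before the max reduces over the neighborhood.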
-
Hello,
I am trying to understand whether there is any relation between the message passing of GCN and the PointNet layer.
So far, I understand that the aggregation function is different: GCN uses 'add' and PointNet uses 'max' in the message passing function.
For GCN, I understand that we multiply the adjacency matrix, weight matrix, and feature matrix to get the aggregation result, and then apply ReLU for the update.
So, in PointNet, how do this aggregation and update take place?
What are the similarities and differences with respect to GCN?
Which part of the code executes the input transform and feature transform using T-Net?
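To make the 'add' vs 'max' aggregation difference concrete, here is a tiny worked example (hypothetical message values arriving at one central node from three neighbors):

```python
# Three 2-dim messages arriving at one central node (toy values).
msgs = [[1.0, -2.0], [0.5, 4.0], [2.0, 1.0]]

add_agg = [sum(col) for col in zip(*msgs)]  # GCN-style 'add' (sum per feature)
max_agg = [max(col) for col in zip(*msgs)]  # PointNet-style 'max' (max per feature)

print(add_agg)  # [3.5, 3.0]
print(max_agg)  # [2.0, 4.0]
```

The 'add' result blends all neighbors, while 'max' keeps only the strongest activation per feature channel, which is what makes PointNet robust to the number and ordering of points in the neighborhood.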