Dear altruist, I was going through the MoNet implementation in PyG (https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/conv/gmm_conv.html#GMMConv), which implements the paper "Geometric deep learning on graphs and manifolds using mixture model CNNs", where a Gaussian mixture density is used as the kernel/weighting function. I have two questions: one about the simplification of the weighting equation presented in the paper, and another about how that simplified equation is formulated in the PyG implementation.
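For reference, the weighting function from the paper, as I understand it, is

$$ w_k(\mathbf{u}) = \exp\!\left(-\tfrac{1}{2}\,(\mathbf{u}-\boldsymbol{\mu}_k)^{\top}\boldsymbol{\Sigma}_k^{-1}(\mathbf{u}-\boldsymbol{\mu}_k)\right), $$

and the simplification I am asking about seems to restrict $\boldsymbol{\Sigma}_k$ to a diagonal matrix $\operatorname{diag}(\sigma_{k,1}^2,\dots,\sigma_{k,d}^2)$, which turns the quadratic form into a per-dimension sum:

$$ w_k(\mathbf{u}) = \exp\!\left(-\tfrac{1}{2}\sum_{j=1}^{d}\frac{(u_j-\mu_{k,j})^2}{\sigma_{k,j}^2}\right). $$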
I think I am missing something about this simplification process. The first line of the relevant code produces a tensor of shape (E, K, D), and the last line reduces it to (E, K) (a sketch of the lines I mean is included below). In the paper, the mean is a (d x 1) vector and the covariance is a (d x d) matrix for each kernel; in the implementation, however, the covariance is stored as a (d x 1) vector, which cannot be inverted as a matrix. I also do not see why pow() is used in both places, if I am following the simplified expression correctly. Could anyone please explain how these things work here?
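For concreteness, here is a minimal, self-contained sketch of the computation I am asking about. The names (edge_attr, mu, sigma) follow my reading of the linked GMMConv source, but the snippet is my own paraphrase, not the library code itself:

```python
import torch

# Minimal sketch of the weighting step in question (my paraphrase, not the
# actual GMMConv source).
E, K, D = 10, 4, 2                       # edges, kernels, pseudo-coordinate dim
edge_attr = torch.rand(E, D)             # pseudo-coordinates u per edge
mu = torch.rand(K, D)                    # one D-dimensional mean per kernel
sigma = torch.rand(K, D)                 # one D-dimensional std. dev. per kernel
EPS = 1e-15

# First line: (E, 1, D) - (1, K, D) broadcasts to (E, K, D); pow(2) squares
# each per-dimension difference (u_j - mu_kj)^2.
gaussian = -0.5 * (edge_attr.view(E, 1, D) - mu.view(1, K, D)).pow(2)

# sigma.pow(2) gives the diagonal entries of the covariance, so the division
# stands in for multiplying by the inverse covariance of the full quadratic form.
gaussian = gaussian / (EPS + sigma.view(1, K, D).pow(2))

# Last line: summing over D and exponentiating yields the (E, K) weights,
# i.e. exp(-0.5 * sum_j (u_j - mu_kj)^2 / sigma_kj^2) per edge/kernel pair.
weights = torch.exp(gaussian.sum(dim=-1))
print(weights.shape)                     # torch.Size([10, 4])
```

If this reading is right, sigma holds only the diagonal entries of each covariance, so dividing by sigma.pow(2) replaces the matrix inverse and no explicit inversion is needed, while the two pow(2) calls come from squaring the difference and the standard deviation respectively. I would appreciate confirmation (or correction) of this interpretation.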