Hi,
Thanks for the great library! I am very new to PyTorch Geometric and I'm trying to implement the hypergraph neural network architecture described here: https://arxiv.org/pdf/1911.12073v1.pdf
The input hypergraphs consist of:
- C, S, T: three independent sets of nodes
- E_ct: a set of binary edges from C x T
- E_st: a set of 4-ary oriented, labeled ({1, -1}) edges from S x T x T x T x {1, -1}
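For reference, one way to hold such a hypergraph as plain tensors (a sketch only; the names `x_c`, `edge_index_ct`, `edge_index_st`, and `edge_label_st` below are my own placeholders, not from the paper or from PyG) could be:

```python
import torch

num_c, num_s, num_t = 10, 5, 20   # sizes of the three node sets (dummy values)
x_c = torch.randn(num_c, 8)       # C node features
x_s = torch.randn(num_s, 8)       # S node features
x_t = torch.randn(num_t, 8)       # T node features

# E_ct as an ordinary bipartite edge_index of shape [2, num_edges]:
# row 0 indexes into C, row 1 indexes into T.
edge_index_ct = torch.stack([
    torch.randint(0, num_c, (30,)),
    torch.randint(0, num_t, (30,)),
])

# E_st with one column per hyperedge, rows = (s, t1, t2, t3),
# plus a separate {+1, -1} orientation label per hyperedge.
edge_index_st = torch.stack([
    torch.randint(0, num_s, (40,)),
    torch.randint(0, num_t, (40,)),
    torch.randint(0, num_t, (40,)),
    torch.randint(0, num_t, (40,)),
])
edge_label_st = torch.randint(0, 2, (40,)).float() * 2 - 1
```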
The message passing layers are described in section 2.2 of the above paper, but can be summarized as follows:
We define some helper sets to help clarify message propagation direction in the layers defined below:
The node updates at each time step are then defined by:
Where the two aggregators are defined as follows:
- the concatenation of max with mean, i.e. [max, mean]
- the concatenation of (max - min) with mean, i.e. [max - min, mean]
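As a sanity check on the two aggregators, here is a small sketch of how they could be written with torch_scatter (the names `agg_max_mean` and `agg_maxmin_mean` are placeholders of mine):

```python
import torch
from torch_scatter import scatter_max, scatter_min, scatter_mean

# msg: [num_messages, dim]; index[i] is the target node of msg[i].

def agg_max_mean(msg, index, dim_size):
    # [max, mean]: per-target max concatenated with per-target mean.
    mx, _ = scatter_max(msg, index, dim=0, dim_size=dim_size)
    mean = scatter_mean(msg, index, dim=0, dim_size=dim_size)
    return torch.cat([mx, mean], dim=-1)

def agg_maxmin_mean(msg, index, dim_size):
    # [max - min, mean]: per-target (max - min) concatenated with per-target mean.
    mx, _ = scatter_max(msg, index, dim=0, dim_size=dim_size)
    mn, _ = scatter_min(msg, index, dim=0, dim_size=dim_size)
    mean = scatter_mean(msg, index, dim=0, dim_size=dim_size)
    return torch.cat([mx - mn, mean], dim=-1)
```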
Here is my attempt at implementing the C node updates:
In the above, I am assuming x is given as a tuple (x_t, x_c) so that I can use the bipartite nature of this subgraph to only propagate messages from T to C, and not from C to T. This is inspired by:
https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html and #1210.
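For concreteness, a minimal sketch of such a bipartite T -> C layer (only a sketch, not the paper's exact layer: `CConv`, `lin_msg`, and `lin_upd` are made-up names, and the [max, mean] aggregator from above is assumed for this direction) might look like:

```python
import torch
from torch import nn
from torch_scatter import scatter_max, scatter_mean
from torch_geometric.nn import MessagePassing

class CConv(MessagePassing):
    def __init__(self, in_t, in_c, out_channels):
        # the default 'add' reduction is never used; aggregate() below overrides it
        super().__init__(aggr='add', flow='source_to_target')
        self.lin_msg = nn.Linear(in_t, out_channels)
        self.lin_upd = nn.Linear(in_c + 2 * out_channels, out_channels)

    def forward(self, x, edge_index_tc):
        # x = (x_t, x_c); edge_index_tc[0] indexes T (source), edge_index_tc[1] indexes C (target)
        x_t, x_c = x
        aggr = self.propagate(edge_index_tc, x=(x_t, x_c),
                              size=(x_t.size(0), x_c.size(0)))
        # combine the aggregated T messages with the previous C features
        return self.lin_upd(torch.cat([x_c, aggr], dim=-1))

    def message(self, x_j):
        # x_j holds the T features gathered for each (t, c) edge
        return self.lin_msg(x_j)

    def aggregate(self, inputs, index, dim_size=None):
        # custom [max, mean] aggregation per target C node
        mx, _ = scatter_max(inputs, index, dim=0, dim_size=dim_size)
        mean = scatter_mean(inputs, index, dim=0, dim_size=dim_size)
        return torch.cat([mx, mean], dim=-1)
```

Passing x as the tuple (x_t, x_c) together with the explicit size argument is what restricts propagation to the T -> C direction.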
Am I on the right track here?
Also, how would I go about implementing the S and T update layers? Both rely on hyperedge aggregations. I had a look at how HypergraphConv was implemented here: https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/conv/hypergraph_conv.html#HypergraphConv
but I must admit, I didn't really understand how the message passing with the incidence matrix H worked. How would I translate this to my use case where I have multiple node sets and differing aggregation based on message flow direction?
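One possible direction for the S and T updates, without going through an incidence matrix at all (again only a sketch under my own assumptions, not the paper's exact formulation: `SUpdate`, `msg_mlp`, and `upd_mlp` are made-up names), is to store each E_st hyperedge as a column of node indices, build one message per hyperedge by plain indexing, and scatter it back to whichever node set is being updated:

```python
import torch
from torch import nn
from torch_scatter import scatter_max, scatter_mean

class SUpdate(nn.Module):
    """Aggregates 4-ary E_st hyperedge messages into the S nodes."""
    def __init__(self, dim):
        super().__init__()
        self.msg_mlp = nn.Linear(3 * dim + 1, dim)   # message from (t1, t2, t3, label)
        self.upd_mlp = nn.Linear(3 * dim, dim)       # combine x_s with [max, mean]

    def forward(self, x_s, x_t, edge_index_st, edge_label_st):
        # edge_index_st: [4, num_edges] with rows (s, t1, t2, t3)
        # edge_label_st: [num_edges] float tensor holding the {+1, -1} labels
        s, t1, t2, t3 = edge_index_st
        # one message per hyperedge, built from the three T endpoints and the label
        msg = self.msg_mlp(torch.cat(
            [x_t[t1], x_t[t2], x_t[t3], edge_label_st.view(-1, 1)], dim=-1))
        # [max, mean]-style aggregation of hyperedge messages per S node
        mx, _ = scatter_max(msg, s, dim=0, dim_size=x_s.size(0))
        mean = scatter_mean(msg, s, dim=0, dim_size=x_s.size(0))
        return self.upd_mlp(torch.cat([x_s, mx, mean], dim=-1))
```

A T update over E_st would follow the same pattern, just scattering over one of the t1/t2/t3 rows instead of s.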
Thanks a lot! :)