Replies: 1 comment
I think combining this scheme with …
Hi, I am trying to reproduce the results of the paper Provably Powerful Graph Neural Networks for Directed Multigraphs. This paper proposes the following update scheme, which uses reverse message passing in the directed-graph setting:
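Roughly, the idea is that each node is updated from both its incoming and its outgoing (reversed) neighbourhood; my paraphrase of such a scheme (not the paper's exact notation) is:

$$
x_v^{(k)} \;=\; \phi\!\left(x_v^{(k-1)},\; \bigoplus_{(u,v)\in E} \psi\!\left(x_u^{(k-1)}\right),\; \bigoplus_{(v,u)\in E} \psi'\!\left(x_u^{(k-1)}\right)\right)
$$

where $\phi$ is the update function, $\psi, \psi'$ are message functions, and the second aggregation runs over the reversed edges.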
The code provided by the authors can be found here.
My question is about the forward computation of a hetero-model in which, at each layer, nodes of the same type are updated through different edge types. The authors of this paper use a heterograph to represent the directed (transaction) network, and the reverse message passing is done by propagating messages along the edges `('node', 'rev_to', 'node')`. I notice that after defining a homogeneous GNN model, a heterogeneous model is created with `to_hetero(model, te_data.metadata(), aggr='mean')`. The output is then obtained by calling this model on a batch, where `batch` contains node features `x`, edges `('node', 'to', 'node')`, reversed edges `('node', 'rev_to', 'node')`, and edge attributes.
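Concretely, the setup looks roughly like this (a minimal sketch; the choice of `SAGEConv` and the hidden size are placeholders for the authors' actual layers, edge attributes are omitted for brevity, and `te_data`/`batch` are as described above):

```python
import torch
from torch_geometric.nn import SAGEConv, to_hetero

class GNN(torch.nn.Module):
    def __init__(self, hidden_channels):
        super().__init__()
        # Lazy (-1, -1) input sizes so to_hetero can specialise per edge type.
        self.conv1 = SAGEConv((-1, -1), hidden_channels)
        self.conv2 = SAGEConv((-1, -1), hidden_channels)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

model = GNN(hidden_channels=64)
# Replicate each conv for the edge types ('node', 'to', 'node') and
# ('node', 'rev_to', 'node'); per-edge-type outputs are merged by 'mean'.
model = to_hetero(model, te_data.metadata(), aggr='mean')

# Forward pass on a heterogeneous batch.
out = model(batch.x_dict, batch.edge_index_dict)
```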
From PyG's documentation, the forward computational graph of the hetero-model is unclear. To follow the update scheme proposed by the paper, the forward output `x_in`, computed using only the incoming neighbourhood, should be used to update `x` together with `x_out`, computed using only the outgoing neighbourhood, after each `Conv` layer. Therefore, I wonder whether PyG's `to_hetero(model, te_data.metadata(), aggr='mean')` behaves as I expect, or whether it instead only aggregates `x_in` and `x_out` by `mean` at the very end.
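Written out as pseudo-Python, this is the behaviour I am hoping for (my assumption, not something the docs state; `conv_to` and `conv_rev` stand for the per-edge-type copies of a layer that `to_hetero` creates):

```python
# Hoped-for behaviour at EACH layer:
x_in  = conv_to(x, edge_index_to)       # aggregate along ('node', 'to', 'node')
x_out = conv_rev(x, edge_index_rev_to)  # aggregate along ('node', 'rev_to', 'node')
x = (x_in + x_out) / 2                  # aggr='mean' applied per layer
# ...as opposed to applying the mean only once, to the final layer's outputs.
```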
Besides the computational graph of PyG's hetero-model, I wonder whether using a heterograph to enable such separated reverse message passing is a good approach in the first place. Would it be better to use a homogeneous setting and do the reverse message passing manually, as in the sketch below? Thanks for any help and ideas in advance!
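For example, something like the following (a minimal sketch; the `SAGEConv` layers and the concat-plus-linear combination are my own placeholders, and edge attributes are again omitted):

```python
import torch
from torch_geometric.nn import SAGEConv

class BiDirLayer(torch.nn.Module):
    """One layer of manual bidirectional message passing on a homogeneous graph."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv_in = SAGEConv(in_channels, out_channels)   # incoming neighbourhood
        self.conv_out = SAGEConv(in_channels, out_channels)  # outgoing neighbourhood
        self.lin = torch.nn.Linear(2 * out_channels, out_channels)

    def forward(self, x, edge_index):
        edge_index_rev = edge_index.flip(0)       # swap source and target rows
        x_in = self.conv_in(x, edge_index)        # messages along original edges
        x_out = self.conv_out(x, edge_index_rev)  # messages along reversed edges
        # Combine both directions inside the layer, before the next layer runs.
        return self.lin(torch.cat([x_in, x_out], dim=-1)).relu()
```

This makes the per-layer combination of `x_in` and `x_out` explicit, at the cost of wiring it up by hand.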