This is indeed a tricky problem due to the permutation-invariance requirement. In general, if your nodes do not change, i.e., you simply want to learn a new graph structure over the same node set, you can follow a procedure similar to the auto-encoder or link-prediction examples, just training against a different ground-truth adjacency. If you also want to learn/generate new nodes, I think you may need more sophisticated techniques from recent graph-generation proposals, see, e.g., here.
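For the fixed-node-set case, the idea above can be sketched as follows. This is a minimal NumPy sketch under assumed names of my own (`gcn_layer`, `decode`, `bce_loss`), not code from PyTorch Geometric or any other library: encode the input graph with one GCN-style propagation step, decode an adjacency matrix via an inner product, and supervise against the adjacency of a *different* target graph rather than reconstructing the input.

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One GCN-style step: degree-normalized neighborhood averaging."""
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # node degrees
    return np.tanh((adj_hat / deg) @ feats @ weights)

def decode(z):
    """Inner-product decoder: edge probability from node embeddings."""
    return 1.0 / (1.0 + np.exp(-(z @ z.T)))

def bce_loss(pred, target, eps=1e-9):
    """Binary cross-entropy between predicted and ground-truth edges."""
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1 - pred + eps))

rng = np.random.default_rng(0)
n, d, h = 4, 3, 2
# Same four nodes, but two different edge structures:
adj_in  = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)  # input graph
adj_out = np.array([[0,0,1,1],[0,0,0,1],[1,0,0,0],[1,1,0,0]], float)  # target graph
x = rng.normal(size=(n, d))        # node features
w = rng.normal(size=(d, h))        # encoder weights (would be trained)

z = gcn_layer(adj_in, x, w)        # encoder: node embeddings
pred = decode(z)                   # decoder: predicted adjacency
loss = bce_loss(pred, adj_out)     # supervise with the *other* graph's edges
```

In a real setup you would backpropagate this loss through the encoder weights (e.g. with PyTorch autograd); the key point is only that the reconstruction target is the output graph's adjacency, not the input's.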
-
I wish to build an encoder-decoder network that accepts graphs as input and produces graphs as output, where the two can have different topologies. This is similar to machine translation, where the input-language sentence graph can have a different topology than the output-language sentence graph.
Is there any graph neural network architecture that can learn this input-to-output graph transformation end-to-end in a supervised manner? A dataset of such input-output graph pairs is available.
How can such an encoder-decoder network be built? Please note that since the input and output are different, this cannot be an autoencoder.
More details on the original problem at: https://github.com/yogeshhk/MidcurveNN