Learning more-resolute adjacency definitions #5770
MichaelHopwood asked this question in Q&A
I have a dense graph which ends up over-aggregating when I use standard GCNs at inference time; specifically, the output embeddings of roughly 90% of the nodes collapse to the global average. I attribute this to the adjacency definition, which I will need to work on as a separate piece.

I was wondering if anyone knows of papers or procedures that derive an improved adjacency definition $A^* = f(A, X)$ given also the node features. The procedure could potentially use node labels as well, i.e. $A^* = f(A, X, y)$, although in my case it would have to be semi-supervised.
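To make the notation concrete, here is a minimal sketch of one possible $f(A, X)$, assuming a PyTorch Geometric-style `edge_index`; the function name, the cosine-similarity criterion, and `k` are illustrative assumptions, not something proposed in the thread:

```python
import torch
import torch.nn.functional as F


def sparsify_by_feature_similarity(edge_index: torch.Tensor, x: torch.Tensor, k: int = 10) -> torch.Tensor:
    """One possible A* = f(A, X): keep, for each node, only the k incoming edges
    whose source features are most cosine-similar to its own features."""
    src, dst = edge_index
    sim = F.cosine_similarity(x[src], x[dst], dim=-1)     # one score per existing edge
    keep = torch.zeros(edge_index.size(1), dtype=torch.bool)
    for node in dst.unique():                              # per-node loop kept simple for clarity, not speed
        edge_ids = (dst == node).nonzero(as_tuple=True)[0]
        top = sim[edge_ids].topk(min(k, edge_ids.numel())).indices
        keep[edge_ids[top]] = True                         # retain the most feature-similar neighbours
    return edge_index[:, keep]
```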
Replies: 1 comment

Attention-based GNNs, such as TransformerConv and GATConv, could alleviate your over-smoothing problem to an extent. I am not familiar with papers about deriving an adjacency matrix, so maybe someone else can help there, but you could check out DynamicEdgeConv, where the graph is constructed dynamically.
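A minimal sketch of these suggestions, assuming PyTorch Geometric; the layer sizes, number of attention heads, and `k` are illustrative choices, and DynamicEdgeConv needs torch-cluster for its k-NN search:

```python
import torch
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import GATConv, DynamicEdgeConv


class AttentionThenDynamic(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int, out_dim: int, k: int = 10):
        super().__init__()
        # Attention learns per-edge weights, softening the effect of a poor adjacency.
        self.gat = GATConv(in_dim, hidden, heads=4, concat=False)
        # DynamicEdgeConv ignores the input adjacency and connects each node to its
        # k nearest neighbours in the current embedding space on every forward pass.
        self.dyn = DynamicEdgeConv(
            Sequential(Linear(2 * hidden, out_dim), ReLU(), Linear(out_dim, out_dim)),
            k=k,
        )

    def forward(self, x, edge_index, batch=None):
        h = self.gat(x, edge_index).relu()
        return self.dyn(h, batch)
```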