Loss not moving on large graphs #3648
Unanswered
davidireland3 asked this question in Q&A
Replies: 1 comment 3 replies
From a GNN perspective, the size of the graph does not really matter. Embeddings produced by …
I am trying out a regression/classification-type task on some subgraphs. The problem is essentially: given a score for a subgraph, try to predict it (for classification I bin the scores into classes, e.g. if the score falls in a certain range it is assigned to that class). The subgraphs are randomly sampled from a main graph and the score is calculated for each one. My GNN is a 3-layer network where each layer first performs a SAGE convolution with ReLU activation, followed by a k-pooling layer. My code has worked well on all the graphs I have tested so far, but now that I have moved on to the largest graph I am interested in, the loss barely moves.
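For reference, a minimal sketch of the architecture described above, assuming PyTorch Geometric's `SAGEConv` and `TopKPooling` (reading "k-pooling" as top-k pooling), a mean readout over the remaining nodes, and a linear head. The hidden size, pooling ratio, and class names are placeholders, not the values actually used:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, TopKPooling, global_mean_pool


class SubgraphScoreNet(torch.nn.Module):
    """Three blocks of SAGEConv -> ReLU -> TopKPooling, then a global
    mean readout and a linear head producing the subgraph-level score."""

    def __init__(self, in_channels, hidden_channels=64, out_channels=1):
        super().__init__()
        self.conv1 = SAGEConv(in_channels, hidden_channels)
        self.pool1 = TopKPooling(hidden_channels, ratio=0.5)
        self.conv2 = SAGEConv(hidden_channels, hidden_channels)
        self.pool2 = TopKPooling(hidden_channels, ratio=0.5)
        self.conv3 = SAGEConv(hidden_channels, hidden_channels)
        self.pool3 = TopKPooling(hidden_channels, ratio=0.5)
        self.lin = torch.nn.Linear(hidden_channels, out_channels)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool1(x, edge_index, batch=batch)
        x = F.relu(self.conv2(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool2(x, edge_index, batch=batch)
        x = F.relu(self.conv3(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool3(x, edge_index, batch=batch)
        x = global_mean_pool(x, batch)   # one embedding per subgraph
        return self.lin(x)               # predicted score (or class logits)
```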
I am confused by this, as the subgraphs for this largest graph are not orders of magnitude bigger than the next largest graph's subgraphs: the average number of nodes per subgraph in the former is ~77k with ~200k edges, whilst the latter had an average of ~23k nodes and ~90k edges. As I said, my code and training worked fine for the latter, but when testing on the larger graph the loss barely moves at all.
I have tried adjusting the number of hidden units, adding/removing a layer, and playing with the learning rate (the loss seems to move more with a higher LR; the highest, and most successful, I've tried is 1). I've tried Adam and SGD as optimisers, but I am now running out of ideas as to what could be causing this. The subgraph generation procedure and score assignment have not changed, so I am starting to be at a loss as to what is causing this issue.
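For concreteness, a minimal training-loop sketch of the kind of sweep described above, using the model sketch from earlier. The Adam/SGD choice and learning rate mirror what was tried; `train_dataset`, `num_features`, the batch size, and the MSE loss for the regression variant are illustrative assumptions, not taken from the original code:

```python
import torch
from torch_geometric.loader import DataLoader

# `train_dataset` and `num_features` are assumed to exist; swap MSELoss for
# CrossEntropyLoss (with a matching output dimension) for the binned-class variant.
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
model = SubgraphScoreNet(in_channels=num_features)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # or torch.optim.SGD(model.parameters(), lr=...)
criterion = torch.nn.MSELoss()

for epoch in range(100):
    total_loss = 0.0
    for data in train_loader:
        optimizer.zero_grad()
        out = model(data.x, data.edge_index, data.batch)
        loss = criterion(out.squeeze(-1), data.y.float())
        loss.backward()
        optimizer.step()
        total_loss += float(loss)
    print(f'epoch {epoch:03d}  avg loss {total_loss / len(train_loader):.4f}')
```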
Has anyone else had issues like this when dealing with large graphs? It seems like the only thing that has changed is the size of the graphs being trained on, so I am wondering if that is the cause...
If anyone has any suggestions on what to try, I'd be very grateful!