GNN parameter optimization for whole graph classification task #2891
Unanswered · krzysztoffiok asked this question in Q&A
What exactly do you mean by parameter optimization? Do you mean hyperparameter search? In general, a two-layer GNN with a hidden feature size of 64, 128, or 256 works well, combined with a learning rate of 0.01 or 0.001. In addition, batch normalization and dropout after each GNN layer can be critical for achieving good performance/generalization. If you can share some more details, I'm happy to help.
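To make that concrete, here is a minimal sketch in PyTorch Geometric. `GCNConv` is used as a stand-in for whichever GNN layer you prefer, and the dropout rate of 0.5, the mean-pooling readout, and the feature/class counts are placeholder assumptions, not values from the reply:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class TwoLayerGNN(torch.nn.Module):
    """Two-layer GNN for whole-graph classification, with batch
    normalization and dropout after each convolution as suggested above."""
    def __init__(self, in_channels, hidden_channels, num_classes, dropout=0.5):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.bn1 = torch.nn.BatchNorm1d(hidden_channels)
        self.conv2 = GCNConv(hidden_channels, hidden_channels)
        self.bn2 = torch.nn.BatchNorm1d(hidden_channels)
        self.lin = torch.nn.Linear(hidden_channels, num_classes)
        self.dropout = dropout

    def forward(self, x, edge_index, batch):
        # Conv -> batch norm -> ReLU -> dropout, twice:
        x = F.relu(self.bn1(self.conv1(x, edge_index)))
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = F.relu(self.bn2(self.conv2(x, edge_index)))
        x = F.dropout(x, p=self.dropout, training=self.training)
        # Aggregate node embeddings into one embedding per graph:
        x = global_mean_pool(x, batch)
        return self.lin(x)

# Placeholder feature/class counts; substitute your dataset's values:
model = TwoLayerGNN(in_channels=116, hidden_channels=64, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
```

Mean pooling is just one choice of readout; `global_max_pool` or `global_add_pool` are drop-in alternatives worth trying for graph-level tasks.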
7 replies
Hi,
I'm working on whole-graph classification; in particular, fMRI-based classification using data from the Human Connectome Project (HCP). I've built a complete ML pipeline and first tested it on some open non-fMRI datasets to make sure everything works. I then explored task-based fMRI classification (an easier task), and I'm now on the resting-state data. This is challenging because:
Best,
Chris