Best way to train Node classification on WebKB datasets #6080
Hi @EdisonLeeeee @rusty1s, I am trying to train a simple GCN on the WebKB Cornell dataset for node classification:

```
INFO:root:Dataset: cornell()
Dataset: cornell():
====================
Number of graphs: 1
Number of features: 1703
Number of classes: 5
Data(x=[183, 1703], edge_index=[2, 298], y=[183], train_mask=[183, 10], val_mask=[183, 10], test_mask=[183, 10])
=============================================================
Number of nodes: 183
Number of edges: 298
Average node degree: 1.63
Has isolated nodes: False
Has self-loops: True
Is undirected: False
```

Since the dataset comes with 10 random splits, I am currently optimising my hyper-parameters on the `val_mask`: for each random split, I average over 5 random seeds, each run for 100 epochs, and I later use the `test_mask` to report test accuracy. Is this the right way to go about training on these datasets? Or, even better, what would be the best way to conduct experiments on them? Any suggestions? Thank you for your time!
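For context, the `[183, 10]` mask shapes mean each mask holds one boolean column per random split, so split `i` is selected by indexing column `i`. A minimal sketch of that indexing, using a NumPy array as a stand-in for the boolean mask tensor (the random mask here is illustrative, not the real WebKB split):

```python
import numpy as np

# Stand-in for data.train_mask on Cornell: shape [num_nodes, num_splits],
# one boolean column per pre-defined random split.
rng = np.random.default_rng(0)
num_nodes, num_splits = 183, 10
train_mask = rng.random((num_nodes, num_splits)) < 0.5

# Select the training nodes of split i by indexing that column.
split = 0
train_idx = np.where(train_mask[:, split])[0]

# train_idx is a 1-D array of node indices for this split's training set;
# val_mask[:, split] and test_mask[:, split] are used the same way.
print(train_idx.ndim, train_idx.dtype)
```

In PyG the same indexing works on the torch tensors directly, e.g. `loss = F.cross_entropy(out[data.train_mask[:, split]], data.y[data.train_mask[:, split]])`.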
You might refer to Geom-GCN: Geometric Graph Convolutional Networks and its official code, where the authors report the average test-set performance of all models over the 10 random splits with the same hyper-parameter setting.
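That protocol reduces to: fix one hyper-parameter setting, record the test accuracy on each of the 10 splits, and report the mean (typically with the standard deviation). A minimal sketch of the aggregation step, where the per-split accuracies are placeholder values rather than real results:

```python
import numpy as np

# Hypothetical test accuracies, one per random split, all obtained with
# the same fixed hyper-parameter setting (placeholder numbers).
test_acc = np.array([0.54, 0.57, 0.51, 0.59, 0.55,
                     0.52, 0.58, 0.56, 0.53, 0.57])

# Report mean and standard deviation over the 10 splits.
mean, std = test_acc.mean(), test_acc.std()
print(f"test accuracy: {mean:.3f} +/- {std:.3f}")
```

Averaging over additional random seeds per split (as in the original question) is compatible with this: compute one accuracy per split by averaging its seeds first, then aggregate over splits as above.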