I have a small 21-node graph for now, which I would eventually extend to about 1000 nodes. Each node has 3 features and each edge has 1 attribute. I am trying to use a type of GNN that can include both node and edge features, and I am thinking of using GENConv following the example ogbn_proteins_deepgcn.py. The following is my graph:

import numpy as np
import torch
from torch_geometric.data import Data

f1 = np.random.randint(10, size=(21))
f2 = np.random.randint(10,20, size=(21))
f3 = np.random.randint(20,30, size=(21))
f_final = np.stack((f1,f2,f3), axis=1)
capital = 2*f1 + f2 - f3
f1_t = torch.from_numpy(f1)
f2_t = torch.from_numpy(f2)
f3_t = torch.from_numpy(f3)
capital_t = torch.from_numpy(capital)
capital_t = capital_t.type(torch.FloatTensor)
x = torch.from_numpy(f_final)
x = x.type(torch.FloatTensor)
edge_index = torch.tensor([[0,1],[1,0],[0,5],[5,0],[0,4],[4,0],[1, 2],
[2, 1],[1,3],[3,1],[1,4],[4,1],[2,7],[7,2],[2,6],[6,2],[2,3],[3,2],[3,6],[6,3],[3,5],[5,3],[4,5],[5,4],
[5,9],[9,5],[6,10],[10,6],[6,8],[8,6],[6,7],[7,6],[6,11],[11,6],[7,12],[12,7],[3,8],[8,3],[8,9],[9,8],
[8,10],[10,8],[8,14],[14,8],[9,13],[13,9],[9,10],[10,9],[10,11],[11,10],[12,4],[4,12],[12,14],[14,12],
[12,13],[13,12],[9,13],[13,9],[13,5],[5,13],[14,4],[4,14],[1,15],[15,1],[4,15],[15,4],
[3,15],[15,3],[2,16],[16,2],[5,16],[16,5],[13,16],[16,13],[17,9],[9,17],[17,12],[12,17],
[14,17],[17,14],[2,18],[18,2],[7,18],[18,7],[11,18],[18,11],[19,11],[11,19],[19,12],[12,19],
[19,10],[10,19],[20,4],[4,20],[20,5],[5,20],[13,20],[20,13]], dtype=torch.long)
edge_attr = torch.tensor([[1],[4],[3],[5],[2],[3],[7],[1],[3],[6],[1],[3],[2],[4],[2],[7],[6],[5],
[11],[7],[3],[7],[3],[4],[12],[2],[4],[5],[3],[1],[2],[3],[4],[7],[6],[15],[3],[5],[4],
[4],[6],[3],[5],[7],[7],[6],[9],[1],[7]],dtype=torch.float)
data = Data(x = x, edge_index = edge_index.t().contiguous(), y = capital_t, edge_attr=edge_attr )
print(data)

The example uses BCEWithLogitsLoss as the loss function. I am wondering whether that is a good loss function for my problem, since I am doing node regression and capital_t above is my y variable; usually in regression, MSELoss is used. Also, what would be a good number of layers to use? Since this is not a very big graph, I assume it doesn't need to be very deep.
For regression, it's best to use MSELoss. The best number of layers is hard to tell; I personally would run a quick hyperparameter search from 1 to 4 layers.