what are the things to consider while working on a link prediction model #5367
-
Hi, I am working on a link prediction problem and I am using one of the examples as a reference.
Q1. Since each graph is built from an image and carries its own positional information, the graphs cannot all be passed together as a single graph. Is it correct to use a torch DataLoader to serialize them and train the same network batchwise, summing the losses across graphs and backpropagating once per batch?
Q2. Since there is also a provision for passing edge weights to the GCN layer, what is the optimal range for these weights? Should they be between 0 and 1?
Q3. Will increasing the number of layers improve the model's performance?
Q4. For the decoder part, is it right to use an MLP and train on node pairs as a binary classifier instead of using the pairwise product? I have done this, but I am unable to bring down the false positives considerably.
Any suggestions will be greatly appreciated. @rusty1s please help
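The batching scheme described in Q1 can be sketched as follows. This is a minimal toy sketch: the linear model and random tensors stand in for the actual GNN encoder and per-image graphs. Note that `torch_geometric.loader.DataLoader` can also collate multiple `Data` objects into one disconnected batch graph, which is usually the more idiomatic option in PyG.

```python
import torch

# Toy stand-in for per-image graphs: (node features, targets) pairs.
# In practice each item would be a torch_geometric.data.Data object.
graphs = [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(6)]

model = torch.nn.Linear(8, 1)   # stand-in for the GNN encoder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

batch_size = 3
for start in range(0, len(graphs), batch_size):
    optimizer.zero_grad()
    loss = torch.tensor(0.0)
    # Sum the losses over the graphs in this mini-batch ...
    for x, y in graphs[start:start + batch_size]:
        loss = loss + torch.nn.functional.mse_loss(model(x), y)
    # ... then do a single backward pass and optimizer step.
    loss.backward()
    optimizer.step()
```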
Replies: 1 comment 1 reply
-
Hi @knitemblazor, maybe I can answer some of your questions:
For Q2: The edge weights do not have to lie between 0 and 1. They can take any meaningful value, although in most cases keeping them in the range [0, 1] works better.
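If the raw weights fall outside [0, 1], one simple option is min-max rescaling before passing them to the layer. A small sketch (the raw values are illustrative):

```python
import torch

# Hypothetical raw edge weights, e.g. derived from pixel distances.
raw = torch.tensor([0.5, 3.0, 7.5, 1.0])

# Min-max rescale into [0, 1]; the epsilon guards against a
# zero range when all weights are equal.
w = (raw - raw.min()) / (raw.max() - raw.min() + 1e-8)

# w can then be passed to the layer, e.g.
# conv(x, edge_index, edge_weight=w) for a GCNConv instance.
```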
For Q3: In the literature, the best GCN performance is typically achieved with 2 layers; increasing the depth usually leads to oversmoothing. For background on the oversmoothing issue, you can refer to this blog.
For Q4: You are right. An MLP decoder usually achieves better performance than a simple pairwise product.
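A minimal sketch of such an MLP decoder (layer sizes and the concatenation scheme are illustrative assumptions): it concatenates the two endpoint embeddings of each candidate edge and outputs one raw logit per pair, suitable for `torch.nn.BCEWithLogitsLoss`.

```python
import torch

class MLPDecoder(torch.nn.Module):
    """Scores each candidate node pair with a single logit."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * dim, dim),
            torch.nn.ReLU(),
            torch.nn.Linear(dim, 1),
        )

    def forward(self, z, edge_index):
        # Concatenate the two endpoint embeddings of every candidate edge.
        pair = torch.cat([z[edge_index[0]], z[edge_index[1]]], dim=-1)
        return self.net(pair).squeeze(-1)  # raw logits, one per pair

z = torch.randn(10, 16)                    # node embeddings from the encoder
edge_index = torch.randint(0, 10, (2, 5))  # 5 candidate pairs
logits = MLPDecoder(16)(z, edge_index)     # shape: [5]
```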