Replies: 2 comments 1 reply
Your code looks correct to me. What results are you seeing? I vaguely remember that you need to normalize the input features on Yelp to get good performance.
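For reference, a minimal sketch of the feature normalization the reply suggests, written in plain PyTorch. Standardizing each feature column (zero mean, unit variance) is one common choice; `data.x` is assumed to be the node-feature matrix of a PyG `Data` object, and the exact normalization used in practice may differ.

```python
import torch

def normalize_features(x: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Standardize each feature column to zero mean and unit variance."""
    mean = x.mean(dim=0, keepdim=True)
    std = x.std(dim=0, keepdim=True)
    return (x - mean) / (std + eps)

# Tiny example with hand-written features; in practice this would be data.x.
x = torch.tensor([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
x_norm = normalize_features(x)
print(x_norm.mean(dim=0))  # ~0 per column
print(x_norm.std(dim=0))   # ~1 per column
```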
Hello Matthias, I really appreciate your reply. I just used APPNP with sampling for a simple test; here is my full code:
The accuracy is bad (below 40%) even after 100 epochs, but when I comment out the APPNP propagation, leaving only the MLP, performance reaches over 60%, which should be in a reasonable range. Is there anything wrong?
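Since the full code did not come through, here is a hedged sketch of the setup being described: APPNP-style propagation (personalized PageRank smoothing) applied on top of MLP predictions. It is written in plain PyTorch with a dense normalized adjacency so it runs standalone; in the actual code, PyG's `torch_geometric.nn.APPNP` layer plays this role, and the graph, MLP, and hyperparameters here are illustrative stand-ins.

```python
import torch

def appnp_propagate(h: torch.Tensor, adj_norm: torch.Tensor,
                    K: int = 10, alpha: float = 0.1) -> torch.Tensor:
    """Iterate z <- (1 - alpha) * A_hat @ z + alpha * h for K steps."""
    z = h
    for _ in range(K):
        z = (1 - alpha) * adj_norm @ z + alpha * h
    return z

# Tiny example: 3 nodes in a path graph, adjacency with self-loops,
# symmetrically normalized as D^-1/2 (A + I) D^-1/2.
A = torch.tensor([[1., 1., 0.],
                  [1., 1., 1.],
                  [0., 1., 1.]])
deg = A.sum(dim=1)
adj_norm = A / (deg.sqrt().unsqueeze(1) * deg.sqrt().unsqueeze(0))

h = torch.eye(3)  # stand-in for the MLP's output logits
z = appnp_propagate(h, adj_norm)
print(z.shape)  # torch.Size([3, 3])
```

Dropping the `appnp_propagate` call (using `h` directly) corresponds to the MLP-only variant mentioned above.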
Hello,
I am running my code on Yelp, which has multi-label targets, so I made two small modifications to my original code, which was written for standard single-label datasets:
(1) I use a multi-label loss function:

```python
loss_op = torch.nn.BCEWithLogitsLoss()
loss = loss_op(out, y.float())
```

(2) I use micro-F1 instead of accuracy:

```python
from sklearn.metrics import f1_score

y = data.y.to(out.device)
pred = (out > 0).float()
accs = []
for mask in [data.train_mask, data.val_mask, data.test_mask]:
    # f1_score expects CPU arrays; guard against empty predictions
    accs.append(f1_score(y[mask].cpu(), pred[mask].cpu(), average='micro')
                if pred[mask].sum() > 0 else 0)
```

But the results are very bad, and I am wondering if I am missing something needed to run experiments on Yelp. Thanks for the help.
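As a sanity check on the two modifications above: with `BCEWithLogitsLoss`, a raw logit greater than 0 corresponds to a predicted probability above 0.5, so thresholding logits at 0 yields multi-label predictions. A tiny self-contained example with hand-written logits (the values are illustrative, not from Yelp):

```python
import torch
from sklearn.metrics import f1_score

out = torch.tensor([[ 2.0, -1.0],
                    [-0.5,  3.0]])   # raw logits, 2 nodes x 2 labels
y   = torch.tensor([[ 1.0,  0.0],
                    [ 0.0,  1.0]])   # ground-truth multi-label targets

loss = torch.nn.BCEWithLogitsLoss()(out, y)   # multi-label loss on logits
pred = (out > 0).float()                      # same as sigmoid(out) > 0.5
micro_f1 = f1_score(y.numpy(), pred.numpy(), average='micro')
print(micro_f1)  # 1.0 -- predictions match the targets exactly
```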