Replies: 2 comments
-
Mh, I see. This is a bit non-trivial to fix. One workaround would be to just wrap the model:

```python
class MyGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gcn = GCN(...)

    def forward(self, x, edge_index, edge_weight):
        return self.gcn(x, edge_index, edge_weight=edge_weight)
```

I need to think of better ways to allow this though.
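For context, a minimal sketch of how such a wrapper could be plugged in, assuming the shuffle-style corruption described in the original question. All names and hyperparameters here are illustrative, not taken from the thread:

```python
# Illustrative sketch only: hidden size and variable names are assumptions,
# and x / edge_index / edge_weight are presumed to be existing tensors.
# MyGCN is the wrapper above with GCN(...) filled in.
import torch
from torch_geometric.nn import DeepGraphInfomax

def corruption(x, edge_index, edge_weight):
    # Shuffle node features, keep graph structure and weights unchanged.
    return x[torch.randperm(x.size(0))], edge_index, edge_weight

model = DeepGraphInfomax(
    hidden_channels=64,
    encoder=MyGCN(),
    summary=lambda z, *args, **kwargs: torch.sigmoid(z.mean(dim=0)),
    corruption=corruption,
)

# Because MyGCN.forward accepts edge_weight positionally, unpacking the
# corruption's (x, edge_index, edge_weight) tuple into it works, even
# though BasicGNN.forward itself only takes edge_weight as a keyword.
pos_z, neg_z, summary = model(x, edge_index, edge_weight)
```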
-
I added support for this via #7648.
-
Hi, I am using GCN as the encoder in DeepGraphInfomax. The forward function in DeepGraphInfomax is as follows:
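Roughly, it does the following (a paraphrased sketch, not the verbatim source; the exact code may differ):

```python
# Paraphrased sketch of DeepGraphInfomax.forward (not verbatim): the
# corruption output is coerced into a tuple and then unpacked
# positionally into the encoder for the negative pass.
def forward(self, *args, **kwargs):
    pos_z = self.encoder(*args, **kwargs)

    cor = self.corruption(*args, **kwargs)
    cor = cor if isinstance(cor, tuple) else (cor, )
    neg_z = self.encoder(*cor)  # everything in cor is passed positionally

    summary = self.summary(pos_z, *args, **kwargs)
    return pos_z, neg_z, summary
```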
Ideally, I pass x, edge_index, and edge_weight to self.encoder to get pos_z, then corrupt the data (specifically, I just shuffle the node features), and pass the corrupted x with the same edge_index and edge_weight to self.encoder again to get neg_z.
The problem is in the argument list of BasicGNN.forward():
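Its leading parameters look roughly like this (paraphrased, not quoted from the source):

```python
from torch import Tensor
from torch_geometric.typing import Adj, OptTensor

# Paraphrased signature: the bare * makes edge_weight and everything
# after it keyword-only, so they cannot be supplied positionally.
def forward(self, x: Tensor, edge_index: Adj, *,
            edge_weight: OptTensor = None,
            edge_attr: OptTensor = None) -> Tensor:
    ...
```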
It seems that the bare '*' marks the end of the positional arguments, so everything after it (including edge_weight) is keyword-only. But the corrupted data returned by self.corruption is packed as a tuple, and that tuple is unpacked positionally into the encoder, so edge_weight ends up as a third positional argument and later triggers an error (a TypeError about too many positional arguments).
I thought about using keyword arguments, but 'cor' would still come back as a tuple and be unpacked positionally anyway...
Does somebody know how to solve this problem? Thanks in advance!
Best,
Zinuo