Replies: 1 comment 1 reply
-
Yes, you can set the learnable weights as the values of the sparse adjacency:
adj_t = adj_t.set_value(self.edge_weight)
out = conv(x, adj_t)
-
Hi, I'm a beginner in PyG and I have some questions about GCNConv.
In my setting, the GCN's edge_weight needs to be learnable, and at the same time the results need to be reproducible.
To ensure reproducibility, I followed issue 3175 and convert the input edge_index of GCNConv into a SparseTensor. To make the edge weights learnable, I followed issue 2033, i.e., defining self.edge_weight as a learnable parameter.
However, I found that when edge_index is a SparseTensor, self.edge_weight is not used (and therefore never updated). This is because the function gcn_norm() in the GCN source code only returns the normalized adj_t when edge_index is a SparseTensor, ignoring the edge_weight argument.
Question:
So is there any way to make edge_weight learnable while keeping the code reproducible (i.e., deterministic) at the same time?
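For the reproducibility half of the question, the usual PyTorch-level settings can be sketched as follows. The helper name `make_deterministic` is my own, not a PyG API, and on CUDA some operations additionally require the CUBLAS_WORKSPACE_CONFIG environment variable to be set; the SparseTensor path from issue 3175 matters because its matmul-based aggregation avoids the nondeterministic scatter-based aggregation on GPU.

```python
import random
import torch

def make_deterministic(seed: int = 0) -> None:
    """Sketch (hypothetical helper): common settings for reproducible runs."""
    random.seed(seed)                  # Python's RNG
    torch.manual_seed(seed)            # CPU RNG (also seeds CUDA)
    torch.cuda.manual_seed_all(seed)   # all GPU RNGs
    # Raise an error if a nondeterministic operation is used.
    torch.use_deterministic_algorithms(True)
```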