Gradients on edge in sparse mode #2590
Unanswered
kavehhassani asked this question in Q&A
Replies: 2 comments · 2 replies
-
While you cannot propagate gradients through `edge_index` (it is an integer index tensor), you can propagate them through `edge_weight`:

```
edge_sample = RelaxedBernoulliStraightThrough(temperature, probs=edge_prob).rsample().squeeze().bool()
batch.edge_index = p_edges[:, edge_sample]
batch.edge_weight = edge_prob[edge_sample]
conv(batch.x, batch.edge_index, batch.edge_weight)
```
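To make that concrete, here is a self-contained sketch of the pattern. The names `p_edges`, `edge_prob`, and `temperature` follow the snippet above; the toy graph, the dimensions, and the `GCNConv` choice are assumptions, not part of the original reply:

```
import torch
from pyro.distributions import RelaxedBernoulliStraightThrough
from torch_geometric.nn import GCNConv

num_nodes, in_dim, hid_dim = 4, 8, 16
x = torch.randn(num_nodes, in_dim)

# Candidate edges and their learned keep-probabilities (a stand-in for the
# output of a probability head such as the one in the question below).
p_edges = torch.tensor([[0, 1, 2, 3],
                        [1, 2, 3, 0]])                       # [2, num_candidates]
edge_prob = torch.rand(p_edges.size(1), requires_grad=True)

# Straight-through sample: hard 0/1 values in the forward pass,
# relaxed (differentiable) surrogate gradients in the backward pass.
temperature = torch.tensor(0.5)
edge_sample = RelaxedBernoulliStraightThrough(temperature, probs=edge_prob).rsample().bool()

# The integer edge_index carries no gradient, but the float edge_weight does.
edge_index = p_edges[:, edge_sample]
edge_weight = edge_prob[edge_sample]

conv = GCNConv(in_dim, hid_dim)
out = conv(x, edge_index, edge_weight)
out.sum().backward()
print(edge_prob.grad)  # gradients reach the sampled entries of edge_prob
```

Casting the sample to `bool` makes the mask itself non-differentiable, but selecting `edge_prob[edge_sample]` as `edge_weight` preserves a float path from the downstream loss back to whatever module produced `edge_prob`.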
-
Ok, I tried using the …
-
Hello,
I am trying to implement a GNN that takes a graph as input and learns to add or drop edges with respect to a downstream task. I am using Pyro for differentiable sampling. The problem is that when I implement this in sparse mode, gradients cannot flow through `edge_index`, since it is a tensor of integer indices.
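The limitation is easy to reproduce in isolation: PyTorch autograd only tracks floating-point (and complex) tensors, so an integer `edge_index` can never carry a gradient. A minimal demonstration (this snippet is illustrative, not from the original post):

```
import torch

edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])  # integer dtype (torch.int64)
edge_index.requires_grad_(True)
# RuntimeError: only Tensors of floating point and complex dtype can require gradients
```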
```
class EdgeAugment(nn.Module):
    def __init__(self, in_dim, hid_dim, layers):
        super(EdgeAugment, self).__init__()
        self.encoder = GNN(in_dim, hid_dim, layers)
        self.head = nn.Sequential(
            nn.BatchNorm1d(2 * hid_dim + 1),
            nn.Linear(2 * hid_dim + 1, 1),
            nn.Sigmoid()
        )
```
For this purpose, I reused the dense GNN layers, which work perfectly:
```
class EdgeAugment(nn.Module):
    def __init__(self, in_dim, hid_dim, layers):
        super(EdgeAugment, self).__init__()
        self.encoder = GNN(in_dim, hid_dim, layers)
        self.head = nn.Sequential(
            nn.Linear(2 * hid_dim + 1, hid_dim),
            nn.PReLU(),
            nn.Linear(hid_dim, 1),
            nn.Sigmoid()
        )
```
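For reference, a minimal sketch of how such a dense forward pass might look. Everything here is assumed rather than taken from the post: `DenseGCNConv` stands in for the dense `GNN` encoder, and the head's `2 * hid_dim + 1` input is read as the two endpoint embeddings concatenated with the current adjacency entry:

```
import torch
import torch.nn as nn
from pyro.distributions import RelaxedBernoulliStraightThrough
from torch_geometric.nn import DenseGCNConv

num_nodes, in_dim, hid_dim = 5, 8, 16
x = torch.randn(num_nodes, in_dim)
adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()

encoder = DenseGCNConv(in_dim, hid_dim)  # stand-in for the dense GNN encoder
head = nn.Sequential(
    nn.Linear(2 * hid_dim + 1, hid_dim),
    nn.PReLU(),
    nn.Linear(hid_dim, 1),
    nn.Sigmoid(),
)

h = encoder(x.unsqueeze(0), adj.unsqueeze(0)).squeeze(0)   # [N, hid_dim]

# Pairwise features: embedding of node i, embedding of node j, adjacency entry.
hi = h.unsqueeze(1).expand(-1, num_nodes, -1)              # [N, N, hid_dim]
hj = h.unsqueeze(0).expand(num_nodes, -1, -1)              # [N, N, hid_dim]
pair = torch.cat([hi, hj, adj.unsqueeze(-1)], dim=-1)      # [N, N, 2 * hid_dim + 1]
edge_prob = head(pair).squeeze(-1)                         # [N, N] keep-probabilities

# The sampled adjacency stays a float matrix rather than an index tensor,
# so gradients flow from any downstream loss back into the head.
new_adj = RelaxedBernoulliStraightThrough(torch.tensor(0.5), probs=edge_prob).rsample()
out = DenseGCNConv(in_dim, hid_dim)(x.unsqueeze(0), new_adj.unsqueeze(0))
```

Scoring all N² node pairs and materializing a dense adjacency is what makes this mode memory-hungry on larger graphs.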
I was wondering if there is a way to get this to work in sparse mode, as the dense mode is super inefficient?
Thanks