-
It seems that your use case can be handled like this:

```python
import torch
from torch.nn import Sequential as Seq, Linear as Lin, ReLU
from torch_geometric.nn import MessagePassing

class EdgeConv(MessagePassing):
    def __init__(self, F_in, F_out):
        super(EdgeConv, self).__init__(aggr='max')  # "Max" aggregation.
        self.mlp = Seq(Lin(2 * F_in, F_out), ReLU(), Lin(F_out, F_out))

    def forward(self, x, edge_index, edge_weight):
        # x has shape [N, F_in]
        # edge_index has shape [2, E]
        # edge_weight has shape [E]
        return self.propagate(edge_index, x=x, edge_weight=edge_weight)  # shape [N, F_out]

    def message(self, x_i, x_j, edge_weight):
        # x_i has shape [E, F_in]
        # x_j has shape [E, F_in]
        # View edge_weight as [E, 1] so it broadcasts across the feature dimension.
        edge_features = edge_weight.view(-1, 1) * torch.cat([x_i, x_j - x_i], dim=1)  # shape [E, 2 * F_in]
        return self.mlp(edge_features)  # shape [E, F_out]
```
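A quick smoke test (a minimal sketch; the sizes and random tensors below are made up for illustration):

```python
x = torch.randn(4, 3)                       # 4 nodes, F_in = 3
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])   # 4 directed edges
edge_weight = torch.rand(4)                 # one scalar weight per edge

conv = EdgeConv(F_in=3, F_out=8)
out = conv(x, edge_index, edge_weight)
print(out.shape)  # torch.Size([4, 8])
```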
-
I'm implementing a form of EdgeConv for a physics application that requires me to pass in a feature of each neighboring node. If each node $j$ has a feature $a_j$, then for node $i$ I aggregate over my neighbors as

$$x_i' = \sum_j a_j \, h(x_i, x_j),$$

i.e. I scale each pairwise activation by the neighbor node's feature. These $a_j$ are not passed to the learned layers themselves.

Would I be able to make a distinction between the features used for learning and the $a_j$ features with PyTorch's DataLoaders, or should I try to store these $a_j$ separately?
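If it helps, one possible pattern (a sketch, not the only way): store the $a_j$ as an extra node-level attribute on each `torch_geometric.data.Data` object; PyG's `DataLoader` batches any node-level tensor attribute along with `x`, and passing `a=a` into `propagate` makes `a_j` available in `message`, where the `_j` suffix gathers each neighbor's value. The layer name `WeightedEdgeConv` and the attribute name `a` below are hypothetical:

```python
import torch
from torch.nn import Sequential as Seq, Linear as Lin, ReLU
from torch_geometric.nn import MessagePassing
from torch_geometric.data import Data

class WeightedEdgeConv(MessagePassing):
    """Hypothetical sketch of x_i' = sum_j a_j * h(x_i, x_j)."""
    def __init__(self, F_in, F_out):
        super().__init__(aggr='add')  # sum aggregation, matching the formula
        self.mlp = Seq(Lin(2 * F_in, F_out), ReLU(), Lin(F_out, F_out))

    def forward(self, x, edge_index, a):
        # x: [N, F_in] learned features; a: [N] physics feature, kept out of the MLP
        return self.propagate(edge_index, x=x, a=a)

    def message(self, x_i, x_j, a_j):
        # a_j: [E], each neighbor's feature gathered per edge via the `_j` suffix
        return a_j.view(-1, 1) * self.mlp(torch.cat([x_i, x_j - x_i], dim=1))

# `a` rides along on the Data object, separate from the learned features in `x`:
data = Data(x=torch.randn(4, 3),
            edge_index=torch.tensor([[0, 1, 2], [1, 2, 3]]),
            a=torch.rand(4))
out = WeightedEdgeConv(F_in=3, F_out=8)(data.x, data.edge_index, data.a)
```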