Yes, that does not make a real difference. The reason we do it here is that it is more memory-friendly to perform the transformation node-wise instead of edge-wise, i.e.:

x = x @ self.weight  # [num_nodes, out_channels]
self.propagate(edge_index, x=x)

is faster than doing

self.propagate(edge_index, x=x)

def message(self, x_j):
    return x_j @ self.weight  # [num_edges, out_channels]

since usually |E| >> |N|: the first variant applies the weight matrix to `num_nodes` rows once, while the second applies it to `num_edges` gathered rows inside `message`.
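The two orderings above produce identical per-edge messages; only the size of the matrix multiplication changes. A minimal NumPy sketch (independent of PyTorch Geometric; all shapes and the `src` index array are made up for illustration) of that equivalence:

```python
import numpy as np

num_nodes, num_edges, in_ch, out_ch = 4, 6, 3, 2
rng = np.random.default_rng(0)
x = rng.standard_normal((num_nodes, in_ch))       # node features
weight = rng.standard_normal((in_ch, out_ch))     # linear weight
src = rng.integers(0, num_nodes, size=num_edges)  # source node of each edge

# Node-wise: one [num_nodes, in_ch] @ [in_ch, out_ch] matmul,
# then gather the transformed rows per edge.
node_wise = (x @ weight)[src]   # [num_edges, out_channels]

# Edge-wise: gather first, then a [num_edges, in_ch] @ [in_ch, out_ch] matmul.
edge_wise = x[src] @ weight     # [num_edges, out_channels]

assert np.allclose(node_wise, edge_wise)
```

Both paths yield the same `[num_edges, out_channels]` messages, so transforming before `propagate` is the cheaper choice whenever `num_edges` exceeds `num_nodes`.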

Answer selected by mdanb