Decoding Latent Variables into Edge Probabilities for Given Node Pairs #2161
-
A GCN encoder followed by an InnerProductDecoder is used to learn node embeddings for the link prediction task, as described in the paper. It is built to predict a single kind of link between nodes. If I understand your task correctly, there are multiple kinds of links between nodes. For that, I guess you could build a separate encoder-decoder model for each kind of link, or have one encoder that feeds into multiple parametrized decoders. If your goal is to predict an edge weight, rather than the presence or absence of an edge, you would have to change both operations in the decoder function. The sum term, however, will always be present: it sums over the dimensions after the element-wise multiplication of the latent vectors.
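A minimal sketch of the "one encoder, multiple parametrized decoders" idea, assuming a bilinear decoder with one learnable weight matrix per edge type (the class name, shapes, and scoring function here are illustrative, not part of PyTorch Geometric):

```python
import torch
import torch.nn as nn

class BilinearDecoder(nn.Module):
    """Hypothetical per-edge-type decoder: scores a pair (i, j) of a given
    edge type r as sigmoid(z_i^T W_r z_j), replacing the plain inner
    product z_i^T z_j with a parametrized form."""

    def __init__(self, latent_dim: int, num_edge_types: int):
        super().__init__()
        # one learnable weight matrix per edge type, shape [R, d, d]
        self.weights = nn.Parameter(
            torch.randn(num_edge_types, latent_dim, latent_dim) * 0.01
        )

    def forward(self, z, edge_index, edge_type):
        src = z[edge_index[0]]            # [E, d] source embeddings
        dst = z[edge_index[1]]            # [E, d] target embeddings
        w = self.weights[edge_type]       # [E, d, d] one matrix per edge
        # batched z_i^T W_r z_j -> one scalar score per edge
        scores = (src.unsqueeze(1) @ w @ dst.unsqueeze(2)).view(-1)
        return torch.sigmoid(scores)      # edge probabilities in (0, 1)

# usage: embeddings from one shared encoder, decoded per edge type
z = torch.randn(10, 16)                      # 10 nodes, latent_dim = 16
edge_index = torch.tensor([[0, 1], [2, 3]])  # two candidate edges
edge_type = torch.tensor([0, 1])             # each edge has its own type
decoder = BilinearDecoder(latent_dim=16, num_edge_types=2)
probs = decoder(z, edge_index, edge_type)    # shape [2]
```

With `num_edge_types` separate weight matrices, all edge types share the encoder's embeddings while each type gets its own decoding parameters.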
-
I am referring to this implementation of InnerProductDecoder: https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/models/autoencoder.html#InnerProductDecoder
What would the decoding look like in the case of multi-dimensional edge weights? That is, would the sum() still be needed in:
value = (z[edge_index[0]] * z[edge_index[1]]).sum(dim=1)
or would we just multiply the latent variables element-wise, without the sum?
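To make the difference concrete, here is a small sketch: keeping `.sum(dim=1)` yields one scalar score per edge, while dropping it leaves a d-dimensional interaction vector that could feed a learned head producing a multi-dimensional edge weight (the `head` layer and the output dimension of 3 are illustrative assumptions, not part of the PyG decoder):

```python
import torch
import torch.nn as nn

z = torch.randn(5, 8)                          # 5 node embeddings, d = 8
edge_index = torch.tensor([[0, 2], [1, 3]])    # two edges: 0->1 and 2->3

# With .sum(dim=1): one scalar per edge, as in InnerProductDecoder
scalar = (z[edge_index[0]] * z[edge_index[1]]).sum(dim=1)   # shape [2]

# Without the sum: the element-wise product alone, one vector per edge
pairwise = z[edge_index[0]] * z[edge_index[1]]              # shape [2, 8]

# Hypothetical learned head mapping the interaction vector to a
# 3-dimensional edge weight instead of a single probability
head = nn.Linear(8, 3)
edge_weight = head(pairwise)                                # shape [2, 3]
```

Summing the `pairwise` tensor over dim 1 recovers `scalar` exactly, which is why the sum is there in the scalar-score case.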