Obtaining Embeddings in PYG.KGE and inverting to recover original input data #9065
-
Hello,

As I understand it, in PyTorch Geometric models such as RotatE or TransE, we can obtain an embedded triplet for some input triplet once the model is trained, via `node_emb` (and `node_emb_im` in the case of RotatE). How can we recover the original triplet if we have the embedded triplet? If the reverse mapping cannot be guaranteed, since we are mapping to a latent space with fewer dimensions than the input, how can we get the most likely triplet that matches our embedded triplet?

To ground the question in a more detailed manner (looking at the source code for these models): I now have six objects:

- `model.node_emb(head_node_index)`
- `model.node_emb_im(head_node_index)`
- `model.node_emb(tail_node_index)`
- `model.node_emb_im(tail_node_index)`
- `model.rel_emb(unknown_index)`
- `model.rel_emb_im(unknown_index)`

I then plug these objects into some other model, which yields an answer for the link prediction in the embedding space. I now wish to map my six objects back to the input dimensions so I can have a meaningful answer.
Replies: 1 comment 1 reply
-
Sorry for the super late reply :( Is there a point in a `decode` function if you already know `head_node_index`, `tail_node_index` and `unknown_index` (since you obtain the embeddings from them)? Why do you need to recover these indices from the embeddings themselves? Otherwise, I would assume you can simply search for the index in `node_emb` whose embedding matches `model.node_emb(head_node_index)`.