I became a bit confused when thinking about whether or not to update the initial node embeddings in a RecSys task. In shallow embedding-based models, e.g. matrix factorization, the shallow embeddings have to be updated. The same holds for LightGCN, where the only learnable weights are the initial embeddings. But in most scenarios, node features are not updated.

Concretely, my questions are:

(1) In what cases would we *not* want to reset the initial node features to their original values in every iteration? I am fairly sure generative models would usually require those `x`'s to be updated, but I am unsure about other settings.

(2) In cases where the initial features *are* updated in each iteration (say, LightGCN), suppose I have some manually curated features for each node, but not many (say, a couple of properties). I assume it is better to randomly initialize a learnable embedding and concatenate the small feature vector onto it, forming a larger "initial vector". In that case, is it common practice to exclude the small feature vector from gradient updates (e.g. mark it `requires_grad=False`) while leaving the randomly initialized part trainable? Or could there be some sort of embedding layer (probably not a linear layer, since the input dimension is so small), similar to NLP tasks, where words are first embedded into vectors and only the projections are updated?
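For what it's worth, one way to realize the setup in question (2) is to register the curated features as a non-trainable buffer and keep only the randomly initialized part as a learnable `nn.Embedding` (the LightGCN-style shallow embedding). This is only a sketch; the class name, dimensions, and initialization below are my own assumptions, not from any particular library:

```python
import torch
import torch.nn as nn

class HybridNodeEmbedding(nn.Module):
    """Sketch (hypothetical): frozen curated features + learnable free embedding.

    `node_feats` is the small, manually curated feature matrix
    [num_nodes, feat_dim]; `free_dim` is the size of the randomly
    initialized, trainable part.
    """

    def __init__(self, num_nodes: int, node_feats: torch.Tensor, free_dim: int):
        super().__init__()
        # Buffer: saved with the model state, moved with .to(device), but
        # never returned by .parameters(), so the optimizer never updates it.
        self.register_buffer("node_feats", node_feats)
        # Free part: a learnable shallow embedding, as in LightGCN / MF.
        self.free = nn.Embedding(num_nodes, free_dim)
        nn.init.xavier_uniform_(self.free.weight)

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        # Output shape: [batch, feat_dim + free_dim]
        return torch.cat([self.node_feats[node_ids], self.free(node_ids)], dim=-1)

# Toy usage: 100 nodes, 2 curated features, 6 trainable dims.
num_nodes, feat_dim = 100, 2
emb = HybridNodeEmbedding(num_nodes, torch.randn(num_nodes, feat_dim), free_dim=6)
out = emb(torch.tensor([0, 1, 2]))
print(out.shape)                        # torch.Size([3, 8])
print(emb.node_feats.requires_grad)     # False
print(emb.free.weight.requires_grad)    # True
```

The "projection" alternative you mention would correspond to replacing the buffer lookup with a small trainable `nn.Linear(feat_dim, proj_dim)` applied to the frozen features, so gradients update the projection but never the curated features themselves.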