Is your goal to use `nn.Embedding` as a lookup table, or do you want to apply a linear transform on the features before passing the graph to a GNN?
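If the latter, a minimal sketch of that linear transform could look like this (the `hidden_dim`, tensor sizes, and encoder names are made up for illustration; only the 4 node features and 1 edge feature come from the question):

```python
import torch
import torch.nn as nn

hidden_dim = 64  # hypothetical hidden size

# nn.Linear projects continuous features; no integer indices involved.
node_encoder = nn.Linear(4, hidden_dim)  # 4 node features, per the question
edge_encoder = nn.Linear(1, hidden_dim)  # 1 edge feature, per the question

# Fake batch: 10 nodes and 15 edges with the feature counts above.
x = torch.randn(10, 4)          # [num_nodes, num_node_features]
edge_attr = torch.randn(15, 1)  # [num_edges, num_edge_features]

h_nodes = node_encoder(x)          # -> [10, hidden_dim]
h_edges = edge_encoder(edge_attr)  # -> [15, hidden_dim]
```

This works for any number of nodes and edges per graph, because `nn.Linear` only cares about the last (feature) dimension.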
Hi all,
I have some PyTorch Geometric graphs with a varying number of nodes and edges. Each node has 4 features and each edge has 1 feature.
I was thinking of creating an embedding layer for the node and edge features before the next layer in my architecture.
The problem is that I can't figure out what n_input_dim and e_input_dim should be.
I was thinking the node embedding output should have shape [#nodes_in_each_batch_graph, hidden_dim], and because the graphs have a varying number of nodes, I set n_input_dim = max_num_nodes * batch_size.
But this approach outputs the following error:
```
site-packages/torch/nn/functional.py", line 2044, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
So I suppose this happens because the lookup runs along the feature dimension (y-direction) rather than along the node dimension (x-direction).
Could you please help me with some ideas about what's going on and how I should continue?
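For what it's worth, the error is reproducible outside PyG: `nn.Embedding` is a lookup table over integer ids, and any index ≥ `num_embeddings` fails exactly as in the traceback above. A minimal sketch (the sizes here are made up):

```python
import torch
import torch.nn as nn

# An embedding table with 10 rows; valid indices are 0..9 only.
emb = nn.Embedding(num_embeddings=10, embedding_dim=8)

ok = emb(torch.tensor([0, 3, 9]))  # valid indices -> shape [3, 8]

try:
    emb(torch.tensor([10]))  # 10 is out of range for num_embeddings=10
except IndexError as e:
    print(e)  # "index out of range in self", as in the traceback
```

In other words, `n_input_dim` would have to bound the *values* being looked up, not the number of nodes per batch, which is why tying it to max_num_nodes * batch_size doesn't help when the inputs are dense features rather than ids.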