I've been thinking about the correct way to use GCNConv on a graph with no input node features. One option is to use an identity matrix (torch.eye) as one-hot input features. Another option would be to declare a separate embedding matrix and use this as features in the GCN. Obviously, both of these cause a large increase in the number of model parameters. Is there some other way to use GCNConv with no input features that doesn't cause such a large increase in parameters? If not, what would be the best way to combat this?
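For concreteness, here is a minimal sketch of the two options described above, using plain PyTorch (the sizes are illustrative, not from the thread). Both end up with a parameter count that grows linearly with the number of nodes:

```python
import torch

# Illustrative sizes, not from the thread.
num_nodes, hidden = 1000, 64

# Option A: identity ("one-hot") input features. The first GCN weight
# matrix must then have shape [num_nodes, hidden], so its parameter
# count scales with the number of nodes.
x_onehot = torch.eye(num_nodes)
first_layer_params = num_nodes * hidden

# Option B: a learnable embedding per node, used as input features.
# This is also num_nodes * hidden parameters.
emb = torch.nn.Embedding(num_nodes, hidden)
emb_params = sum(p.numel() for p in emb.parameters())

print(first_layer_params, emb_params)
```

Either way, the per-node parameters tie the model to the specific node set it was trained on, which is why these approaches are transductive only.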
I don't think there is a clear recommended way to do this. I think the Embedding approach is better than using torch.eye, since it also scales to larger graphs via scalability techniques such as NeighborSampler, and avoids the need to create a sparse diagonal input feature matrix. But you are right: these approaches will create a huge number of parameters and only work in transductive learning scenarios. IMO, there are two options to limit the number of parameters:

1. Use structural node features such as the one-hot encoded node degree (see torch_geometric.transforms.OneHotDegree). This is commonly done in graph classification and works well there.
2. Share a small set of basis embeddings across all nodes, so that each node only learns num_bases mixing coefficients instead of a full in_channels-dimensional feature vector:

emb = ...    # Parameter of shape [num_bases, in_channels]
alpha = ...  # Parameter of shape [num_nodes, num_bases]
x = (alpha.unsqueeze(-1) * emb.unsqueeze(0)).sum(dim=1)  # shape [num_nodes, in_channels]
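The basis-embedding idea above can be sketched as a self-contained snippet in plain PyTorch (sizes are illustrative). The broadcasted product has shape [num_nodes, num_bases, in_channels], and summing over the basis dimension yields each node's features; this is equivalent to the matrix product alpha @ emb:

```python
import torch

# Illustrative sizes, not from the thread.
num_nodes, num_bases, in_channels = 1000, 8, 64

# Shared basis vectors and per-node mixing coefficients.
emb = torch.nn.Parameter(torch.randn(num_bases, in_channels))
alpha = torch.nn.Parameter(torch.randn(num_nodes, num_bases))

# Each node's feature vector is a learned mixture of the shared bases.
x = (alpha.unsqueeze(-1) * emb.unsqueeze(0)).sum(dim=1)

# Parameter count: num_bases * in_channels + num_nodes * num_bases,
# versus num_nodes * in_channels for a full per-node embedding table.
basis_params = num_bases * in_channels + num_nodes * num_bases
full_params = num_nodes * in_channels
print(basis_params, full_params)
```

With num_bases much smaller than in_channels, this cuts the per-node cost from in_channels to num_bases parameters, though it remains transductive since alpha is still indexed by node ID.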