Hello, I am new to graph networks and I am working on a graph classification problem. Each of my graphs is a set of a variable number of nodes, and each node consists of one 32x32x1 image and a 1x29 feature vector. What I would like to do is convert the image to a one-dimensional feature vector by passing it through 2-3 Conv2d layers, and then combine it with the 1x29 feature vector (e.g., by passing both through a linear layer, directly concatenating, etc.). After this, I plan on attaching other standard layers like GATConv. I would like to know how I can accomplish this. Do I create a custom message passing layer that does this in its forward pass? What would the inputs in its constructor be? Thank you!
Replies: 1 comment
In general, you would first apply a CNN on your images, and then use the embedding produced by the CNN as input to your GNN. Given images of shape `[num_nodes, num_channels, width, height]`, you can do:

```python
img = CNN(img)
img = img.view(num_nodes, -1)
x = torch.cat([img, feature_vector], dim=-1)
out = GNN(x, edge_index)
```

Keep in mind that this will not scale well for large graphs. Currently, the model is trained jointly, that is, the image of every node is processed together inside the CNN. An alternative is to use some pre-trained CNN, compute the embeddings of nodes once, and use them afterwards as detached input to your GNN.
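To make the joint approach concrete, here is a minimal end-to-end sketch in plain PyTorch, assuming the 32x32x1 images and 29-dim feature vectors from the question. The encoder architecture (channel counts, `out_dim=64`) is an illustrative choice, not a recommendation, and the hand-rolled mean aggregation is only a stand-in for a real message-passing layer such as PyG's `GATConv`:

```python
import torch
import torch.nn as nn

class NodeEncoder(nn.Module):
    """Encodes each node's 32x32x1 image and 1x29 feature vector into one embedding."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
        )
        # 16 * 8 * 8 = 1024 image features, concatenated with the 29-dim vector
        self.proj = nn.Linear(16 * 8 * 8 + 29, out_dim)

    def forward(self, img, feat):
        h = self.cnn(img)                 # [num_nodes, 16, 8, 8]
        h = h.view(img.size(0), -1)       # [num_nodes, 1024]
        x = torch.cat([h, feat], dim=-1)  # [num_nodes, 1053]
        return self.proj(x)               # [num_nodes, out_dim]

def mean_aggregate(x, edge_index):
    """Toy message passing: average incoming neighbor embeddings.
    A stand-in for a proper GNN layer (e.g. GATConv from PyTorch Geometric)."""
    src, dst = edge_index
    out = torch.zeros_like(x)
    out.index_add_(0, dst, x[src])
    deg = torch.zeros(x.size(0), dtype=x.dtype, device=x.device)
    deg.index_add_(0, dst, torch.ones(dst.size(0), dtype=x.dtype))
    return out / deg.clamp(min=1).unsqueeze(-1)

num_nodes = 5
img = torch.randn(num_nodes, 1, 32, 32)  # [num_nodes, channels, width, height]
feat = torch.randn(num_nodes, 29)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])  # row 0: source nodes, row 1: targets

x = NodeEncoder()(img, feat)
out = mean_aggregate(x, edge_index)
print(out.shape)  # torch.Size([5, 64])
```

Because the encoder is an `nn.Module` in the same graph as the GNN, gradients flow back through the CNN during training, which is the joint setup described above.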
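The detached alternative mentioned above can be sketched as follows. The frozen encoder here is a hypothetical stand-in (any pre-trained CNN, e.g. a torchvision backbone, would play the same role); the key points are `torch.no_grad()` for the one-time embedding pass and `.detach()` so the GNN treats the embeddings as fixed node features:

```python
import torch
import torch.nn as nn

# Hypothetical frozen encoder standing in for a real pre-trained CNN.
cnn = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(), nn.Flatten())
cnn.eval()

imgs = torch.randn(5, 1, 32, 32)
with torch.no_grad():          # one-time pass; no gradients stored for the CNN
    img_emb = cnn(imgs)        # [5, 4 * 32 * 32] = [5, 4096]

feat = torch.randn(5, 29)
# Fixed node features: the GNN trains on these without backprop into the CNN.
x = torch.cat([img_emb, feat], dim=-1).detach()
print(x.shape)  # torch.Size([5, 4125])
```

This trades some accuracy (the image encoder can no longer adapt to the graph task) for much lower memory and compute per training step, which is what makes it attractive for large graphs.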