Replies: 2 comments 2 replies
-
If you want to have distinct weights for nodes/edges, one idea is to maintain a learnable embedding of the edge weights and return it from `message`:

```python
def forward(self, x, edge_index):
    return self.propagate(edge_index, x=x)

def message(self):
    return self.embedding.weight
```
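For fully independent per-edge networks, as asked about below, a minimal sketch in plain PyTorch (no PyG) might look like the following. All names here (`EdgeWiseGNN`, the `(src, dst)` edge naming, the `Conv2d` standing in for a U-Net) are illustrative assumptions, not part of any library API:

```python
import torch
import torch.nn as nn

class EdgeWiseGNN(nn.Module):
    """Each edge (src, dst) owns its own projection network; a node's
    update is the mean of its incoming edge outputs."""
    def __init__(self, edges, channels=3):
        super().__init__()
        self.edges = edges
        # one independent net per edge; a real model could use a U-Net here
        self.edge_nets = nn.ModuleDict({
            f"{s}->{d}": nn.Conv2d(channels, channels, 3, padding=1)
            for s, d in edges
        })

    def forward(self, node_feats):
        # node_feats: dict of node name -> tensor of shape (B, C, H, W)
        incoming = {}
        for s, d in self.edges:
            msg = self.edge_nets[f"{s}->{d}"](node_feats[s])
            incoming.setdefault(d, []).append(msg)
        # aggregate: mean over incoming messages; update = aggregated value
        return {d: torch.stack(msgs, dim=0).mean(dim=0)
                for d, msgs in incoming.items()}

edges = [("rgb", "semantic"), ("depth", "semantic")]
model = EdgeWiseGNN(edges)
x = {"rgb": torch.randn(2, 3, 8, 8), "depth": torch.randn(2, 3, 8, 8)}
out = model(x)
print(out["semantic"].shape)  # torch.Size([2, 3, 8, 8])
```

The `ModuleDict` keyed by edge name is what gives every edge its own unique weights, in contrast to a GCN layer shared across all edges.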
-
Is this limited to 1D? In other words, the projection itself is a deep net, not just an embedding (which would be one slice/layer of the U-Net), and the nodes can have different shapes (different numbers of channels)?
-
Hello, I'm new to PyG and I'm trying to port my implementation of NGC to this library. One of the main ideas that differs from traditional GNN-like approaches is that nodes are image representations (so `node1=RGB`, `node2=Semantic`, `node3=Depth`, etc.), not individual low-level components (i.e. pixels in a 2D grid). Edges are mappings between two concepts and represent 'transformations' between tasks using neural nets (or any projection, even a simple fully connected layer).

The way I think about classic GCN approaches is as an extension of `conv2d`, so the weights are shared across the image: `img2 = conv2d(img1)` ~= `g2 = gcn(g1)`. In my case, however, I need unique weights for each edge, as each edge is an independent U-Net-like neural network.

Can this be expressed in PyG in any way? Aggregation can be simple averaging (of neural network outputs) and the update can be just the result of the averaging at each node.

By averaging, I mean that each node (say `Semantic`) will receive 2 message tensors of shape `(B, W, H, C)` from the `RGB->Semantic` and `Depth->Semantic` U-Net networks, so it's a tensor of shape `(B, V, W, H, C)` -> agg -> `(B, W, H, C)`, with agg being `torch.mean(msg, dim=1)`. This is the new state of node `Semantic` (initially it was empty/zeros, as it's a predicted node).