Hi Matt. I'm trying to build a new graph autoencoder where the input is a graph dataset in which each sample has a different number of nodes and edges. Some of the graph data samples look like this:

My graph autoencoder architecture looks like this:

```python
class autoencoder(torch.nn.Module):
    def __init__(self, in_channel, hidden_channel, out_channel, fc1_hidden, x_lat,
                 depth, kernel_size, batch_size):
        super(autoencoder, self).__init__()
        self.in_channel = in_channel
        self.hidden_channel = hidden_channel
        self.out_channel = out_channel  # = latent_channel
        self.kernel_size = kernel_size
        self.batch_size = batch_size
        self.depth = depth
        self.fc1_hidden = fc1_hidden
        self.x_lat = x_lat
        self.encoder = Encoder(self.in_channel, self.hidden_channel, self.out_channel,
                               self.fc1_hidden, self.x_lat, self.depth,
                               self.kernel_size, self.batch_size)
        self.decoder = Decoder(self.in_channel, self.hidden_channel, self.out_channel,
                               self.fc1_hidden, self.x_lat, self.depth,
                               self.kernel_size, self.batch_size)

    def forward(self, data):
        b_cluster = batched_cluster(data, self.batch_size, data.cluster, self.depth)
        x, edge_indices, edge_attrs = self.encoder(data, b_cluster)
        emb = x
        x = self.decoder(x, edge_indices, edge_attrs, b_cluster)
        return emb, x, edge_indices, edge_attrs
```
My current `batched_cluster` looks like this:

```python
def batched_cluster(data, batch_size, clusters, depth):
    batched_clusters = []
    for i in range(depth):
        cluster = torch.as_tensor(clusters[0][i])
        num_clusters = int(cluster.max()) + 1
        # one batch id per node, repeated for every graph in the batch
        batch = torch.repeat_interleave(torch.arange(batch_size), cluster.size(0))
        # offset each copy of the cluster assignment so ids stay unique per graph
        batched = cluster.repeat(batch_size).to(device) + (batch * num_clusters).to(device)
        batched_clusters.append(batched)
    return batched_clusters
```

However, this …
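The batch-offset trick inside `batched_cluster` can be checked in isolation. Here is a minimal, self-contained sketch (the helper name `batch_offset_clusters` is mine, not from the code above): each graph's cluster assignment is repeated `batch_size` times, and copy `b` is shifted by `b * num_clusters` so cluster ids never collide across graphs in the batch.

```python
import torch

def batch_offset_clusters(cluster: torch.Tensor, batch_size: int) -> torch.Tensor:
    """Replicate a per-graph cluster assignment across a batch,
    offsetting each copy so cluster ids stay globally unique."""
    num_clusters = int(cluster.max()) + 1
    # batch id of every node after replication: [0,...,0, 1,...,1, ...]
    batch = torch.repeat_interleave(torch.arange(batch_size), cluster.numel())
    return cluster.repeat(batch_size) + batch * num_clusters

cluster = torch.tensor([0, 0, 1, 2])          # 4 nodes assigned to 3 clusters
batched = batch_offset_clusters(cluster, batch_size=2)
# batched: tensor([0, 0, 1, 2, 3, 3, 4, 5])
```

With 3 clusters per graph, the second graph's ids start at 3, so a pooling op sees 6 distinct clusters for the whole batch.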
I think this is still an open problem. Most GNN-based autoencoder implementations I am aware of need to store the graph connectivity for decoding, e.g., https://arxiv.org/abs/1807.10267. More recent approaches use different representations of meshes, e.g., https://arxiv.org/abs/1901.05103. It might be worth surveying the literature for newer methods in this area.
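For contrast with decoders that store connectivity: the classic alternative is an inner-product decoder (Kipf & Welling's GAE, 2016), which reconstructs the adjacency matrix from the latent codes alone. A minimal pure-PyTorch sketch, assuming a dense adjacency and a single linear propagation layer as the encoder (class name and toy graph are illustrative, not from any library):

```python
import torch
import torch.nn as nn

class InnerProductGAE(nn.Module):
    """Minimal graph autoencoder: one linear propagation step as the
    encoder, and an inner-product decoder that recovers edge
    probabilities from node embeddings, so no connectivity needs to
    be stored for the decoding step."""
    def __init__(self, in_dim: int, lat_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, lat_dim)

    def encode(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # simplest possible graph convolution: A X W
        return self.lin(adj @ x)

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        # sigmoid(Z Z^T): probability of an edge between each node pair
        return torch.sigmoid(z @ z.t())

    def forward(self, x, adj):
        z = self.encode(x, adj)
        return z, self.decode(z)

# toy graph: 4 nodes in a ring, with self-loops
adj = torch.eye(4)
for i in range(4):
    adj[i, (i + 1) % 4] = adj[(i + 1) % 4, i] = 1.0
x = torch.randn(4, 8)
model = InnerProductGAE(in_dim=8, lat_dim=2)
z, adj_rec = model(x, adj)
# z is (4, 2) latent codes; adj_rec is (4, 4) reconstructed edge probabilities
```

The trade-off is that the decoder scales quadratically in the number of nodes and only reconstructs connectivity, not node features, which is part of why this is still open for mesh-like data.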