The code on torch-geometric 1.5.0 fails in 2.1.0 #5565
Unanswered
WeixuanXiong asked this question in Q&A
Replies: 1 comment · 1 reply
-
You can simply try return (x_j * alpha.view(-1, self.heads, 1)).squeeze()
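For context, here is a minimal sketch of where that return statement would sit. The message signature and shapes are assumptions (x_j as [num_edges, heads, out_channels] and alpha as [num_edges, heads]); the attention computation and the rest of the class are omitted:

from torch_geometric.nn import MessagePassing

class GraphLayer(MessagePassing):
    # __init__ / forward are assumed to be the same as in the original class.

    def message(self, x_j, alpha):
        # Assumed shapes: x_j is [num_edges, heads, out_channels],
        # alpha is [num_edges, heads]. With heads == 1, squeeze() drops the
        # singleton head dimension so the message becomes 2-D, which matches
        # the node_dim=-2 default that torch-geometric 2.x aggregates over.
        return (x_j * alpha.view(-1, self.heads, 1)).squeeze()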
-
The error is as follows:
Traceback (most recent call last):
File "model/graph_learning.py", line 57, in
out, adj = model(x)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "model/graph_learning.py", line 49, in forward
gcn_out = gnn_layers(x, batch_all_edge_index, node_num=node_num * batch_num, embedding=all_embeddings)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/dfs/data/Anomaly-Transformer/model/graph_layer.py", line 146, in forward
out, (new_edge_index, att_weight) = self.gnn(x, edge_index, embedding, return_attention_weights=True)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/dfs/data/Anomaly-Transformer/model/graph_layer.py", line 66, in forward
return_attention_weights=return_attention_weights)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_geometric/nn/conv/message_passing.py", line 391, in propagate
out = self.aggregate(out, **aggr_kwargs)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_geometric/nn/conv/message_passing.py", line 515, in aggregate
dim=self.node_dim)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_geometric/nn/aggr/base.py", line 114, in call
return super().call(x, index, ptr, dim_size, dim, **kwargs)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_geometric/nn/aggr/basic.py", line 21, in forward
return self.reduce(x, index, ptr, dim_size, dim, reduce='sum')
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_geometric/nn/aggr/base.py", line 153, in reduce
return scatter(x, index, dim=dim, dim_size=dim_size, reduce=reduce)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_geometric/utils/scatter.py", line 64, in scatter
return torch_scatter.scatter(src, index, dim, out, dim_size, reduce)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_scatter/scatter.py", line 152, in scatter
return scatter_sum(src, index, dim, out, dim_size)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_scatter/scatter.py", line 11, in scatter_sum
index = broadcast(index, src, dim)
File "/dfs/data/miniconda/envs/anod/lib/python3.7/site-packages/torch_scatter/utils.py", line 12, in broadcast
src = src.expand(other.size())
RuntimeError: The expanded size of the tensor (1) must match the existing size (160000) at non-singleton dimension 1. Target sizes: [160000, 1, 64]. Tensor sizes: [1, 160000, 1]
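To make the shape mismatch concrete, the failing expand can be reproduced in isolation (shapes copied from the traceback; the index values are dummies):

import torch

out = torch.randn(160000, 1, 64)               # message output: [num_edges, heads, out_channels]
index = torch.zeros(160000, dtype=torch.long)  # target node index per edge

# scatter broadcasts the 1-D index along dimension 1 of the 3-D message
# output, so it becomes [1, 160000, 1] and cannot be expanded to
# [160000, 1, 64]:
index.view(1, -1, 1).expand(out.size())        # raises the same RuntimeError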
The MessagePassing class is:
class GraphLayer(MessagePassing):
    def __init__(self, in_channels, out_channels, heads=1, concat=True,
                 negative_slope=0.2, dropout=0, bias=True, inter_dim=-1, **kwargs):
        super(GraphLayer, self).__init__(aggr='add', **kwargs)
The results of the last two prints are (16000, 1, 64) and (16000, 1, 1).
This class works well on torch 1.5.0 with torch-geometric 1.5.0 but fails on torch 1.12.0 with torch-geometric 2.1.0. I'm wondering how I can get rid of this.
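One thing that changed between those releases is the default aggregation dimension: if I remember correctly, MessagePassing.node_dim defaulted to 0 in torch-geometric 1.5 but defaults to -2 in 2.x, so a 3-D message of shape [num_edges, heads, out_channels] is now scattered over the wrong dimension. Assuming the layer keeps that 3-D layout, one possible workaround is to pin node_dim=0 explicitly, sketched below:

from torch_geometric.nn import MessagePassing

class GraphLayer(MessagePassing):
    def __init__(self, in_channels, out_channels, heads=1, concat=True,
                 negative_slope=0.2, dropout=0, bias=True, inter_dim=-1, **kwargs):
        # node_dim=0 tells propagate()/aggregate() that the edge/node
        # dimension of the message tensor is the first one, matching the
        # [num_edges, heads, out_channels] shape printed above.
        super(GraphLayer, self).__init__(aggr='add', node_dim=0, **kwargs)

The alternative is the reply above: squeeze the message output down to 2-D so that it lines up with the new node_dim=-2 default.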