Hello! It is my understanding that a possible solution to that could be to define the convolution as:

```python
import torch_geometric as pyg
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import trim_to_layer, add_remaining_self_loops


class MyConv(MessagePassing):
    def __init__(self, other_arguments):
        # layer initialization code
        ...

    def forward(self, x, edge_index, num_sampled_nodes, num_sampled_edges, layer):
        trimmed_x, trimmed_edge_index, _ = trim_to_layer(
            layer=layer, num_sampled_nodes_per_hop=num_sampled_nodes,
            num_sampled_edges_per_hop=num_sampled_edges, x=x, edge_index=edge_index)
        trimmed_edge_index_with_self_loops, _ = add_remaining_self_loops(trimmed_edge_index)
        # compute embeddings and return
        ...
```
To compute a forward pass, the arguments would be passed to the layer's `forward` method from the loader's batch:

```python
neighbor_loader = NeighborLoader(...)  # loader configuration omitted
batch = next(iter(neighbor_loader))
predictions = model(batch.x, batch.edge_index, batch.num_sampled_nodes, batch.num_sampled_edges)
```

But the call to `add_remaining_self_loops` also adds self-loops for nodes that only appear as destinations:

```python
import torch

edge_index = torch.tensor([[0],
                           [1]])  # 1 appears only as destination
add_remaining_self_loops(edge_index)
# returns (tensor([[0, 0, 1],
#                  [1, 0, 1]]), None)
```

Inverting the execution order of `trim_to_layer` and `add_remaining_self_loops` […]

Another way I tried was to define the loader as:

```python
import torch_geometric.transforms as T

neighbor_loader = NeighborLoader(
    data,
    num_neighbors=[25, 10],
    batch_size=512,
    shuffle=True,
    input_nodes=data.train_mask,
    transform=T.Compose([T.AddSelfLoops(), T.RemoveDuplicatedEdges()]),
)
```

But the […]

So I'd have the following two questions:
Thank you in advance.
Replies: 1 comment
I think that's a great issue and something we haven't really thought too deeply about yet. One easy way to fix this would be to remove self-loops completely from your graph before inputting it into `NeighborLoader`, and then apply `add_self_loops` after each trimming stage. `add_self_loops` is a very fast op, and you shouldn't see any decrease in performance even if you apply it before each layer.

The better fix would be to deeply integrate it into our `NeighborLoader` C++ code, but we don't have any urgent plan to integrate this right now.
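For reference, a minimal sketch of that workaround (not an official pattern), reusing the layer signature and loader arguments from the question and assuming `data`, `data.train_mask`, and `model` exist as above:

```python
from torch_geometric.loader import NeighborLoader
from torch_geometric.utils import add_self_loops, remove_self_loops, trim_to_layer

# 1) Strip self-loops from the graph once, before sampling.
data.edge_index, _ = remove_self_loops(data.edge_index)

neighbor_loader = NeighborLoader(data, num_neighbors=[25, 10], batch_size=512,
                                 shuffle=True, input_nodes=data.train_mask)

# 2) Re-add self-loops after each trimming stage, e.g. inside the layer's forward:
def forward(self, x, edge_index, num_sampled_nodes, num_sampled_edges, layer):
    x, edge_index, _ = trim_to_layer(
        layer=layer, num_sampled_nodes_per_hop=num_sampled_nodes,
        num_sampled_edges_per_hop=num_sampled_edges, x=x, edge_index=edge_index)
    edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
    # compute embeddings on (x, edge_index) and return
    ...
```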