  1. Yes, currently train_test_split_edges assumes an undirected graph represented by edge_index. During splitting, we then make sure that both directions of each edge end up in the same split. If you simply want to split your directed edges, you can split edge_index based on a random permutation:
import torch

# Shuffle the edge positions, then take the first 1000 columns as training edges.
perm = torch.randperm(edge_index.size(1))
pos_train_edge_index = edge_index[:, perm[:1000]]
...
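Extending the snippet above, a full directed train/validation/test split can be sketched as follows. The toy edge_index and the 80/10/10 ratios are illustrative assumptions, not behavior of train_test_split_edges itself:

```python
import torch

# Minimal sketch of a directed edge split via a single random permutation.
# The toy graph and the 80/10/10 split ratios below are assumptions for
# illustration only.
torch.manual_seed(0)
num_nodes = 10
edge_index = torch.randint(0, num_nodes, (2, 100))  # toy directed edges

num_edges = edge_index.size(1)
perm = torch.randperm(num_edges)  # random order over edge positions

n_train = int(0.8 * num_edges)
n_val = int(0.1 * num_edges)

# Disjoint slices of the permutation give disjoint edge sets.
train_edge_index = edge_index[:, perm[:n_train]]
val_edge_index = edge_index[:, perm[n_train:n_train + n_val]]
test_edge_index = edge_index[:, perm[n_train + n_val:]]

print(train_edge_index.size(1), val_edge_index.size(1), test_edge_index.size(1))
# 80 10 10
```

Because each edge position appears exactly once in perm, every directed edge lands in exactly one of the three splits; nothing here pairs up reverse edges, which is the difference from the undirected case.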
  2. Yes, this is indeed the case. It might well happen that positive validation and test edges appear as negative samples during training. We are not allowed to acknowledge their existence during training, in order to prevent any data leakage. Although this seems counter-intuitive at first glance, it …
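The point above can be made concrete with a small sketch: negatives are rejection-sampled against the training edges only, so a held-out positive pair is indistinguishable from a true non-edge and can legitimately be drawn as a negative. All names and sizes here are illustrative assumptions, not PyG's negative_sampling implementation:

```python
import torch

# Illustrative toy setup (assumed, not from the original post).
torch.manual_seed(0)
num_nodes = 5
train_edges = {(0, 1), (1, 2), (2, 3)}   # positives visible during training
val_edges = {(3, 4)}                     # held out; training must not use it

def sample_negative(known_edges, num_nodes):
    # Rejection-sample a non-self-loop pair that is not a known TRAINING edge.
    # Validation/test positives are deliberately not excluded.
    while True:
        src = torch.randint(num_nodes, (1,)).item()
        dst = torch.randint(num_nodes, (1,)).item()
        if src != dst and (src, dst) not in known_edges:
            return (src, dst)

negatives = {sample_negative(train_edges, num_nodes) for _ in range(20)}
# (3, 4) may show up in `negatives`, since only train_edges were excluded.
```

Excluding val_edges from the sampler would require looking at the held-out split during training, which is exactly the leakage the answer warns against.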

Answer selected by sbonner0