-
Suppose I have ~20,000 large graphs, each with ~1,500,000 nodes. In this case, what would you try in order to dramatically decrease the computing time?
-
This is a very challenging problem. In general, you should always use graph sampling to speed up conversion, but currently all our samplers (like `NeighborLoader`) assume a single graph. One alternative is to pre-process your graphs into smaller subgraphs beforehand, and then try to autoencode these subgraphs instead.
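As a minimal sketch of that pre-processing idea (my own assumption about how one might implement it, not the only way): PyG's `ClusterData` does METIS-based partitioning of one large graph into `num_parts` subgraphs, and the resulting subgraphs can be fed to a graph autoencoder such as `GAE`. It requires `torch-sparse` built with METIS support, and the `graphs` iterable, partition counts, and channel sizes below are hypothetical placeholders:

```python
import os

import torch
from torch_geometric.loader import ClusterData, ClusterLoader
from torch_geometric.nn import GAE, GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, out_channels)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

# `graphs` is a hypothetical iterable of large `Data` objects (~1.5M nodes each).
def subgraph_batches(graphs, num_parts=1500, save_root="partitions"):
    for i, data in enumerate(graphs):
        save_dir = os.path.join(save_root, f"graph_{i}")
        os.makedirs(save_dir, exist_ok=True)
        # METIS partitioning; cached in `save_dir`, so it only runs once per graph.
        cluster = ClusterData(data, num_parts=num_parts, save_dir=save_dir)
        # Each batch is a small, self-contained subgraph.
        yield from ClusterLoader(cluster, batch_size=1, shuffle=True)

model = GAE(Encoder(in_channels=64, hidden_channels=32, out_channels=16))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

def train_epoch(graphs):
    model.train()
    for sub in subgraph_batches(graphs):
        optimizer.zero_grad()
        z = model.encode(sub.x, sub.edge_index)
        # Reconstruction loss on the subgraph's own edges only.
        loss = model.recon_loss(z, sub.edge_index)
        loss.backward()
        optimizer.step()
```

Note the trade-off in `num_parts`: more partitions give smaller subgraphs (faster and lighter per step), but cut more edges at the partition boundaries, so the autoencoder never sees those cross-partition edges.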