[GraphSage]: Is the size always the same between number of layers in SAGE and len(size) in NeighborSampler? #3799
Unanswered · udothemath asked this question in Q&A
Replies: 1 comment · 9 replies
-
You are right. The number of hops (i.e., the number of entries in `sizes`) matches the number of `SAGEConv` layers. A node in the computation graph actually refers to a set of neurons, since each node is described by a set of features. For a GNN whose input, hidden, and output channels all equal 1, a circle describes both a node and a single neuron. Hope this clarifies your issue.
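The one-entry-per-hop correspondence can be sketched without PyG at all. Below is a plain-Python toy simulation of how a sampler like `NeighborSampler` grows the computation graph one hop per entry in `sizes`; the adjacency list, seed node, and sizes are made-up illustrative values, not taken from any of the examples:

```python
import random

random.seed(0)

# Toy adjacency list for a small undirected graph (hypothetical values).
adj = {
    0: [1, 2, 3],
    1: [0, 2],
    2: [0, 1, 3],
    3: [0, 2, 4],
    4: [3],
}

def sample_hops(seed_nodes, sizes):
    """Sample one hop of neighbors per entry in `sizes`, mirroring how
    a neighbor sampler grows the computation graph outward."""
    frontiers = [set(seed_nodes)]
    for size in sizes:
        frontier = set()
        for node in frontiers[-1]:
            neighbors = adj[node]
            k = min(size, len(neighbors))  # cannot sample more than exist
            frontier.update(random.sample(neighbors, k))
        frontiers.append(frontier)
    return frontiers

sizes = [2, 2]                      # one entry per hop ...
frontiers = sample_hops([0], sizes)

# ... so the number of hops sampled equals len(sizes), which is why it
# must match the number of SAGEConv layers: each layer consumes one hop.
assert len(frontiers) == len(sizes) + 1
```

With `sizes=[2, 2]` the sampler produces exactly two frontiers beyond the seed set, matching a two-layer SAGE model.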
-
There are three SAGEConv examples provided by PyTorch Geometric. I am wondering: is the number of neighbor sizes in the data loader always the same as the number of SAGEConv layers?
For example 1: reddit, num_neighbors=[25, 10] in NeighborLoader, and there are two SAGEConv layers in the SAGE(torch.nn.Module) class.


Examples 2 and 3 (listed below) follow the same pattern.
I can understand the SAGEConv concept of sampling from a node's neighbors, but I don't understand how this idea is implemented in a neural network.
In a graph, each circle represents a node.

In a neural network, each circle represents a neuron.

How do these two pictures fit together? Is there a good reference/visualization that explains the concept? Thanks for your help.
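One way to reconcile the two pictures is to treat a circle in the graph drawing as a whole feature vector, i.e. a set of neurons. A minimal plain-Python sketch (the feature values and weights below are hypothetical, and neighbor aggregation and bias are omitted for clarity):

```python
# A graph "node" carries a feature vector; each component feeds one
# input neuron. With in_channels = 3 features per node, one circle in
# the graph drawing corresponds to 3 neurons in the network drawing.
node_features = [0.5, -1.0, 2.0]   # one node, in_channels = 3

# One output neuron of a SAGEConv-style layer is just a weighted sum
# over those features (hypothetical weights; aggregation over sampled
# neighbors and the bias term are omitted here).
weights = [0.1, 0.2, 0.3]
neuron_output = sum(w * x for w, x in zip(weights, node_features))
```

With in_channels = hidden_channels = out_channels = 1 the feature vector has a single entry, so node and neuron coincide, which is the special case mentioned in the reply above.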
--
For example 2: ogbn_products_sage, sizes=[15, 10, 5] in NeighborSampler, and there are three SAGEConv layers in the SAGE model (num_layers=3).
For example 3: graph_sage_unsup, sizes=[10, 10] in NeighborSampler, and num_layers=2 in the SAGE model.
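Across all three examples the invariant is the same: one sampling size per SAGEConv layer. A quick self-check, with the (sizes, num_layers) pairs taken from the examples above:

```python
# (sizes, num_layers) pairs from the three PyG examples discussed above.
examples = {
    "reddit": ([25, 10], 2),
    "ogbn_products_sage": ([15, 10, 5], 3),
    "graph_sage_unsup": ([10, 10], 2),
}

for name, (sizes, num_layers) in examples.items():
    # The sampler produces one hop per entry in `sizes`, and each
    # SAGEConv layer consumes exactly one hop, so the lengths must agree.
    assert len(sizes) == num_layers, name
```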