The way I understand batching in PyTorch Geometric, each batch is one big graph formed by putting the individual graphs together. This is what you observed when you wrote that the effective batch size is 1.

In particular, the adjacency matrices of the individual graphs are stacked into one big block-diagonal matrix, so their original structure remains unaltered.
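
To make the block-diagonal structure concrete, here is a minimal sketch (my own example, not from the thread) that batches two tiny graphs with `Batch.from_data_list`:

```python
import torch
from torch_geometric.data import Data, Batch

# Graph 1: 2 nodes, a single edge 0 -> 1
g1 = Data(x=torch.randn(2, 4), edge_index=torch.tensor([[0], [1]]))
# Graph 2: 3 nodes, edges 0 -> 1 and 1 -> 2
g2 = Data(x=torch.randn(3, 4), edge_index=torch.tensor([[0, 1], [1, 2]]))

batch = Batch.from_data_list([g1, g2])

print(batch.x.shape)     # torch.Size([5, 4]) -- node features are simply concatenated
print(batch.edge_index)  # tensor([[0, 2, 3],
                         #         [1, 3, 4]]) -- graph 2's node indices are shifted by 2
print(batch.batch)       # tensor([0, 0, 1, 1, 1]) -- maps every node to its graph
```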

The reason this way of batching works so well is that, for message-passing graph neural networks, the individual graphs that make up the batch do not affect one another, since they are not connected by any edges.
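
As a quick sanity check (again my own sketch, assuming a standard `GCNConv` layer), running message passing on the batched graph gives the same node embeddings as running it on each graph separately, precisely because no edges cross the graph boundaries:

```python
import torch
from torch_geometric.data import Data, Batch
from torch_geometric.nn import GCNConv

conv = GCNConv(4, 8)

g1 = Data(x=torch.randn(2, 4), edge_index=torch.tensor([[0, 1], [1, 0]]))
g2 = Data(x=torch.randn(3, 4), edge_index=torch.tensor([[0, 1, 2], [1, 2, 0]]))
batch = Batch.from_data_list([g1, g2])

out_batched = conv(batch.x, batch.edge_index)                # one pass over the big graph
out_single = torch.cat([conv(g.x, g.edge_index) for g in (g1, g2)])

print(torch.allclose(out_batched, out_single, atol=1e-6))    # True
```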

All of this is explained nicely in the t…
