Your total loss is not calculated correctly. It should be

total_loss += loss.item() * data.num_nodes
total_examples += data.num_nodes

return total_loss / total_examples

in case your graphs are differently sized. This fixes one of the issues for me.
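
For context, a minimal evaluation loop applying this weighting could look like the sketch below. The names model, loader and loss_fn are placeholders of mine, not taken from your code:

import torch

@torch.no_grad()
def evaluate(model, loader, loss_fn, device):
    # Hypothetical evaluation loop; `model`, `loader` and `loss_fn` are placeholders.
    model.eval()
    total_loss = 0.0
    total_examples = 0
    for data in loader:
        data = data.to(device)
        out = model(data.x, data.edge_index, data.edge_weight, data.batch)
        loss = loss_fn(out, data.y)
        # Weight each batch by its node count so that differently sized graphs
        # contribute proportionally to the final average.
        total_loss += loss.item() * data.num_nodes
        total_examples += data.num_nodes
    return total_loss / total_examples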

The other issue is related to lambda_max. Since you are using normalization=None, automatic lambda_max inference may differ depending on the batch you have sampled, because by default we compute it as 2 * edge_weight.max().

x = self._conv1(x, edge_index, edge_weight, batch, lambda_max=4)
x = self._relu(x)
x = self._conv2(x, edge_index, edge_weight, batch, lambda_max=4)

fixes this for me.

Test Loss: 0.00046314
Test Loss: 0.00046314
Test Loss: 
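
For reference, here is a minimal sketch of how a full module could combine normalization=None with a fixed lambda_max. Everything beyond the two conv calls shown above (layer sizes, K, the module structure) is an assumption on my side:

import torch
import torch.nn.functional as F
from torch_geometric.nn import ChebConv

class ChebNet(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels, K=3):
        super().__init__()
        # With normalization=None, lambda_max would otherwise be inferred
        # per batch as 2 * edge_weight.max() and vary between batches.
        self._conv1 = ChebConv(in_channels, hidden_channels, K, normalization=None)
        self._conv2 = ChebConv(hidden_channels, out_channels, K, normalization=None)

    def forward(self, x, edge_index, edge_weight, batch):
        # Passing a constant lambda_max makes every batch use the same spectral bound.
        x = self._conv1(x, edge_index, edge_weight, batch, lambda_max=4)
        x = F.relu(x)
        x = self._conv2(x, edge_index, edge_weight, batch, lambda_max=4)
        return x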
