Replies: 2 comments 1 reply
-
Sorry for the late reply. Will take a look.
0 replies
-
I get the following output when running on CUDA:
Which PyG version are you using?
1 reply
-
Working on a university project, I have been looking at this example file and was wondering about the different behavior when using "cuda" vs. "cpu" as `device`. ChatGPT tells me it is due to the lazy initialization, but let me elaborate on what I am talking about. Adding print statements like this:
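(This is only a sketch of what I mean; `train_idx`, the `SAGEConv` layer, and the dummy graph below are placeholders I made up, while in the example file the index tensor comes from the dataset and the lazily initialized layers belong to the GNN defined there.)

```python
import torch
from torch_geometric.nn import SAGEConv

# Placeholder setup: stand-ins for the dataset's index tensor and the
# example's GNN, just to show where the print statements go.
device = "cuda" if torch.cuda.is_available() else "cpu"
train_idx = torch.arange(349, device=device)               # stand-in index tensor

conv = SAGEConv(-1, 16).to(device)                          # lazy in_channels = -1
x = torch.randn(349, 32, device=device)                     # dummy node features
edge_index = torch.randint(0, 349, (2, 1000), device=device)

print("Before initialization:", train_idx)

with torch.no_grad():
    # One dummy forward pass so the lazy (-1 sized) weights get materialized.
    conv(x, edge_index)

print("After initialization:", train_idx)
```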
(Output shortened with dots for readability.)
With CPU I observed the following output:
Before initialization: tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
. . .
336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348])
After initialization: tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
. . .
336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348])
While running on CUDA produces the following:
Before initialization: tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
. . .
336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348],
device='cuda:0')
After initialization: tensor([0], device='cuda:0')
And later on, during the loop that evaluates `loss` and `val_acc`: the CPU run delivers a reasonably decreasing loss and a `val_acc` < 1.0, while the CUDA run has a loss of 0 over all epochs and a `val_acc` of 1.0.
I am not sure if I am missing something, but to me this should not be the case. The CUDA results are apparently caused by the GNN always predicting label 0, which in the CUDA case is always counted as correct.
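To make that suspicion concrete, here is a toy calculation with made-up numbers (not taken from the actual run): if the index tensor collapses to `tensor([0])` and the model keeps predicting label 0, then loss and accuracy are computed over that single node only, which trivially gives a loss of roughly 0 and an accuracy of 1.0.

```python
import torch
import torch.nn.functional as F

# Made-up numbers purely for illustration, not values from the actual run.
logits = torch.full((349, 7), -10.0)
logits[:, 0] = 10.0                                  # "always predict label 0"
labels = torch.randint(1, 7, (349,))                 # most nodes have other labels
labels[0] = 0                                        # but node 0 happens to be label 0

full_idx = torch.arange(349)                         # what the CPU run evaluates
collapsed_idx = torch.tensor([0])                    # what the CUDA run ends up with

for name, idx in [("full", full_idx), ("collapsed", collapsed_idx)]:
    loss = F.cross_entropy(logits[idx], labels[idx])
    acc = (logits[idx].argmax(dim=-1) == labels[idx]).float().mean()
    print(name, loss.item(), acc.item())             # collapsed: loss ~ 0, acc = 1.0
```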
I hope this is a reasonable question and description. Feedback and help are very much appreciated. :)