Perplexed by the non-determinism of a model using GIN as an encoder on GPU #8266
Unanswered
ZJUDataIntelligence asked this question in Q&A
Replies: 1 comment · 2 replies
-
Recently, I have encountered a perplexing issue while using GIN as an encoder: I cannot reproduce previous results, even with the same data, the same hyperparameters, and the same random seed. I have tried various ways to track this down, including checking the code and the environment settings, but have found no apparent cause. The irreproducibility is frustrating, as I cannot understand why the same model produces different results across runs.
Later, I read the PyG documentation and came across the section on "Memory-Efficient Aggregations", which states: "As an additional advantage, MessagePassing implementations that utilize the SparseTensor class are deterministic on the GPU since aggregations no longer rely on atomic operations." Does this non-determinism arise from atomic operations? However, I also tried running the SparseTensor version of GIN and found that the results still varied, which has only deepened my confusion.
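For reference, the seeding and environment setup mentioned above looks roughly like the following sketch (these are standard PyTorch switches; the seed value is arbitrary, and everything model-specific is omitted):

```python
import os
import random

import numpy as np
import torch

# Required by some cuBLAS kernels once deterministic algorithms are
# requested; must be set before any CUDA work happens.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

# Fix all relevant RNGs.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)

# Ask PyTorch to error out on non-deterministic kernels instead of
# silently using them (scatter_add-style atomics are a common culprit).
torch.use_deterministic_algorithms(True)

# cuDNN: prefer deterministic algorithms, disable autotuning.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```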
-
Yes, atomic operations are non-deterministic in the sense that the order of aggregation is undefined, which may lead to slightly different results due to how floating-point arithmetic works. This is indeed resolved by the usage of SparseTensor.
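The order sensitivity is easy to see in isolation: floating-point addition is not associative, so summing the same neighbor messages in a different order can give bitwise-different results. A minimal check:

```python
>>> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False
```

And a minimal sketch of the SparseTensor route (the toy graph and feature sizes below are made up; `T.ToSparseTensor` and `GINConv` are standard PyG):

```python
import torch
from torch import nn
import torch_geometric.transforms as T
from torch_geometric.data import Data
from torch_geometric.nn import GINConv

# Toy graph: 4 nodes on a directed cycle, 16 input features.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
data = Data(x=torch.randn(4, 16), edge_index=edge_index)

# Replace edge_index with a (transposed) SparseTensor adjacency, data.adj_t.
# MessagePassing layers then aggregate via sparse-matrix routines instead
# of scatter_add atomics, so the reduction order is fixed.
data = T.ToSparseTensor()(data)

conv = GINConv(nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32)))
out = conv(data.x, data.adj_t)  # deterministic aggregation, also on GPU
```

Note that this only fixes the aggregation step; other kernels in the model can still be non-deterministic unless `torch.use_deterministic_algorithms(True)` is set, which may explain results that continue to vary.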