I have been trying to use the NNConv-on-QM9 implementation from the repo to get graph embeddings on QM9. However, when I train the model on some targets (for example target 6, which is zpve), training is extremely unstable: the training loss seems to converge fine, but the MAE on validation, and especially on test, can sometimes grow to an absurd value (1e10 and more), then return to normal the very next epoch. I even tried training the vanilla implementation here without any modifications, but that didn't stop it. I have no idea how to fix this, or whether it is actually supposed to behave this way and I'm just misreading it.
Thanks in advance for any help!
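One thing worth double-checking: whether your pipeline standardizes the regression targets before training and un-standardizes the predictions before reporting MAE. Targets like zpve live on a small absolute scale, and a mismatch between the scale the loss is optimized on and the scale the MAE is reported on is a common source of wildly inflated validation/test numbers. Below is a minimal pure-Python sketch of the idea (the function names and toy data are my own, not from the repo):

```python
# Sketch: standardize targets for training, then map predictions back to
# the original units before computing MAE. Toy illustration only.
import statistics

def standardize(targets):
    """Return standardized targets plus the (mean, std) needed to invert."""
    mean = statistics.fmean(targets)
    std = statistics.pstdev(targets)
    return [(t - mean) / std for t in targets], mean, std

def mae_in_original_units(preds_std, targets, mean, std):
    """MAE computed after mapping standardized predictions back."""
    preds = [p * std + mean for p in preds_std]
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(targets)

# Toy targets on a zpve-like scale.
targets = [0.10, 0.12, 0.11, 0.13, 0.09]
t_std, mean, std = standardize(targets)

# A model that perfectly predicts the standardized targets should report
# (near-)zero MAE once its outputs are mapped back to original units.
assert mae_in_original_units(t_std, targets, mean, std) < 1e-12
```

If the standardization is already in place, the usual next suspects for intermittent MAE spikes are a too-large learning rate or missing gradient clipping, but I'd verify the target scaling first since it is the cheapest thing to rule out.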