Gradient Penalty and AMP #12687
Unanswered · FeryET asked this question in code help: CV · Replies: 0 comments
I am trying to implement a gradient penalty for a GAN. From PyTorch's documentation, it seems the gradients should be computed from the scaled loss and then unscaled manually. Should I write a completely manual training loop for this, or can I implement it in `training_step` with `manual_backward`? And how can I access the gradient scaler to unscale the gradient tensors? Thanks!