Even with automatic differentiation, there are still, in practice, two renderings:

  • The first one is used to compute the loss (this is the eval() path in the RenderOp).
  • The second one happens once you call dr.backward(loss) (it triggers the backward() path in the RenderOp). This pass uses automatic differentiation on its output, but its samples must still be de-correlated from those of the primal rendering.
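
To make the two passes concrete, here is a minimal sketch of one optimization step in Mitsuba 3. This is only a sketch: the scene file, parameter key, sample counts, and seed choices are placeholder assumptions, and the seed/seed_grad arguments of mi.render are used here under the assumption that they are the intended way to keep the differential pass de-correlated from the primal one.

```python
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # any *_ad_* variant works

scene = mi.load_file('scene.xml')           # placeholder scene
params = mi.traverse(scene)
key = 'object.bsdf.reflectance.value'       # placeholder parameter key
dr.enable_grad(params[key])
params.update()

img_ref = mi.render(scene, spp=256)         # reference image for the loss

for it in range(50):
    # First rendering: the primal/eval() path, used to compute the loss.
    img = mi.render(scene, params, spp=4, seed=it, seed_grad=it + 1)

    loss = dr.mean((img - img_ref) ** 2)

    # Second rendering: dr.backward() triggers the backward() path of the
    # render operation. Its samples come from seed_grad, so they are
    # de-correlated from the primal pass above.
    dr.backward(loss)

    # ... gradient step on params[key], then params.update() ...
```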

Does that make sense?
