-
Hello, I'd like to modify an intermediate gradient and then recalculate the parameters' gradients. For example, in the gradient-based optimization tutorial, I was able to get the gradient of the image (dL/dI) using backward(loss, dr.ADFlag.ClearEdges). After modifying this gradient, I want to recalculate the parameters' gradients using the modified dL/dI (e.g., modified dL/dI * dI/d(opt[key])).
I know that the gradient of the image can be modified using dr.set_grad(image, ...), but I couldn't find a way to recalculate using the modified gradient. Can anyone point me in the right direction here? Thanks!
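Here is a small self-contained toy version of what I mean, using plain Dr.Jit with the LLVM backend; theta stands in for opt[key] and a simple squaring op stands in for rendering, so all the names and values are only placeholders:

```python
import drjit as dr
from drjit.llvm.ad import Float

theta = Float(1.0, 2.0, 3.0)       # plays the role of opt[key]
dr.enable_grad(theta)
image = theta * theta              # plays the role of the rendered image
loss = dr.sum(image)               # plays the role of mse(image)

# This gives me dL/dI on the intermediate variable
dr.backward(loss, dr.ADFlag.ClearEdges)
dLdI = dr.grad(image)

# I can overwrite the intermediate gradient ...
dr.set_grad(image, 2.0 * dLdI)

# ... but how do I now propagate the modified dL/dI to theta,
# i.e. compute (modified dL/dI) * dI/d(theta)?
```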
-
Hi @ryukang,
There are fine controls available in Dr.Jit for gradient propagation. In particular, you might want to have a look at:

```python
loss = mse(image)
dr.backward_to(image)
dr.set_grad(image, modified_value)
dr.backward(image)
```

EDIT: Here is the corrected version:

```python
loss = mse(image)
dr.backward_to(image)                        # Differentiate through the loss function
dr.backward(image, dr.ADFlag.ClearInterior)  # Propagate to the parameters without destroying the graph edges

# Prepare the new, modified gradient
modified_value = some_function(dr.grad(image), dr.grad(opt[key]))
dr.set_grad(image, modified_value)
dr.backward(image)
```
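For context, here is a rough sketch of the surrounding optimization loop this slots into, following the gradient-based optimization tutorial; the scene, parameter key, and optimizer settings below are just that tutorial's defaults, not something required by your setup:

```python
import drjit as dr
import mitsuba as mi

mi.set_variant('llvm_ad_rgb')               # any *_ad_* variant works

scene = mi.load_dict(mi.cornell_box())
params = mi.traverse(scene)
key = 'red.reflectance.value'               # parameter being optimized

image_ref = mi.render(scene, spp=128)       # reference image
params[key] = mi.Color3f(0.01, 0.2, 0.9)    # perturb the parameter
params.update()

opt = mi.ad.Adam(lr=0.05)
opt[key] = params[key]
params.update(opt)

def mse(image):
    return dr.mean((image - image_ref) ** 2)

for it in range(50):
    image = mi.render(scene, params, spp=4)
    loss = mse(image)

    # In place of a plain dr.backward(loss), run the modified-gradient
    # propagation shown above, ending with dr.backward(image).

    opt.step()           # gradient step on opt[key], driven by dr.grad(opt[key])
    params.update(opt)   # write the updated parameter back into the scene
```

opt.step() consumes whatever gradient is attached to opt[key] at that point, so as long as the propagation above has run first, the parameter update is driven by the modified dL/dI.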
Sorry, I responded too quickly. You are correct: dr.backward and dr.forward are just aliases for, respectively, dr.backward_from and dr.forward_from. Both functions set the variable's gradient to 1 before propagating. Here's my corrected suggestion, added as an EDIT to my reply above.
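To make that concrete, here is a minimal standalone sketch (plain Dr.Jit, assuming the LLVM backend is available; the values are arbitrary) of how dr.backward discards a manually assigned gradient:

```python
import drjit as dr
from drjit.llvm.ad import Float

x = Float(2.0)
dr.enable_grad(x)
y = x * x                  # dy/dx = 4 at x = 2

dr.set_grad(y, 3.0)        # manually assign a gradient to y ...
dr.backward(y)             # ... but backward (= backward_from) resets it to 1 first

print(dr.grad(x))          # [4]  (dy/dx * 1), not [12] (dy/dx * 3)
```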