Hi @rajeshsharma-ai

I'd recommend you carefully go through this tutorial and also learn a bit more about the PyTorch autograd layer. Unfortunately, this feature of Dr.Jit is fairly low-level and therefore requires some intuition about how automatic differentiation works.

In short, it is only possible to mix the two frameworks by capturing all of the computation of one framework inside a function decorated with wrap_ad.
Simply calling drjit_variable.torch() will not carry gradients over; it just creates a new, detached variable in PyTorch. In your setup, this means the inputs to the loss are completely detached from the BSDF evaluation and hence from the MLP. In the tutorial I linked above, you'll see…
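To illustrate the pattern, here is a minimal sketch (not from the thread) of evaluating a PyTorch network from Dr.Jit code, assuming the Dr.Jit 0.4-era `dr.wrap_ad` API (renamed `dr.wrap` in later releases) and a toy `torch.nn.Linear` standing in for the MLP; the names `eval_mlp`, `mlp`, and `x` are illustrative only:

```python
import drjit as dr
import torch
from drjit.llvm.ad import Float  # any AD-enabled Dr.Jit backend works

# Toy stand-in for the MLP evaluated inside the BSDF (illustrative only)
mlp = torch.nn.Linear(1, 1)

@dr.wrap_ad(source='drjit', target='torch')
def eval_mlp(x):
    # Inside the wrapped function, `x` is a torch.Tensor tracked by
    # PyTorch's autograd; the decorator converts inputs and outputs and
    # routes gradients across the framework boundary.
    return mlp(x.unsqueeze(-1)).squeeze(-1)

x = Float([0.1, 0.2, 0.3])
dr.enable_grad(x)

y = eval_mlp(x)            # y is a Dr.Jit variable attached to the AD graph
loss = dr.sum(dr.sqr(y))   # loss computed on the Dr.Jit side
dr.backward(loss)          # gradients flow through the torch MLP back to x

print(dr.grad(x))
print(mlp.weight.grad)     # the torch parameters should also receive gradients
```

The key point is that everything done in PyTorch lives inside the decorated function, so the decorator can splice the two AD graphs together. For the opposite setup (optimizing in PyTorch while rendering with Dr.Jit), the linked tutorial uses the reversed arguments, `source='torch', target='drjit'`.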
