Hi, thank you for the awesome work with Mitsuba 3! I was wondering how we can profile render times.
I know for a fact that
Is there a better way to do this profiling? Thanks!
Hi @AakashKT

There's a feature called `KernelHistory` which allows us to get the actual runtimes of the CUDA/OptiX kernels. I've given a brief explanation on how to use it in this comment. If the kernel runtimes are similar for the two `mi.render` calls, then the only difference is the time it takes to trace through the Python integrator to build the JIT graph. (Note: there should typically be two kernels per `mi.render` call.)

If the kernel runtimes differ widely, it most likely has to do with the nature of your integrators. The recommended way to profile any deeper would be to remove parts of your integrators one at a time to understand the runtime costs that they entail.
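For reference, a minimal sketch of how the kernel-history workflow might look. This assumes a CUDA-capable Mitsuba 3 build and that Dr.Jit's `kernel_history()` returns a list of records whose `'execution_time'` field holds the runtime in milliseconds; the helper and variant name below are illustrative, not part of the official API:

```python
def summarize_kernel_history(records):
    """Sum the execution time (ms) over a list of kernel-history records.

    Each record is assumed to be a dict with an 'execution_time' entry,
    which is the field Dr.Jit's kernel history reports per launched kernel.
    """
    return sum(r.get('execution_time', 0.0) for r in records)


def profile_render():
    """Hypothetical example: time the kernels launched by one mi.render call."""
    import drjit as dr
    import mitsuba as mi

    mi.set_variant('cuda_ad_rgb')  # assumes a CUDA-enabled build

    # Enable recording of kernel launches before rendering
    dr.set_flag(dr.JitFlag.KernelHistory, True)

    scene = mi.load_dict(mi.cornell_box())
    img = mi.render(scene, spp=16)
    dr.eval(img)  # ensure all kernels have actually executed

    records = dr.kernel_history()  # fetch (and clear) the recorded launches
    print(f'{len(records)} kernels, '
          f'{summarize_kernel_history(records):.2f} ms total')
```

Comparing the totals printed for your two `mi.render` calls should then tell you whether the gap comes from the kernels themselves or from the Python-side tracing overhead.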