Proper way to incorporate transforms into scenes when using cuda #622
I notice that in the tutorials, scalar transform types are imported and used explicitly when constructing scenes. This works for 3D transforms (4x4 matrices), but I feel like this might somehow be the wrong approach, since I cannot follow a basic example on transforming a texture using the 'to_uv' attribute, which accepts a ScalarTransform3f, or so it seems. When I provide an object of the type mi.scalar_rgb.ScalarTransform3f, I get an error.
Replies: 1 comment
The scene parsing is all done without the JIT and on the CPU; it is a purely scalar CPU process. Making it able to handle JIT types would make this piece of code variant-dependent. In addition, specifically for CUDA variants, an implicit and costly GPU->CPU memory transfer would take place.
The current solution, although verbose, makes memory migrations explicit and keeps the scene loading variant-independent.
I don't know if your linter is being aggressive, but the `mitsuba/stubs/` folder does have information for `scalar_rgb` in the `scalar_rgb.pyi` file. If this is truly annoying, I'd welcome any information about why this is currently not working in your setup.

I literally just noticed …