-
Hey folks, so I've been playing around with writing a custom integrator for some time. My idea here is just to expose an AO integrator (similar to the old-school one in mitsuba 0.6: https://github.com/mitsuba-renderer/mitsuba/blob/master/src/integrators/direct/ao.cpp). I managed to get something working with the scalar_rgb variant but, as the warning we get suggests, it's extremely slow, and it's taking a while to generate a fairly low-res image. Here is the scalar version:
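Roughly something along these lines (a minimal sketch of the scalar approach, assuming uniform hemisphere sampling as in the mitsuba 0.6 plugin; the ao_samples count is an arbitrary placeholder):

import mitsuba as mi

mi.set_variant('scalar_rgb')

class AOIntegrator(mi.SamplingIntegrator):
    def __init__(self, props):
        mi.SamplingIntegrator.__init__(self, props)

    def sample(self, scene, sampler, ray, medium, active):
        # In scalar mode, each call handles exactly one camera ray.
        si = scene.ray_intersect(ray)
        if not si.is_valid():
            return mi.Spectrum(0.0), False, []
        ao_samples = 16
        hits = 0
        for _ in range(ao_samples):
            # Sample a direction on the hemisphere around the shading
            # normal and cast a short occlusion ray.
            wo = mi.warp.square_to_uniform_hemisphere(sampler.next_2d())
            ao_ray = si.spawn_ray(si.sh_frame.to_world(wo))
            ao_ray.maxt = 1
            if not scene.ray_test(ao_ray):
                hits += 1
        return mi.Spectrum(hits / ao_samples), True, []

mi.register_integrator("ao", lambda props: AOIntegrator(props))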
The film is 512x512 and sample_count is fixed to 1, but this is still taking a long time. And if I turn this into the cuda_ad_rgb variant, I start running into problems. It seems that I'm getting an input array of directions, and if I pass that to scene.ray_test it also returns an array of results. But I'm unsure what the return type of this "sample" should be. An array of tuples? Or what exactly? Anyway, sorry for the silly question; this is probably all due to my ignorance of Python.
-
Hi! You might want to go through the tutorials we have; there's even one that implements a basic AO renderer (albeit without the inner sampling loop).
Here's the API reference for SamplingIntegrator.
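In short: in the JIT variants, sample() is invoked once on wide arrays (one lane per ray) rather than once per ray, and it returns a single (spectrum, validity mask, AOV list) triple whose elements are themselves array-valued, not an array of tuples. A minimal sketch (the depth-as-color payload is just a placeholder):

import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

class DepthIntegrator(mi.SamplingIntegrator):
    def __init__(self, props=mi.Properties()):
        mi.SamplingIntegrator.__init__(self, props)

    def sample(self, scene, sampler, ray, medium, active):
        # 'ray' carries one ray per lane; every quantity below is a
        # wide array with one entry per lane.
        si = scene.ray_intersect(ray)
        color = mi.Spectrum(dr.select(si.is_valid(), si.t, 0.0))
        # A single (spectrum, validity mask, AOV list) triple of
        # array-valued elements -- not an array of tuples.
        return color, si.is_valid(), []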
-
Thanks @njroussel, following the rendering tutorial I managed to get this working. Honestly, I pretty much copied the same logic from there; I'm still a bit confused about the vectorization stuff, since I'm assuming sample_inner_loop ends up being called n times (n being the number of rays). As a reference, this is how it ended up looking (for anyone who may be trying the same):

import mitsuba as mi
import drjit

mi.set_variant('cuda_ad_rgb')

class AOIntegrator(mi.SamplingIntegrator):
    def __init__(self, props):
        mi.SamplingIntegrator.__init__(self, props)

    def sample(self, scene, sampler, ray, medium, active):
        ao_samples = 1024

        # One primary intersection per lane of the wavefront.
        query = scene.ray_intersect(ray)
        res = mi.Float(0)  # accumulator for unoccluded samples

        # @drjit.syntax records the Python 'while' below as a symbolic
        # Dr.Jit loop that executes per lane on the GPU.
        @drjit.syntax
        def sample_inner_loop(result, si, sampler, count):
            i = mi.UInt32(0)
            while si.is_valid() & (i < count):
                # Sample a direction on the hemisphere around the shading
                # normal and cast a short occlusion ray.
                rnd = sampler.next_2d()
                wo = mi.warp.square_to_uniform_hemisphere(rnd)
                wo = si.sh_frame.to_world(wo)
                ao_ray = si.spawn_ray(wo)
                ao_ray.maxt = 1
                # Count the directions that are *not* occluded.
                result[~scene.ray_test(ao_ray)] += 1.0
                i += 1
            return result / count

        res = sample_inner_loop(res, query, sampler, ao_samples)
        # (spectrum, validity mask, list of AOVs)
        return res, False, []

mi.register_integrator("ao", lambda props: AOIntegrator(props))
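If anyone wants to try it quickly, something along these lines should work (the spp and output filename are arbitrary; mi.cornell_box() is just a convenient built-in test scene):

# Assumes the registration above has already run.
scene_dict = mi.cornell_box()
scene_dict['integrator'] = {'type': 'ao'}
scene = mi.load_dict(scene_dict)
img = mi.render(scene, spp=1)
mi.util.write_bitmap('ao.exr', img)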