-
I am trying to render a scene from different viewpoints on a sphere. The resulting images will be used to train a NeRF on the scene. I am doing this for thousands of objects, so I need the fastest/most efficient way to do it. I couldn't find any examples of using the batch sensor in the documentation. What I am currently doing looks like this:

```python
import numpy as np
import mitsuba as mi

def fibonacci_sphere(samples=100):
    points = []
    phi = np.pi * (3. - np.sqrt(5.))  # golden angle in radians
    for i in range(samples):
        y = 1 - (i / float(samples - 1)) * 2  # y goes from 1 to -1
        radius = np.sqrt(1 - y * y)  # radius at y
        theta = phi * i  # golden angle increment
        x = np.cos(theta) * radius
        z = np.sin(theta) * radius
        points.append((x, y, z))
    return np.array(points)

points_on_sph = fibonacci_sphere() * 6.5

for indx, origin in enumerate(points_on_sph):
    sen = mi.load_dict({
        'type': 'perspective',
        'fov': 30,
        'to_world': mi.ScalarTransform4f.look_at(
            origin=origin,
            target=[0, -0.75, 0],
            up=[0, 1, 0]
        ),
        'sampler': {
            'type': 'multijitter',
            'sample_count': 64
        },
        'film': {
            'type': 'hdrfilm',
            'width': 512,
            'height': 512,
            'rfilter': {
                'type': 'tent',
            },
            'pixel_format': 'rgb',
            'component_format': 'float32'
        },
    })
    im = mi.render(scene, spp=64, sensor=sen)
    im = mi.util.convert_to_bitmap(im)
    im.write(f'{indx}.jpg')
```

but it is too slow to use on the whole dataset. So, my questions are:
Replies: 2 comments 3 replies
-
Hi @a-3wais

Here's an example of the `batch` sensor:

```python
sensor_description = {
    'type': 'batch',
}

for idx, origin in enumerate(points):
    sensor_description[f"persp_{idx}"] = {
        'type': 'perspective',
        'fov': 45,
        'to_world': transform_for_point(origin),
        'film': (...),
    }

sensor = mi.load_dict(sensor_description)
img = mi.render(scene, spp=64, sensor=sensor)
```

This might shave off some runtime, namely the tracing/jiting of the `render` function. However, you will still need to split the output images (the `batch` sensor stitches the different sensors together). There might be some performance trade-off here.

A `direct` integrator is your best option for your set… I can't really comment on the choice of sampler. I'd recommend benchmarking it yourself, and please share your results with us :).
-
Hi, I was wondering: what if I need to render many images while changing only one scene parameter, such as a different geometry in the same scene, with the camera and everything else fixed? In that situation every iteration seems very slow, since the scene is rendered many times to compute a total loss. Would you kindly give me some advice on how to render faster in that case?