Inverse rendering speed is very slow #1048
-
Hi! I modified the "Gradient-based optimization" tutorial a bit and changed the scene to the "Material preview" scene (from the Gallery in the documentation). I set the optimization parameter to "bsdf-diffuse.reflectance.value" and let it optimize the scene so that it approaches the reference image (a uniformly red image). However, execution suddenly slows down, and in the worst case the run aborts with:

```
Critical Dr.Jit compiler failure: cuda_check(): API error 0700 (CUDA_ERROR_ILLEGAL_ADDRESS): "an illegal memory access was encountered"
```

Am I doing something wrong in my implementation?

```python
import mitsuba as mi
import drjit as dr
import matplotlib.pyplot as plt
import numpy as np

print(mi.variants())
mi.set_variant("cuda_ad_rgb")

scene = mi.load_file("scene.xml")

# Load the reference image (rendered at 512 samples per pixel)
bitmap_ref = mi.Bitmap('basecolor_ref/C3_Red_ref.jpg').convert(
    mi.Bitmap.PixelFormat.RGB, mi.Struct.Type.Float32, srgb_gamma=False)
image_ref = mi.TensorXf(bitmap_ref)

params = mi.traverse(scene)
key = "bsdf-matpreview.base_color.value"
param_ref = mi.Color3f(params[key])
params.update()

opt = mi.ad.Adam(lr=0.05)
opt[key] = params[key]
params.update(opt)

image_init = mi.render(scene, spp=512)

def mse(image):
    loss = dr.mean(dr.sqr(image - image_ref))
    print(loss)
    return loss

iteration_count = 50
errors = []
for it in range(iteration_count):
    image = mi.render(scene, params, spp=4)
    loss = mse(image)
    dr.backward(loss)
    opt.step()
    opt[key] = dr.clamp(opt[key], 0.0, 1.0)
    params.update(opt)
    err_ref = dr.sum(dr.sqr(param_ref - params[key]))
    # print(f"Iteration {it:02d}: parameter error = {err_ref[0]:6f}", end='\r')
    errors.append(err_ref)

print('\nOptimization complete.')

image_final = mi.render(scene, spp=512)
mi.util.convert_to_bitmap(image_final)
plt.axis("off")
plt.imshow(image_final ** (1.0 / 2.2))
plt.show()
```
-
Hi @Bambootree0818

Sorry, this got lost in my GitHub notifications.

I don't immediately see anything wrong here. If you set the `drjit` log level to `Info`, you'll be able to see all the kernel launches. At every iteration you should be seeing the same set of kernels. If they suddenly change and that's when you see the slowdown, then something terribly wrong is happening.

In addition, you might want to use another integrator. I'm not sure which one is used here, but you should be able to use an integrator that defines an explicit adjoint pass, like `prb`. Not that it should change the correctness of the code, but these are more robust (we test them extensively).

If this persists, please open an issue with all your information and a minimal reproducer.