-
Hello! First of all, thank you for the amazing work and for providing a great platform for asking questions. The other day I came across the tutorial Optimizing a Mesh Using a Differentiable Renderer, which shows mesh optimization using Kaolin. During optimization, it also computes a Laplacian loss. Combining Kaolin** and Mitsuba 3, I wanted to test a similar approach, but unfortunately I could not manage to do it. Has anyone tried something similar? What would be the best/easiest way to calculate the Laplacian loss (in Mitsuba) during vertex position optimization? I would appreciate any hints/tips. Thank you!

** Similar to the linked tutorial, I tried to calculate the Laplacian loss (using uniform_laplacian); however, in my case the vertex positions (and in fact all of the optimization steps) are handled by Mitsuba. My optimization loop closely follows the official Mitsuba tutorial.
-
Hi @sapo17
This is indeed a bit complicated in Mitsuba. Here are a few things that come to mind:
1. You will need a matrix_multiply(a: mi.TensorXf, b: mi.TensorXf) routine written using Dr.Jit operations only (to preserve automatic differentiation).
2. uniform_laplacian is awful to do in Mitsuba. Since it's only computed once per mesh/topology, you can safely build it with some other framework and convert it to a Mitsuba/DrJit type (see the sketch after this list).
3. When you call mi.traverse() on a Mesh, you'll get vertex_positions and faces as flattened arrays. Internally the vertex_positions parameter is a bit un-co…
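Regarding the second point, here is a minimal sketch of what "build it elsewhere and convert" could look like, assuming SciPy is available and the faces come in as an (F, 3) integer array. The function name and variant choice are illustrative assumptions, not code from this thread:

import numpy as np
import scipy.sparse as sp
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # any AD-capable variant works

def uniform_laplacian_scipy(n_verts: int, faces: np.ndarray) -> mi.TensorXf:
    # Uniform Laplacian L = D - A, assembled once on the CPU. It only
    # depends on topology, so no gradients need to flow through it.
    ii = faces[:, [0, 1, 2]].ravel()
    jj = faces[:, [1, 2, 0]].ravel()
    ones = np.ones(ii.shape[0])
    adj = sp.coo_matrix((ones, (ii, jj)), shape=(n_verts, n_verts))
    adj = adj + adj.T                    # make the adjacency symmetric
    adj = (adj > 0).astype(np.float32)   # clamp duplicated edges to 1
    deg = np.asarray(adj.sum(axis=1)).ravel()
    lap = sp.diags(deg) - adj
    return mi.TensorXf(np.asarray(lap.todense(), dtype=np.float32))

For large meshes you would want to keep the matrix sparse and apply it edge-wise instead; the dense conversion here only makes sense for small meshes and matches the dense matmul approach discussed below.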
-
Hi,

import drjit as dr
import mitsuba as mi

def matmul(a: mi.TensorXf, b: mi.TensorXf) -> mi.TensorXf:
    # Dense matrix product written with Dr.Jit operations only,
    # so automatic differentiation is preserved.
    assert len(a.shape) == 2
    assert a.shape[1] == b.shape[0]
    N = a.shape[0]
    M = b.shape[1]
    K = b.shape[0]
    # Index pairs (i, j) for every entry of the (N, M) result
    i, j = dr.arange(mi.UInt, N), dr.arange(mi.UInt, M)
    j, i = dr.meshgrid(j, i)
    k = mi.UInt(0)
    dst = dr.zeros(mi.Float, N * M)  # one accumulator per output entry
    loop = mi.Loop(name="matmul", state=lambda: (k, dst))
    loop.set_max_iterations(K)
    while loop(k < K):
        dst += dr.gather(mi.Float, a.array, i * a.shape[1] + k) * dr.gather(
            mi.Float, b.array, k * b.shape[1] + j
        )
        k += 1
    return mi.TensorXf(dst, shape=(N, M))

def laplacian_uniform(verts: mi.Float, faces: mi.UInt) -> mi.TensorXf:
    # Dense uniform graph Laplacian: -1 for every edge, the vertex
    # degree on the diagonal, so each row sums to zero.
    verts: mi.Point3f = dr.unravel(mi.Point3f, verts)
    faces: mi.Vector3u = dr.unravel(mi.Vector3u, faces)
    n_verts = dr.shape(verts)[-1]
    adj = dr.zeros(mi.Float, n_verts * n_verts)
    w = -1

    def scatter_weights(i, j):
        # Write the edge weight at (i, j) and (j, i)
        dr.scatter(adj, w, i * n_verts + j)
        dr.scatter(adj, w, j * n_verts + i)

    scatter_weights(faces.x, faces.y)
    scatter_weights(faces.y, faces.z)
    scatter_weights(faces.x, faces.z)

    # Diagonal entries: accumulate the negated row sums
    i = dr.arange(mi.UInt, n_verts)
    j = dr.arange(mi.UInt, n_verts)
    i, j = dr.meshgrid(i, j)
    dr.scatter_reduce(
        dr.ReduceOp.Add,
        adj,
        -dr.gather(mi.Float, adj, i * n_verts + j),
        i * n_verts + i,
    )
    return mi.TensorXf(adj, shape=(n_verts, n_verts))

This implementation only calculates the uniform Laplacian; however, I think this would be extendable to calculate the cosine Laplacian matrix.
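To tie this back to the original question, here is one way these two helpers could enter a Mitsuba vertex-position optimization loop. This is a hypothetical sketch building on the definitions above: the parameter keys 'mesh.vertex_positions' / 'mesh.faces' and the weight lambda_smooth depend on your scene and are assumptions, not code from this thread.

# Assumes `scene` was loaded (e.g. via mi.load_file) and the imports above.
# Build the (topology-only) Laplacian once, outside the loop:
params = mi.traverse(scene)
L = laplacian_uniform(params['mesh.vertex_positions'], params['mesh.faces'])

def laplacian_loss(verts: mi.Float) -> mi.Float:
    # Treat the flattened positions as an (n_verts, 3) tensor and
    # penalize the squared norm of L @ V (uniform Laplacian smoothing)
    n_verts = L.shape[0]
    V = mi.TensorXf(verts, shape=(n_verts, 3))
    LV = matmul(L, V)
    return dr.sum(dr.sqr(LV.array)) / n_verts

# Inside the optimization loop, add the regularizer to the image loss:
#   loss = image_loss + lambda_smooth * laplacian_loss(params['mesh.vertex_positions'])

Note that a dense n_verts x n_verts tensor gets expensive quickly, so this is only practical for small meshes.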