Add interface for depth in both forward rendering and backward propagation #5
ingra14m wants to merge 4 commits into graphdeco-inria:main
Conversation
Should there be a second part of the gradient affected by the depth loss? Is the dL_dtz in L264 also affected by dL_depths?
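For reference, the second gradient term the question is pointing at falls out of the chain rule: the rendered depth is an alpha-weighted sum of per-Gaussian view-space depths, so dD/dt_z for each Gaussian is just its compositing weight, and a depth loss contributes dL_dD * w to dL_dtz. A plain-Python sketch of the idea (not the actual CUDA kernel; all names here are illustrative):

```python
def composite_depth(alphas, depths):
    """Front-to-back alpha compositing of per-Gaussian depths.

    Returns the composited depth D and the per-Gaussian weights w_i,
    where w_i = alpha_i * prod_{j<i}(1 - alpha_j).
    """
    D, T = 0.0, 1.0  # accumulated depth, remaining transmittance
    weights = []
    for a, t in zip(alphas, depths):
        w = a * T
        D += w * t
        weights.append(w)
        T *= (1.0 - a)
    return D, weights

def backward_depth(dL_dD, weights):
    """Since dD/dt_i = w_i, the depth loss adds dL_dD * w_i to dL_dt_i."""
    return [dL_dD * w for w in weights]

# Toy example: three Gaussians along a ray, L = (D - D_gt)^2 with D_gt = 3.0.
D, w = composite_depth([0.5, 0.3, 0.8], [1.0, 2.0, 4.0])
grads = backward_depth(2.0 * (D - 3.0), w)
```

In the actual backward kernel this would mean accumulating an extra `dL_ddepth * w` term into `dL_dtz` alongside the existing color gradient.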
Added submodule license
Force-pushed from 2eb32ea to 8fa430b
Hi, thanks for your work. Have you transformed the rendered depths into 3D points? I tested your latest branch on the bicycle scene from the Mip-NeRF 360 dataset, and the rendered depths seem to have some errors; the rendered depths from different views are not consistent. Looking forward to your reply, thanks! @ingra14m
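One way to check the consistency being described is to back-project each view's depth map into 3D and compare the resulting point clouds. A minimal pinhole-camera sketch (the intrinsics fx, fy, cx, cy are illustrative, not values from this repo):

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with view-space z-depth to a camera-space point."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)
```

One caveat: if the rasterizer composites view-space z (t_z) rather than distance along the ray, `depth` here must be interpreted as z; mixing the two conventions will itself make depths from different views look inconsistent.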
If I remember correctly: first, I set the parts where alpha = 0 to be black. Second, since the depth rendered directly from Blender shows deeper points in darker colors, I used …
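The reply is truncated, but the two steps it does describe (masking uncovered pixels to black, and matching Blender's darker-is-deeper convention by inverting) can be sketched as follows. This is a hypothetical reconstruction for illustration, not code from the PR:

```python
def depth_to_gray(depth, alpha):
    """Normalize a rendered depth map for visualization.

    Pixels with alpha == 0 are set to black; covered pixels are
    min-max normalized and inverted so nearer points appear brighter
    (matching Blender's convention of darker = deeper).
    """
    vals = [d for d, a in zip(depth, alpha) if a > 0]
    lo, hi = min(vals), max(vals)
    out = []
    for d, a in zip(depth, alpha):
        if a == 0:
            out.append(0.0)                          # background -> black
        else:
            out.append(1.0 - (d - lo) / (hi - lo))   # far -> dark
    return out
```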
What's the status of this branch? Does including the depth information improve the splat fitting or rendering? |
Hi @arcman7, from my perspective, depth supervision cannot improve the rendering quality of 3D Gaussian Splatting; better geometry does not necessarily lead to better rendering quality.
Ah okay, thanks for the heads up. It was looking really hopeful when I was going through all of the related discussion threads and experiments people had set up.
Hey, by the way, I wanted to ask you a quick question, and I didn't think creating another issue on this busy repo was the way to go: is there a way to render a pretrained Gaussian model without setting up training gradient tensors in CUDA? I'm not trying to improve or modify a pre-existing Gaussian splat model; I just want to rasterize it from its various camera viewpoints using the existing PyTorch rasterization methods in this repo.
Hi, |
Test on a self-defined dataset with GT depth


Without depth loss:
With depth loss:

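The comparison above presumably adds a depth term to the photometric loss. The thread does not pin down the exact form, but a common, illustrative choice is an L1 loss over covered pixels:

```python
def depth_l1_loss(pred, gt, alpha):
    """Masked L1 between rendered and ground-truth depth.

    Hypothetical sketch: only pixels with coverage (alpha > 0)
    contribute, so empty background does not pull the depths.
    """
    terms = [abs(p - g) for p, g, a in zip(pred, gt, alpha) if a > 0]
    return sum(terms) / len(terms) if terms else 0.0
```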