-
Hello, there are a number of cool approaches to making NeRF reconstructions more plausible. RegNeRF and RefNeRF impose surface priors that are quite good at preventing floaters, especially when only a few training viewpoints are available. Depth supervision in general (e.g. when a depth camera is used) can also help a ton. Another option is to optimize a mesh directly, as done in https://github.com/NVlabs/nvdiffrec. This latter approach might not scale to the scene sizes you're interested in, but it's probably worth a closer look regardless. Cheers!
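Depth supervision usually amounts to adding a depth term to the usual photometric loss, applied only where the depth camera returned a valid reading. A minimal sketch, where the function name, the `lambda_depth` weight, and the "zero depth means invalid" convention are my own assumptions (shown in NumPy for clarity rather than a real training framework):

```python
import numpy as np

def depth_supervised_loss(rgb_pred, rgb_gt, depth_pred, depth_gt, lambda_depth=0.1):
    """Photometric MSE plus a weighted depth MSE.

    depth_gt is assumed to contain zeros wherever the depth sensor
    returned no measurement; those pixels are excluded from the depth term.
    """
    color_loss = np.mean((rgb_pred - rgb_gt) ** 2)
    valid = depth_gt > 0  # supervise only pixels with a valid depth reading
    if valid.any():
        depth_loss = np.mean((depth_pred[valid] - depth_gt[valid]) ** 2)
    else:
        depth_loss = 0.0
    return color_loss + lambda_depth * depth_loss
```

In a real NeRF trainer the predicted depth would come from the expected ray-termination distance along each ray; the point is just that the extra term anchors density where the sensor saw a surface, which suppresses floaters.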
-
Thank you so much for your amazing work on this, @Tom94. At least on-screen, I'm seeing results far better than my old team was able to achieve with a photogrammetry -> {human ZBrush expert spending 3h} pipeline. This dataset was captured in a custom-built 70-camera rig with LED panel lighting:

testbed_tlafvy6qW5.mp4

As you can see, it does a remarkable job once you get past the junk cloud surrounding the subject. I understand that you're not able to give us an A -> B -> C to a riggable avatar, so I won't ask. 😉 However, one of the things that would get me/us a lot closer to a 👍 is if we could somehow specify a bounding box within the cloud volume so that I could dispel the junk. Is that possible? It seems like this has to be a set of parameters that could be passed in to COLMAP, no?

Is there a way to switch the mouse movement from rotate to drag? Rotate + zoom works, but the gimbal lock isn't optimal.

Is there a way to tell testbed to maintain the highest quality at the sacrifice of frame rate? I have to shrink the window to a postage stamp to see the subject in high detail; if I maximize the window, the quality drops. That makes sense as a default, but it would be A+ if we could control it.

Happy to share some human datasets if it would be helpful!
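On the bounding-box idea: the crop would live in the reconstruction's coordinate space rather than in COLMAP, which mainly contributes the camera poses. Conceptually it is just an axis-aligned bounding-box test against the scene. A minimal sketch applied to exported points, where the function name and the point-cloud framing are my own illustration, not testbed's actual API:

```python
import numpy as np

def crop_to_aabb(points, aabb_min, aabb_max):
    """Keep only the points inside an axis-aligned bounding box.

    points: (N, 3) array; aabb_min/aabb_max: the box corners.
    Boundary points are kept (inclusive test on both faces).
    """
    aabb_min = np.asarray(aabb_min, dtype=float)
    aabb_max = np.asarray(aabb_max, dtype=float)
    inside = np.all((points >= aabb_min) & (points <= aabb_max), axis=1)
    return points[inside]
```

The same test, applied per sample during rendering (zeroing density outside the box), is how a crop box can dispel floaters without retraining.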
-
Hi, my main goal is to retrieve the underlying geometry of wild scenes; texture is not needed. This implementation produces fantastic results when the task is to render new views, but I haven't managed to export good meshes.
I understand that NeRF focuses on returning good views, not accurate geometry. Could I modify a NeRF to get better meshes, e.g. by discarding the color loss and focusing on density?
Also, if you know of any, could you recommend other architectures/techniques that are appropriate for this task of retrieving the 3D geometry of wild scenes, like streets with cars? (I'm mostly looking for options that already have implementations.)
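One common route from a trained density field to a mesh is to sample the density on a regular grid, threshold it into an occupancy volume, and run marching cubes (e.g. `skimage.measure.marching_cubes`) on that volume. A NumPy-only sketch of the thresholding step, which extracts the surface voxels of the occupancy volume (the function name and the 6-neighbour surface definition are my own illustration):

```python
import numpy as np

def density_to_surface_voxels(density, threshold):
    """Return (M, 3) integer coordinates of surface voxels.

    A voxel is occupied when its sampled density exceeds `threshold`,
    and is on the surface when at least one of its 6 face neighbours
    is empty. Padding with False treats everything outside the grid
    as empty space.
    """
    occ = density > threshold
    padded = np.pad(occ, 1, constant_values=False)
    all_neighbours_occupied = (
        padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1] &
        padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1] &
        padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:]
    )
    surface = occ & ~all_neighbours_occupied
    return np.argwhere(surface)
```

Picking the threshold matters a lot for NeRF density, which is rarely a clean 0/1 field; methods that optimize a signed distance function instead (e.g. NeuS or VolSDF) tend to give much cleaner extracted surfaces, which may suit your geometry-first goal better.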