Feasibility of generating pointcloud data for export to something like Meshroom? #157
3 comments · 6 replies
-
I was looking into this to see if it'd be feasible, and I think maybe
-
You can get the vertex positions of the marching cubes mesh as a numpy array via the Python bindings, and you can get depth renderings through the built-in depth render mode. Beyond that, my guess is unfortunately as good as yours -- I'm not familiar with Meshroom.
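Once you have a depth rendering, turning it into a point cloud is mostly a matter of back-projecting each pixel through the camera model. The sketch below assumes a simple pinhole camera with hypothetical intrinsics `fx, fy, cx, cy` (instant-ngp's own calls for obtaining the depth image and intrinsics are not shown here); it is a minimal numpy-only illustration, not the project's API.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into camera-space 3D points (pinhole model).

    depth: (H, W) array of metric depth values.
    fx, fy, cx, cy: pinhole intrinsics (hypothetical values for this sketch).
    Returns an (N, 3) array of points, keeping only pixels with positive depth.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Toy example: a 2x2 depth image with one invalid (zero-depth) pixel.
depth = np.array([[1.0, 2.0],
                  [0.0, 4.0]])
pts = depth_to_pointcloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3) -- the zero-depth pixel is dropped
```

To get a cloud in world space you would additionally apply each frame's camera-to-world transform to these points.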
-
I used Open3D to write a post-processing script that cleans up some of the noise and re-meshes the cleaned point cloud. I import the transforms.json from COLMAP to get the camera locations, then run Open3D's hidden_point_removal from each camera position. Join all of those new meshes together and you're pretty good. I also do some filtering to keep just the largest mesh.
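The first step of that pipeline, pulling camera locations out of transforms.json, can be sketched without Open3D at all. This assumes the instant-ngp/COLMAP-style layout where each entry in "frames" carries a 4x4 camera-to-world "transform_matrix", so the camera center is the translation column; the demo file written below is synthetic.

```python
import json
import numpy as np

def camera_centers(transforms_path):
    """Read camera positions from an instant-ngp style transforms.json.

    Assumes each frame stores a 4x4 camera-to-world "transform_matrix";
    the camera center is the matrix's translation column.
    Returns an (N, 3) array, one row per camera.
    """
    with open(transforms_path) as f:
        meta = json.load(f)
    centers = [np.array(frame["transform_matrix"])[:3, 3]
               for frame in meta["frames"]]
    return np.stack(centers)

# Synthetic single-frame transforms.json for demonstration.
demo = {"frames": [{"file_path": "images/0001.png",
                    "transform_matrix": [[1, 0, 0, 0.5],
                                         [0, 1, 0, -1.0],
                                         [0, 0, 1, 2.0],
                                         [0, 0, 0, 1.0]]}]}
with open("transforms_demo.json", "w") as f:
    json.dump(demo, f)

centers = camera_centers("transforms_demo.json")
print(centers)  # one row per camera: x, y, z of the camera center
```

Each center can then be fed to Open3D's `PointCloud.hidden_point_removal(camera_location, radius)` to keep only the points visible from that view, as described above.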
-
It seems that the marching cubes algorithm doesn't do a great job of reconstructing meshes, and I was wondering if external tools could be leveraged for that purpose. Meshroom, for example, has a whole pipeline designed to create a point cloud and depth maps from a Structure-from-Motion step. With instant-ngp, however, would it be possible to train the model and then export a point cloud, vertex color data, and a depth map? Perhaps then something like Meshroom's meshing node could do a good job of reconstructing the geometry.
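For the export half of that idea, a colored point cloud can be handed to external tools as an ASCII PLY file, which most reconstruction and mesh-editing software (MeshLab, CloudCompare, etc.) reads; whether Meshroom's meshing node accepts an external PLY directly I can't confirm. A minimal writer sketch, assuming points as floats and colors as 8-bit RGB:

```python
import numpy as np

def write_ply(path, points, colors):
    """Write an (N, 3) float point array and matching (N, 3) uint8 color
    array to an ASCII PLY file with per-vertex color."""
    assert points.shape == colors.shape
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for p, c in zip(points, colors):
            f.write(f"{p[0]} {p[1]} {p[2]} {c[0]} {c[1]} {c[2]}\n")

# Tiny two-point example: one red point at the origin, one green point.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
cols = np.array([[255, 0, 0], [0, 255, 0]], dtype=np.uint8)
write_ply("cloud_demo.ply", pts, cols)
```

The vertex positions could come from the marching cubes mesh or from back-projected depth renderings; where the colors come from (e.g. rendered RGB at those points) is left open here.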