Hi,
I have successfully trained ConvONet on my own object datasets before with nice results!
I have used the sample_mesh.py script from ONet to generate pointcloud.npz and points.npz (occupancy points) files.
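For context, a quick sanity check of the generated files can be done like this. This is a minimal sketch assuming the keys that ONet's sample_mesh.py typically writes (`points` and packed `occupancies`); the exact key names and dtypes are an assumption, and the file here is synthetic rather than a real export:

```python
import os
import tempfile

import numpy as np

# Build a stand-in points.npz; real files come from sample_mesh.py.
# Key names ('points', 'occupancies') and dtypes are assumptions.
tmp = os.path.join(tempfile.mkdtemp(), "points.npz")
n = 100_000
np.savez(
    tmp,
    points=np.random.rand(n, 3).astype(np.float16),
    occupancies=np.packbits(np.random.rand(n) < 0.5),  # 1 bit per sample
)

with np.load(tmp) as data:
    keys = sorted(data.keys())
    shape = data["points"].shape
    print(keys)   # which arrays the file contains
    print(shape)  # number of occupancy samples and their dimensionality
```

Inspecting the keys and shapes this way makes it easy to confirm a scene export matches what the dataloader expects before starting a long training run.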
I would now like to train the network to reconstruct very large scenes, some of which I have ground-truth meshes for. How do I go about this?
Can I simply provide one points.npz and pointcloud.npz file per scene? How do I make sure there are enough occupancy samples per crop? Should I simply make sure to have 100k occupancy points per crop (where a crop is defined by voxel_size * resolution)?
Or do I need to do the cropping myself?
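To make the per-crop question concrete, here is a minimal sketch that bins scene-level occupancy samples into axis-aligned crops of side voxel_size * resolution and counts the samples in each. The parameter values, the per-crop minimum, and the synthetic point cloud are all hypothetical; the real data would come from the scene's points.npz:

```python
import numpy as np

# Hypothetical crop parameters: crop side length = voxel_size * resolution.
voxel_size = 0.02
resolution = 64
crop_size = voxel_size * resolution  # 1.28 units per crop side

# Synthetic stand-in for a scene's occupancy sample locations
# (a real pipeline would load them from points.npz instead).
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 5.0, size=(1_000_000, 3))

# Assign each sample to an integer crop index along each axis.
crop_idx = np.floor(points / crop_size).astype(np.int64)

# Count samples per occupied crop.
_, counts = np.unique(crop_idx, axis=0, return_counts=True)

min_required = 2048  # hypothetical per-crop minimum
n_under = int((counts < min_required).sum())
print(f"{n_under} of {len(counts)} crops are under-sampled")
```

A check like this would answer whether a single scene-level points.npz already carries enough samples for every crop, or whether per-crop resampling is needed.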
Kind regards
seb2s