Very sparse voxels for input. #4

@LR32768

Description

Hi, I'm very interested in your work and I'm trying to retrain the chair parsing network. However, after following all the instructions in the README and using the same arguments, my converged model performs much worse on ShapeNet v2. The visualization of the input X (voxels) fed into the model looks like this:

[image: sparse voxel input]

while the ground-truth mesh looks like this:

[image: ground-truth mesh, screenshot from 2019-09-02]

Is this input sparsity intended, or was there a versioning problem when the code was released? Reading the dataloader, I found that this comes from the occupancy-grid voxelizer, which only feeds the mesh vertices into the voxelization. Is this by design?
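To illustrate the effect being described: the sketch below is a hypothetical reimplementation (not the repository's actual code, and `voxelize_vertices` / `voxelize_surface` are made-up names) comparing an occupancy grid built from mesh vertices alone against one built from points sampled on the triangle faces. With vertex-only voxelization, a large triangle with three vertices occupies at most three cells, which is why the input looks so sparse.

```python
import numpy as np

def voxelize_vertices(points, res=32):
    """Occupancy grid marking only the cells that contain an input point
    (mimics voxelizing mesh vertices only, as described in the issue)."""
    grid = np.zeros((res, res, res), dtype=bool)
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Normalize into [0, res-1] and floor to integer cell indices.
    idx = ((points - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def voxelize_surface(vertices, faces, res=32, samples_per_face=100, seed=0):
    """Denser occupancy: sample extra points uniformly on each triangle
    (barycentric sampling), then voxelize vertices plus samples."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    r1 = np.sqrt(rng.random((len(faces), samples_per_face, 1)))
    r2 = rng.random((len(faces), samples_per_face, 1))
    pts = ((1 - r1) * v0[:, None]
           + r1 * (1 - r2) * v1[:, None]
           + r1 * r2 * v2[:, None]).reshape(-1, 3)
    return voxelize_vertices(np.vstack([vertices, pts]), res)

# A single large triangle: vertex-only voxelization fills at most 3 cells,
# while surface sampling fills many more at the same resolution.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2]])
print(voxelize_vertices(verts, res=16).sum())        # very sparse
print(voxelize_surface(verts, faces, res=16).sum())  # much denser
```

If this matches what the released dataloader does, switching from vertex voxelization to surface (or solid) voxelization would be the fix.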
