
Reconstruction Error Evaluation #4

@yulgi

Description


Hi,
Thank you for your excellent work. I recently ran your code to reproduce the experimental results. I found that the ATE metric is very good on the Replica datasets, but the reconstruction error is quite large. I also found that the code in the following lines is inconsistent with the method described in the paper:

```python
for submap in submap_list:
    pts_mask = torch.logical_and((pts[..., :] > submap.boundary[0]).all(dim=-1),
                                 (pts[..., :] < submap.boundary[1]).all(dim=-1))
    pts_mask = torch.logical_and(pts_mask, torch.logical_xor(pre_mask, pts_mask))
    # pre_mask = torch.logical_or(pre_mask, pts_mask)
    pre_mask = pts_mask
```
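To illustrate the difference between the two update rules, here is a minimal sketch (NumPy in place of torch, with made-up 1-D "boundaries" and three hypothetical overlapping submaps) showing that `pre_mask = pts_mask` only remembers the most recent submap, so a point inside a non-adjacent later submap can be claimed twice, while accumulating with `logical_or` claims each point exactly once:

```python
import numpy as np

# Toy 1-D stand-in for the submap bounding boxes (hypothetical values):
# each point should be claimed by exactly one submap, earlier submaps first.
pts = np.array([1.5, 3.5])
boundaries = [(0.0, 2.0), (3.0, 5.0), (1.0, 4.0)]  # submap 2 overlaps submap 0

def assign(accumulate):
    pre_mask = np.zeros(len(pts), dtype=bool)
    masks = []
    for lo, hi in boundaries:
        pts_mask = np.logical_and(pts > lo, pts < hi)
        # keep only points not already claimed by a previous submap
        pts_mask = np.logical_and(pts_mask, np.logical_xor(pre_mask, pts_mask))
        masks.append(pts_mask)
        if accumulate:
            # proposed fix: remember every point claimed so far
            pre_mask = np.logical_or(pre_mask, pts_mask)
        else:
            # original code: only remembers the last submap's claims
            pre_mask = pts_mask
    return masks

# With accumulate=True each point is claimed once; with accumulate=False,
# the point at 1.5 is claimed by both submap 0 and submap 2.
```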


Therefore I changed the update to `pre_mask = torch.logical_or(pre_mask, pts_mask)`, and the reconstruction error improved, but it is still not good enough. The reconstruction-error evaluation on the Replica office0 scene is still poor, as shown below.

python src/tools/eval_recon.py --rec_mesh $OUTPUT_FOLDER/mesh/final_mesh_eval_rec_culled.ply --gt_mesh $GT_MESH -2d -3d
accuracy:  2.4982285506690265
completion:  1.2146457996535176
completion ratio:  99.44733333333333
Depth L1:  1.6115719452500343
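For context on what these numbers measure, here is a minimal sketch of the usual 3-D reconstruction metrics, assuming points have already been sampled from the reconstructed and ground-truth meshes. The 5 cm threshold and the meters-to-centimeters conversion are assumptions following common SLAM evaluation conventions, not necessarily what `eval_recon.py` uses:

```python
import numpy as np

def nn_dist(src, dst):
    # Brute-force distance from each src point to its nearest dst point.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    return d.min(axis=1)

def recon_metrics(rec_pts, gt_pts, thresh=0.05):
    # thresh=0.05 m (5 cm) is an assumed convention for the completion ratio.
    acc = nn_dist(rec_pts, gt_pts)    # reconstruction -> GT: accuracy
    comp = nn_dist(gt_pts, rec_pts)   # GT -> reconstruction: completion
    return {
        "accuracy": acc.mean() * 100,              # assuming meters in, cm out
        "completion": comp.mean() * 100,
        "completion ratio": (comp < thresh).mean() * 100,
    }
```

With this convention, a large accuracy value means the reconstructed surface drifts away from the ground truth even though most ground-truth points are covered (high completion ratio), which matches the pattern in the numbers above.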

I am confused about what factors caused this result. Could you please help me check it and figure out the reason?
