I am not sure whether this question has been asked before; I searched the issues by keyword but could not find anything. I want to use this chamfer distance as the loss to train a network (more specifically, a PointNet-like autoencoder).
Currently, I am using it like this (based on the Python version):

import torch
import dist_chamfer_2D

loss_chamfer = dist_chamfer_2D.chamfer_2DDist()
dist1, dist2, idx1, idx2 = loss_chamfer(x.permute(0, 2, 1), dec_y.permute(0, 2, 1))
loss = (dist1.min(dim=0)[0].mean()) + (dist2.min(dim=0)[0].mean())
Is this the correct way of using it?
However, the reconstructed result does not look good. I also tried defining the loss as:

loss = torch.sum(dist1) + torch.sum(dist2)

which gave a better overall qualitative result, but still not what I expected.
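For reference, this is the behaviour I am comparing against: a plain NumPy sketch of the symmetric chamfer distance (the function name, shapes, and example values here are my own, purely to make the dist1/dist2 semantics and the reduction explicit — dist1[b, i] is the squared distance from point i of x to its nearest neighbour in y, so the common reduction is a mean over the point dimension, not a min over the batch dimension):

```python
import numpy as np

def chamfer_distance(x, y):
    """Symmetric chamfer distance between batched point sets.

    x: (B, N, D), y: (B, M, D) batches of point clouds.
    Returns dist1 (B, N) and dist2 (B, M): the squared distance
    of each point to its nearest neighbour in the other cloud.
    """
    # Pairwise squared distances, shape (B, N, M), via broadcasting.
    diff = x[:, :, None, :] - y[:, None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    dist1 = d2.min(axis=2)  # for each x-point, its nearest y-point
    dist2 = d2.min(axis=1)  # for each y-point, its nearest x-point
    return dist1, dist2

# Toy example: one batch of two 2D points per cloud.
x = np.array([[[0.0, 0.0], [1.0, 0.0]]])  # shape (1, 2, 2)
y = np.array([[[0.0, 0.0], [1.0, 1.0]]])  # shape (1, 2, 2)
dist1, dist2 = chamfer_distance(x, y)

# Reduction as a mean over points and batch (one common choice).
loss = dist1.mean() + dist2.mean()
```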
The problem should not be hard: I am trying to learn a representation for simple 2D/3D point clouds (composed of squares and circles).
Example below (blue is the original, red is the decoder output):