
Artifacts when reproducing SDF generation for training data (following AlignSDF) #19

@taeyunwoo

Description

Hi, thank you for releasing this great work and the repository.

I would like to clarify an issue regarding SDF generation.
The training SDFs are not provided, so I am currently generating them myself.
As a sanity check, I am first following the same SDF generation procedure used for the test dataset (AlignSDF), for which SDFs are provided.

However, when reproducing this process, I consistently observe strong artifacts after SDF-to-mesh reconstruction via marching cubes (e.g., fragmented components and floating surfaces, see attached images).
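For reference, this is how I quantify the fragmentation directly on the SDF grid, before any meshing. A clean SDF of a single watertight shape should have exactly one connected "inside" (negative) region; the artifacts I see correspond to many isolated negative regions. The sketch below uses a synthetic sphere SDF purely for illustration (the grid size and the sign-flip corruption are my own constructions, not anything from this repository):

```python
import numpy as np
from scipy import ndimage

def inside_components(sdf):
    """Count connected inside (sdf < 0) regions of a voxelized SDF.

    A correctly generated SDF of one watertight mesh should yield a
    single inside component; extra components show up as fragmented
    pieces and floating surfaces after marching cubes.
    """
    _, num = ndimage.label(sdf < 0)
    return num

# Synthetic example: sphere of radius 0.5 on a 32^3 grid over [-1, 1]^3.
n = 32
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5  # negative inside the sphere

print(inside_components(sdf))  # one inside region for a clean SDF

# Simulate the kind of artifact I observe: spurious sign flips far
# from the surface, each creating an isolated "inside" island.
rng = np.random.default_rng(0)
bad = sdf.copy()
for i, j, k in rng.integers(0, n, size=(20, 3)):
    if bad[i, j, k] > 0.3:  # flip only voxels well outside the sphere
        bad[i, j, k] *= -1

print(inside_components(bad))  # extra components -> fragmented mesh
```

Running this same check on my generated SDFs reports many inside components, while the provided test SDFs report one per shape.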

I understand that releasing the full SDF generation code may be difficult.
In that case, could you please provide guidance on the expected SDF data representation?

In particular:

  • What is the expected hand/object mesh format for SDF generation (OBJ conventions, face indexing, watertight/manifold requirements)?
    • NOTE: I used watertight meshes for both the hand and the object.
  • What dtype/precision is used for vertices and faces, and are there any assumptions on numerical accuracy?
  • Are there any normalization or preprocessing steps applied (e.g., centering, scaling, unit conventions, or mesh cleanup)?
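For completeness, here is the normalization I currently apply before sampling SDF values, following the DeepSDF-style unit-sphere convention. The 1.03 buffer factor and the function name are my own assumptions and may not match what this repository actually uses; I would appreciate confirmation of the intended convention.

```python
import numpy as np

def normalize_to_unit_sphere(vertices, buffer=1.03):
    """Center vertices at the origin and scale so the farthest vertex
    lies just inside the unit sphere.

    `buffer` slightly enlarges the bounding radius; 1.03 is my guess
    at a DeepSDF-style value, not confirmed by this repository.
    Returns the normalized vertices plus the (center, scale) needed to
    undo the transform.
    """
    center = (vertices.max(axis=0) + vertices.min(axis=0)) / 2.0
    v = vertices - center
    scale = buffer * np.linalg.norm(v, axis=1).max()
    return v / scale, center, scale

# Tiny usage example with three dummy vertices.
verts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
nv, c, s = normalize_to_unit_sphere(verts)
print(np.linalg.norm(nv, axis=1).max())  # strictly below 1 by construction
```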

Any clarification on how the hand/object SDFs are generated or stored would be extremely helpful.
Thank you very much for your time.

[Left: mesh reconstructed from the provided object SDF. Right: mesh reconstructed from my generated object SDF. Same marching cubes settings in both cases.]
