Description
Hi, thank you for releasing this great work and the repository.
I would like to clarify an issue regarding SDF generation.
The training SDFs are not provided, so I am currently generating them myself.
As a sanity check, I am first following the same SDF generation procedure used for the test dataset (AlignSDF), for which SDFs are provided.
However, when reproducing this process, I consistently observe strong artifacts after SDF-to-mesh reconstruction via marching cubes (e.g., fragmented components and floating surfaces, see attached images).
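For reference, this is roughly the sanity check I use to quantify the fragmentation: I sample an SDF on a regular grid and count the 6-connected components of the interior (sdf < 0) voxels. A clean, watertight object SDF should give a single component, while the artifacts show up as extra floating pieces. The analytic sphere SDF below is just a stand-in for a mesh-derived SDF, and all function names are my own:

```python
import numpy as np
from collections import deque

def sphere_sdf(points, radius=0.5):
    # Analytic SDF of a sphere; a stand-in for a mesh-derived SDF.
    return np.linalg.norm(points, axis=-1) - radius

def count_interior_components(sdf_grid):
    """Count 6-connected components of voxels with sdf < 0.
    A clean single-object SDF should give 1; fragmented or
    floating artifacts show up as additional components."""
    inside = sdf_grid < 0
    visited = np.zeros_like(inside, dtype=bool)
    n_components = 0
    for start in zip(*np.nonzero(inside)):
        if visited[start]:
            continue
        n_components += 1
        queue = deque([start])
        visited[start] = True
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                               (0,-1,0), (0,0,1), (0,0,-1)):
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < inside.shape[i] for i in range(3)) \
                        and inside[n] and not visited[n]:
                    visited[n] = True
                    queue.append(n)
    return n_components

# Sample the SDF on a 64^3 grid over [-1, 1]^3.
axis = np.linspace(-1, 1, 64)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid)
print(count_interior_components(sdf))  # a clean sphere SDF -> 1 component
```

On the provided test SDFs this reports a single component per object; on my generated SDFs it reports many, which matches what marching cubes shows visually.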
I understand that releasing the full SDF generation code may be difficult.
In that case, could you please provide guidance on the expected SDF data representation?
In particular:
- What is the expected hand/object mesh format for SDF generation (OBJ conventions, face indexing, watertight/manifold requirements)?
- NOTE: I used watertight meshes for both the hand and the object.
- What dtype/precision is used for vertices and faces, and are there any assumptions on numerical accuracy?
- Are there any normalization or preprocessing steps applied (e.g., centering, scaling, unit conventions, or mesh cleanup)?
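For completeness, the only preprocessing I currently apply before computing SDFs is centering the mesh and uniformly scaling it into [-1, 1]^3. This is my own convention (the `padding` value and function name are mine), and I don't know whether it matches the pipeline used for the released SDFs:

```python
import numpy as np

def normalize_to_unit_cube(vertices, padding=0.05):
    """Center a mesh at the origin and uniformly scale it so the
    longest bounding-box edge fits in [-1, 1] with a small margin.
    (My own convention; possibly not what the released SDFs use.)"""
    vertices = np.asarray(vertices, dtype=np.float64)
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    center = (lo + hi) / 2.0
    scale = (2.0 * (1.0 - padding)) / np.max(hi - lo)
    return (vertices - center) * scale, center, scale

verts = np.array([[0.0, 0.0, 0.0],
                  [10.0, 2.0, 3.0],
                  [5.0, 8.0, 1.0]])
norm_verts, center, scale = normalize_to_unit_cube(verts)
print(norm_verts.min(), norm_verts.max())  # stays within [-1, 1]
```

Knowing whether the released SDFs assume a different center/scale (e.g., hand-wrist-centered coordinates or metric units) would already rule out one class of errors.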
Any clarification on how the hand/object SDFs are generated or stored would be extremely helpful.
Thank you very much for your time.
[Left: mesh reconstructed from the provided object SDF / Right: mesh reconstructed from my generated object SDF / same marching cubes algorithm in both cases]

