Hello,
Thank you for sharing the HumanTOMATO repository and accompanying paper. I've noticed a potential misalignment between the approach described in the paper and the released code.
In the paper, you mention using a VQ-VAE for body and hand encoding/decoding. However, the repository only includes implementations of various VAE variants, and the primary approach appears to be a Transformer-based traditional VAE without any vector quantization or learned codebook. The implementation actually appears identical to the one in the TEMOS paper. Could you clarify why this difference exists, and confirm whether vector quantization was omitted intentionally from the provided implementation?
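For concreteness, here is a minimal sketch of the codebook lookup I would expect in a VQ-VAE but cannot find in the repository. All names here are illustrative, not taken from the HumanTOMATO codebase:

```python
def quantize(latent, codebook):
    """Replace a continuous latent vector with its nearest codebook entry.

    This discrete lookup is the defining step of a VQ-VAE. A plain VAE
    (as in TEMOS) instead samples a continuous latent and has no codebook.
    """
    def sq_dist(a, b):
        # Squared Euclidean distance between two vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Pick the index of the closest codebook entry.
    index = min(range(len(codebook)), key=lambda i: sq_dist(latent, codebook[i]))
    return index, codebook[index]

# Toy example: a 4-entry codebook of 2-D codes.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
idx, code = quantize((0.9, 0.1), codebook)
# idx → 1, code → (1.0, 0.0): the latent snaps to its nearest code.
```

Nothing resembling this nearest-entry snapping (or the associated codebook/commitment losses) seems to be present in the released encoder/decoder.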
Additionally, I am still looking forward to the release of the inference code, as it would greatly help the community in testing and exploring the model.
Thank you for your time and for sharing your work with the community!