Baking Gaussian Splatting into Diffusion Denoiser for Fast
and Scalable Single-stage Image-to-3D Generation and Reconstruction
This is an implementation of our work "Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation and Reconstruction". Our DiffusionGS is single-stage and does not rely on a 2D multi-view diffusion model. DiffusionGS can be applied to single-view 3D object generation and scene reconstruction without using a depth estimator, and runs in ~6 seconds. If you find our repo useful, please give it a star ⭐ and consider citing our paper. Thank you :)
- 2025.06.25 : Our paper has been accepted by ICCV 2025. Code will be released. Stay tuned! 🎉 🎊
- 2024.11.22 : Our project page is now live. Feel free to check out the video and interactive generation results on the project page.
- 2024.11.21 : We have uploaded the prompt images and our generation results to our Hugging Face dataset. Feel free to download them and compare with your method. 🤗
- 2024.11.20 : Our paper is now on arXiv. 🚀
@article{cai2024baking,
title={Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation and Reconstruction},
author={Yuanhao Cai and He Zhang and Kai Zhang and Yixun Liang and Mengwei Ren and Fujun Luan and Qing Liu and Soo Ye Kim and Jianming Zhang and Zhifei Zhang and Yuqian Zhou and Yulun Zhang and Xiaokang Yang and Zhe Lin and Alan Yuille},
journal={arXiv preprint arXiv:2411.14384},
year={2024}
}