
NOVA3D: Normal Aligned Video Diffusion Model for Single Image to 3D Generation

Main: 5 pages · 7 figures · 3 tables · Bibliography: 1 page · Appendix: 2 pages
Abstract

3D AI-generated content (AIGC) has made it increasingly accessible for anyone to become a 3D content creator. While recent methods leverage Score Distillation Sampling to distill 3D objects from pretrained image diffusion models, they often suffer from inadequate 3D priors, leading to insufficient multi-view consistency. In this work, we introduce NOVA3D, an innovative single-image-to-3D generation framework. Our key insight lies in leveraging strong 3D priors from a pretrained video diffusion model and integrating geometric information during multi-view video fine-tuning. To facilitate information exchange between color and geometric domains, we propose the Geometry-Temporal Alignment (GTA) attention mechanism, thereby improving generalization and multi-view consistency. Moreover, we introduce the de-conflict geometry fusion algorithm, which improves texture fidelity by addressing multi-view inaccuracies and resolving discrepancies in pose alignment. Extensive experiments validate the superiority of NOVA3D over existing baselines.
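The abstract describes a Geometry-Temporal Alignment (GTA) attention mechanism that exchanges information between the color and geometric (normal-map) domains. The paper's exact layer design is not given here, but the general idea of bidirectional cross-domain attention can be sketched as follows; the class name, layer layout, and residual connections are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossDomainAttention(nn.Module):
    """Hypothetical sketch of cross-domain attention in the spirit of
    NOVA3D's GTA mechanism: color (RGB) tokens and geometry (normal-map)
    tokens attend to each other so the two streams stay aligned.
    Details are assumptions; the abstract does not specify the layer."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # One attention module per direction of information flow.
        self.color_to_geo = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.geo_to_color = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, color: torch.Tensor, geo: torch.Tensor):
        # Color tokens query the geometry tokens, and vice versa;
        # each stream keeps a residual connection to its own input.
        c, _ = self.color_to_geo(color, geo, geo)
        g, _ = self.geo_to_color(geo, color, color)
        return color + c, geo + g

# Usage: two token sequences of shape (batch, tokens, dim).
layer = CrossDomainAttention(dim=32)
color_tokens = torch.randn(2, 16, 32)
geo_tokens = torch.randn(2, 16, 32)
color_out, geo_out = layer(color_tokens, geo_tokens)
```

In the multi-view video setting the token sequences would additionally span views/frames, letting the same attention enforce temporal (multi-view) consistency alongside the color-geometry exchange.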

@article{yang2025_2506.07698,
  title={NOVA3D: Normal Aligned Video Diffusion Model for Single Image to 3D Generation},
  author={Yuxiao Yang and Peihao Li and Yuhong Zhang and Junzhe Lu and Xianglong He and Minghan Qin and Weitao Wang and Haoqian Wang},
  journal={arXiv preprint arXiv:2506.07698},
  year={2025}
}