UniGeo: Taming Video Diffusion for Unified Consistent Geometry Estimation

30 May 2025
Yang-Tian Sun
Xin Yu
Zehuan Huang
Yi-Hua Huang
Yuan-Chen Guo
Ziyi Yang
Yan-Pei Cao
Xiaojuan Qi
Communities: DiffM, VGen, MDE
Main: 8 pages · Appendix: 4 pages · Bibliography: 3 pages · 14 figures · 4 tables
Abstract

Recently, methods leveraging diffusion model priors to assist monocular geometric estimation (e.g., depth and normals) have gained significant attention due to their strong generalization ability. However, most existing works focus on estimating geometric properties within the camera coordinate system of individual video frames, neglecting the inherent ability of diffusion models to determine inter-frame correspondence. In this work, we demonstrate that, through appropriate design and fine-tuning, the intrinsic consistency of video generation models can be effectively harnessed for consistent geometric estimation. Specifically, we 1) select geometric attributes in the global coordinate system that share the same correspondence with video frames as the prediction targets, 2) introduce a novel and efficient conditioning method by reusing positional encodings, and 3) enhance performance through joint training on multiple geometric attributes that share the same correspondence. Our method achieves superior performance in predicting global geometric attributes in videos and can be directly applied to reconstruction tasks. Even when trained solely on static video data, our approach exhibits the potential to generalize to dynamic video scenes.
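The first design choice, predicting geometric attributes in a global coordinate system that remain pixel-aligned with the video frames, amounts to supervising with world-space point maps rather than per-frame camera-space depth. The sketch below illustrates how such a target could be derived from a depth map, assuming pinhole intrinsics and camera-to-world poses; the function and argument names are illustrative, not taken from the paper.

```python
import torch

def depth_to_world_points(depth, K, cam_to_world):
    """Unproject a per-frame depth map to a pixel-aligned world-space point map.

    depth:        (H, W)  depth values in the camera frame
    K:            (3, 3)  pinhole intrinsics
    cam_to_world: (4, 4)  camera-to-world extrinsics
    Returns:      (H, W, 3) world-space XYZ aligned with the image grid.
    """
    H, W = depth.shape
    v, u = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype),
        torch.arange(W, dtype=depth.dtype),
        indexing="ij",
    )
    # Homogeneous pixel coordinates (u, v, 1) for every location on the grid.
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1)            # (H, W, 3)
    # Back-project through the inverse intrinsics, then scale by depth.
    rays = pix @ torch.linalg.inv(K).T                                # camera-space rays
    pts_cam = rays * depth.unsqueeze(-1)                              # (H, W, 3)
    # Lift to homogeneous coordinates and transform into the world frame.
    pts_h = torch.cat([pts_cam, torch.ones_like(depth)[..., None]], dim=-1)
    pts_world = pts_h @ cam_to_world.T                                # (H, W, 4)
    return pts_world[..., :3]
```

Because every frame's point map is expressed in the same world frame, the same 3D point projects to corresponding pixels across frames, which is the inter-frame correspondence the abstract refers to.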

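The conditioning strategy of reusing positional encodings is only described at a high level in the abstract. One plausible reading is that the clean RGB tokens and the noisy geometry tokens receive the same positional embedding, so attention can tie together tokens at the same spatio-temporal location. The module below is a minimal sketch of that idea; all names are hypothetical and it is not the paper's implementation.

```python
import torch
import torch.nn as nn

class SharedPosEncConditioning(nn.Module):
    """Sketch: condition geometry tokens on RGB tokens by sharing one
    positional embedding across both streams, then mixing them with
    self-attention (hypothetical module, not the paper's architecture)."""

    def __init__(self, dim: int, num_tokens: int, num_heads: int = 8):
        super().__init__()
        self.pos = nn.Parameter(torch.randn(1, num_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, geo_tokens):
        # rgb_tokens, geo_tokens: (B, num_tokens, dim), tokenized per frame.
        # Reuse the SAME positional encoding for both streams so tokens at
        # the same spatio-temporal location carry identical position signals.
        x = torch.cat([rgb_tokens + self.pos, geo_tokens + self.pos], dim=1)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        x = x + out
        # Return only the geometry half; RGB tokens act purely as condition.
        return x[:, rgb_tokens.shape[1]:]
```

The appeal of this kind of conditioning is that it adds no extra cross-attention layers or learned adapters; the pretrained video backbone's existing attention can associate the two streams through their shared positions.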
@article{sun2025_2505.24521,
  title={UniGeo: Taming Video Diffusion for Unified Consistent Geometry Estimation},
  author={Yang-Tian Sun and Xin Yu and Zehuan Huang and Yi-Hua Huang and Yuan-Chen Guo and Ziyi Yang and Yan-Pei Cao and Xiaojuan Qi},
  journal={arXiv preprint arXiv:2505.24521},
  year={2025}
}