ADGaussian: Generalizable Gaussian Splatting for Autonomous Driving with Multi-modal Inputs

1 April 2025
Qi Song
Chenghong Li
Haotong Lin
Sida Peng
Rui Huang
    3DGS
Abstract

We present a novel approach, termed ADGaussian, for generalizable street scene reconstruction. The proposed method enables high-quality rendering from single-view input. Unlike prior Gaussian Splatting methods that primarily focus on geometry refinement, we emphasize the importance of jointly optimizing image and depth features for accurate Gaussian prediction. To this end, we first incorporate sparse LiDAR depth as an additional input modality, formulating the Gaussian prediction process as a joint learning framework over visual information and geometric clues. Furthermore, we propose a multi-modal feature matching strategy coupled with a multi-scale Gaussian decoding model to enhance the joint refinement of multi-modal features, thereby enabling efficient multi-modal Gaussian learning. Extensive experiments on two large-scale autonomous driving datasets, Waymo and KITTI, demonstrate that our ADGaussian achieves state-of-the-art performance and exhibits superior zero-shot generalization capabilities under novel-view shifting.
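The sketch below is only an illustration of the general idea described in the abstract: an image branch and a sparse LiDAR depth branch are encoded separately, fused, and decoded into per-pixel Gaussian parameters. All module names, channel sizes, the fusion scheme, and the parameterization (depth offset, scale, rotation, opacity, color) are assumptions for illustration and do not reflect the authors' actual architecture.

# Hypothetical sketch of multi-modal (image + sparse depth) Gaussian prediction.
# Not the authors' implementation; shapes and modules are illustrative only.
import torch
import torch.nn as nn


class MultiModalGaussianHead(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Image branch: encodes the RGB input into a feature map.
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Depth branch: encodes sparse LiDAR depth (zeros where no return).
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Joint refinement of the concatenated multi-modal features.
        self.fuse = nn.Conv2d(2 * feat_dim, feat_dim, 1)
        # Per-pixel Gaussian parameters (assumed layout):
        # 1 depth offset + 3 scale + 4 rotation (quaternion) + 1 opacity + 3 color = 12
        self.gaussian_head = nn.Conv2d(feat_dim, 12, 1)

    def forward(self, rgb: torch.Tensor, sparse_depth: torch.Tensor) -> torch.Tensor:
        f_img = self.img_encoder(rgb)
        f_dep = self.depth_encoder(sparse_depth)
        f = self.fuse(torch.cat([f_img, f_dep], dim=1))
        return self.gaussian_head(f)  # (B, 12, H, W) raw Gaussian parameters


if __name__ == "__main__":
    model = MultiModalGaussianHead()
    rgb = torch.rand(1, 3, 128, 256)      # single-view image
    depth = torch.zeros(1, 1, 128, 256)   # sparse LiDAR depth map
    depth[:, :, ::8, ::8] = torch.rand(1, 1, 16, 32) * 50.0
    params = model(rgb, depth)
    print(params.shape)  # torch.Size([1, 12, 128, 256])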

@article{song2025_2504.00437,
  title={ADGaussian: Generalizable Gaussian Splatting for Autonomous Driving with Multi-modal Inputs},
  author={Qi Song and Chenghong Li and Haotong Lin and Sida Peng and Rui Huang},
  journal={arXiv preprint arXiv:2504.00437},
  year={2025}
}