ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

V3LMA: Visual 3D-enhanced Language Model for Autonomous Driving

30 April 2025
Jannik Lübberstedt
Esteban Rivera
Nico Uhlemann
Markus Lienkamp
Topic: MLLM
Abstract

Large Vision Language Models (LVLMs) have shown strong capabilities in understanding and analyzing visual scenes across various domains. However, in the context of autonomous driving, their limited comprehension of 3D environments restricts their effectiveness in achieving a complete and safe understanding of dynamic surroundings. To address this, we introduce V3LMA, a novel approach that enhances 3D scene understanding by integrating Large Language Models (LLMs) with LVLMs. V3LMA leverages textual descriptions generated from object detections and video inputs, significantly boosting performance without requiring fine-tuning. Through a dedicated preprocessing pipeline that extracts 3D object data, our method improves situational awareness and decision-making in complex traffic scenarios, achieving a score of 0.56 on the LingoQA benchmark. We further explore different fusion strategies and token combinations with the goal of advancing the interpretation of traffic scenes, ultimately enabling safer autonomous driving systems.
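The abstract describes a preprocessing pipeline that serializes 3D object detections into textual descriptions which are then fed to the LLM alongside the visual input. The paper's exact serialization format is not given here, so the following is only a minimal illustrative sketch of such a step; the `Detection3D` schema, field names, and output format are all assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection3D:
    """A single 3D object detection (hypothetical schema, not the paper's)."""
    label: str         # object class, e.g. "car"
    distance_m: float  # distance from the ego vehicle in meters
    heading: str       # coarse direction relative to ego, e.g. "ahead-left"

def detections_to_text(detections: list[Detection3D]) -> str:
    """Serialize 3D detections into a textual scene description for an LLM prompt."""
    if not detections:
        return "No objects detected."
    lines = [
        f"- {d.label} {d.distance_m:.1f} m {d.heading}"
        for d in sorted(detections, key=lambda d: d.distance_m)  # nearest first
    ]
    return "Detected objects:\n" + "\n".join(lines)

# Example scene with two detections
scene = [
    Detection3D("pedestrian", 8.2, "ahead-right"),
    Detection3D("car", 23.5, "ahead"),
]
print(detections_to_text(scene))
```

A description like this can be prepended to the user's question before it is passed to the language model, which is one plausible way to inject 3D context without fine-tuning.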

@article{lübberstedt2025_2505.00156,
  title={V3LMA: Visual 3D-enhanced Language Model for Autonomous Driving},
  author={Jannik Lübberstedt and Esteban Rivera and Nico Uhlemann and Markus Lienkamp},
  journal={arXiv preprint arXiv:2505.00156},
  year={2025}
}