LTDA-Drive: LLMs-guided Generative Models based Long-tail Data Augmentation for Autonomous Driving

21 May 2025
Mahmut Yurt, Xin Ye, Yunsheng Ma, Jingru Luo, Abhirup Mallik, John Pauly, Burhaneddin Yaman, Liu Ren
Main: 7 pages · Bibliography: 2 pages · Appendix: 1 page · 7 figures · 2 tables
Abstract

3D perception plays an essential role in improving the safety and performance of autonomous driving. Yet existing models trained on real-world datasets, which naturally exhibit long-tail distributions, tend to underperform on rare, safety-critical vulnerable classes such as pedestrians and cyclists. Existing reweighting and resampling techniques struggle with the scarcity and limited diversity within tail classes. To address these limitations, we introduce LTDA-Drive, a novel LLM-guided data augmentation framework designed to synthesize diverse, high-quality long-tail samples. LTDA-Drive replaces head-class objects in driving scenes with tail-class objects through a three-stage process: (1) text-guided diffusion models remove head-class objects, (2) generative models insert instances of the tail classes, and (3) an LLM agent filters out low-quality synthesized images. Experiments on the KITTI dataset show that LTDA-Drive significantly improves tail-class detection, achieving a 34.75% improvement on rare classes over counterpart methods. These results highlight the effectiveness of LTDA-Drive in tackling long-tail challenges by generating high-quality, diverse data.
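
To make the three-stage process concrete, below is a minimal Python sketch of the augmentation loop the abstract describes. The paper does not ship code, so every name here (Scene, remove_head_objects, insert_tail_instance, llm_quality_check) is a hypothetical placeholder, and each stage body is a trivial stub standing in for the actual diffusion inpainter, generative inserter, and LLM filter.

# Hypothetical sketch of the LTDA-Drive three-stage pipeline; the stage
# bodies are stubs, not the authors' implementation.
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass
class Scene:
    image_path: str
    head_boxes: List[Tuple[int, int, int, int]]  # boxes of head-class objects (e.g., cars)

def remove_head_objects(scene: Scene) -> Scene:
    # Stage 1: in the paper, a text-guided diffusion model inpaints away
    # head-class objects. This stub just clears their boxes.
    return replace(scene, head_boxes=[])

def insert_tail_instance(scene: Scene, tail_class: str) -> Scene:
    # Stage 2: a generative model would composite a tail-class instance
    # (e.g., a pedestrian or cyclist) into the vacated region. No-op stub.
    return scene

def llm_quality_check(scene: Scene) -> bool:
    # Stage 3: an LLM agent would inspect the synthesized image and reject
    # low-quality composites. This stub accepts everything.
    return True

def augment(scenes: List[Scene], tail_class: str) -> List[Scene]:
    # Chain the three stages, keeping only samples that pass the filter.
    kept = []
    for scene in scenes:
        cleaned = remove_head_objects(scene)
        synthesized = insert_tail_instance(cleaned, tail_class)
        if llm_quality_check(synthesized):
            kept.append(synthesized)
    return kept

if __name__ == "__main__":
    demo = [Scene("000123.png", [(100, 200, 300, 400)])]
    print(len(augment(demo, "Pedestrian")))  # all stubs pass, so prints 1

The structural point is the final gate: only samples that survive the LLM quality check enter the augmented training set, trading raw synthesis volume for sample quality.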

@article{yurt2025_2505.18198,
  title={LTDA-Drive: LLMs-guided Generative Models based Long-tail Data Augmentation for Autonomous Driving},
  author={Mahmut Yurt and Xin Ye and Yunsheng Ma and Jingru Luo and Abhirup Mallik and John Pauly and Burhaneddin Yaman and Liu Ren},
  journal={arXiv preprint arXiv:2505.18198},
  year={2025}
}