On the Natural Robustness of Vision-Language Models Against Visual Perception Attacks in Autonomous Driving

13 June 2025
Pedram MohajerAnsari
Amir Salarpour
Michael Kühr
Siyu Huang
Mohammad Hamad
Sebastian Steinhorst
Habeeb Olufowobi
Mert D. Pesé
Main: 9 pages · 6 figures · Bibliography: 4 pages · 5 tables
Abstract

Autonomous vehicles (AVs) rely on deep neural networks (DNNs) for critical tasks such as traffic sign recognition (TSR), automated lane centering (ALC), and vehicle detection (VD). However, these models are vulnerable to attacks that can cause misclassifications and compromise safety. Traditional defense mechanisms, including adversarial training, often degrade benign accuracy and fail to generalize against unseen attacks. In this work, we introduce Vehicle Vision Language Models (V2LMs), fine-tuned vision-language models specialized for AV perception. Our findings demonstrate that V2LMs inherently exhibit superior robustness against unseen attacks without requiring adversarial training, maintaining significantly higher accuracy than conventional DNNs under adversarial conditions. We evaluate two deployment strategies: Solo Mode, where individual V2LMs handle specific perception tasks, and Tandem Mode, where a single unified V2LM is fine-tuned for multiple tasks simultaneously. Experimental results reveal that DNNs suffer performance drops of 33% to 46% under attacks, whereas V2LMs maintain adversarial accuracy with reductions of less than 8% on average. The Tandem Mode further offers a memory-efficient alternative while achieving comparable robustness to Solo Mode. We also explore integrating V2LMs as parallel components to AV perception to enhance resilience against adversarial threats. Our results suggest that V2LMs offer a promising path toward more secure and resilient AV perception systems.
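The paper does not disclose the exact V2LM backbone, prompts, or serving code; the sketch below is purely illustrative. It assumes a LLaVA-style model loaded through Hugging Face transformers, with hypothetical prompts for the three perception tasks (TSR, ALC, VD) and a placeholder camera frame. Under the paper's terminology, Solo Mode would dedicate one fine-tuned model per task, while Tandem Mode would route all three prompts through a single fine-tuned model.

# Hypothetical sketch: querying a vision-language model for AV perception tasks.
# Backbone, prompts, and file names are assumptions, not the authors' implementation.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # assumed backbone; the paper does not specify one
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")

# Task-specific prompts. In Solo Mode each prompt would be served by its own
# fine-tuned model; in Tandem Mode one model answers all of them.
PROMPTS = {
    "TSR": "USER: <image>\nWhat traffic sign is shown? Answer with the sign class only. ASSISTANT:",
    "ALC": "USER: <image>\nShould the vehicle steer left, right, or straight to stay centered? ASSISTANT:",
    "VD":  "USER: <image>\nIs there a vehicle directly ahead? Answer yes or no. ASSISTANT:",
}

def query_v2lm(image: Image.Image, task: str) -> str:
    """Run a single perception query against the (assumed) V2LM."""
    inputs = processor(images=image, text=PROMPTS[task], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    text = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
    return text.split("ASSISTANT:")[-1].strip()

frame = Image.open("front_camera_frame.png")  # placeholder input frame
print(query_v2lm(frame, "TSR"))

A parallel deployment, as the abstract suggests, would run such queries alongside the existing DNN perception stack and compare or fuse the outputs rather than replace the DNNs outright.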

@article{mohajeransari2025_2506.11472,
  title={On the Natural Robustness of Vision-Language Models Against Visual Perception Attacks in Autonomous Driving},
  author={Pedram MohajerAnsari and Amir Salarpour and Michael Kühr and Siyu Huang and Mohammad Hamad and Sebastian Steinhorst and Habeeb Olufowobi and Mert D. Pesé},
  journal={arXiv preprint arXiv:2506.11472},
  year={2025}
}