ResearchTrend.AI
Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems

14 May 2025
Cheng Chen
Yuhong Wang
Nafis S Munir
Xiangwei Zhou
Xugui Zhou
Abstract

Autonomous driving systems (ADS) increasingly rely on deep learning-based perception models, which remain vulnerable to adversarial attacks. In this paper, we revisit adversarial attacks and defense methods, focusing on road sign recognition and lead object detection and prediction (e.g., relative distance). Using a Level-2 production ADS, OpenPilot by Comma.ai, and the widely adopted YOLO model, we systematically examine the impact of adversarial perturbations and assess defense techniques, including adversarial training, image processing, contrastive learning, and diffusion models. Our experiments highlight both the strengths and limitations of these methods in mitigating complex attacks. Through targeted evaluations of model robustness, we aim to provide deeper insights into the vulnerabilities of ADS perception systems and contribute guidance for developing more resilient defense strategies.
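The adversarial perturbations the abstract refers to follow the standard gradient-based recipe: nudge the input in the direction that most increases the model's loss. As a minimal illustration (not the paper's actual attack pipeline, which targets OpenPilot and YOLO), here is the Fast Gradient Sign Method applied to a toy logistic classifier in NumPy; all names and values are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a logistic classifier.

    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM steps eps in its sign direction.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy stand-in for a "stop sign" feature vector classified as class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = w / np.linalg.norm(w)   # input the model confidently labels class 1
y = 1.0

p_clean = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)
# p_adv < p_clean: a small signed perturbation pushes the model
# away from the true label, the core failure mode the paper revisits.
```

Defenses such as the adversarial training evaluated in the paper counter this by folding perturbed examples like `x_adv` back into the training set.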

@article{chen2025_2505.11532,
  title={Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems},
  author={Cheng Chen and Yuhong Wang and Nafis S Munir and Xiangwei Zhou and Xugui Zhou},
  journal={arXiv preprint arXiv:2505.11532},
  year={2025}
}