
Adversarial Examples in Environment Perception for Automated Driving (Review)

Abstract

The renaissance of deep learning has driven the rapid development of automated driving. However, deep neural networks are vulnerable to adversarial examples: perturbations that are imperceptible to human eyes yet cause neural networks to make false predictions. This poses a serious risk to artificial intelligence (AI) applications for automated driving. This survey systematically reviews the development of adversarial robustness research over the past decade, covering attack and defense methods and their applications in automated driving. The growth of automated driving, in turn, pushes forward the realization of trustworthy AI applications. This review also collects the significant references in the research history of adversarial examples.
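
To make the core idea concrete, the classic Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015) crafts such an imperceptible perturbation in a single gradient step. The sketch below is illustrative only and is not taken from the survey; the model, the epsilon budget, and the [0, 1] pixel range are assumptions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Illustrative FGSM sketch; model, epsilon, and the
    # [0, 1] pixel range are assumptions for this example.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true label
    loss.backward()                              # gradient lands in image.grad
    # Step each pixel by +/- epsilon along the sign of the gradient:
    # a tiny, visually imperceptible change that increases the loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()          # stay in the valid pixel range

For a small epsilon (e.g. 8/255 for 8-bit images), the perturbed image is visually indistinguishable from the original, yet classifiers frequently mislabel it, which is exactly the vulnerability the abstract describes.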

@article{yan2025_2504.08414,
  title={Adversarial Examples in Environment Perception for Automated Driving (Review)},
  author={Jun Yan and Huilin Yin},
  journal={arXiv preprint arXiv:2504.08414},
  year={2025}
}
