A Black-Box Evaluation Framework for Semantic Robustness in Bird's Eye View Detection

18 December 2024
Fu Wang
Yanghao Zhang
Xiangyu Yin
Guangliang Cheng
Zeyu Fu
Xiaowei Huang
Wenjie Ruan
Topic: AAML
Abstract

Camera-based Bird's Eye View (BEV) perception models have received increasing attention for their crucial role in autonomous driving, a domain where the robustness and reliability of deep learning remain pressing concerns. While prior work has investigated only the effects of randomly generated semantic perturbations, also known as natural corruptions, on the multi-view BEV detection task, we develop a black-box robustness evaluation framework that adversarially optimises three common semantic perturbations (geometric transformation, colour shifting, and motion blur) to deceive BEV models, the first such approach in this emerging field. To address the challenge of optimising over semantic perturbations, we design a smoothed, distance-based surrogate function to replace the mAP metric and introduce SimpleDIRECT, a deterministic optimisation algorithm that uses observed slopes to guide the search. By comparing against randomised perturbation and two optimisation baselines, we demonstrate the effectiveness of the proposed framework. Additionally, we provide a benchmark of the semantic robustness of ten recent BEV models. The results reveal that PolarFormer, which emphasises geometric information from multi-view images, exhibits the highest robustness, whereas BEVDet is fully compromised, with its precision reduced to zero.

@article{wang2025_2412.13913,
  title={A Black-Box Evaluation Framework for Semantic Robustness in Bird's Eye View Detection},
  author={Fu Wang and Yanghao Zhang and Xiangyu Yin and Guangliang Cheng and Zeyu Fu and Xiaowei Huang and Wenjie Ruan},
  journal={arXiv preprint arXiv:2412.13913},
  year={2025}
}