USB: A Comprehensive and Unified Safety Evaluation Benchmark for Multimodal Large Language Models

26 May 2025
Baolin Zheng
Guanlin Chen
Hongqiong Zhong
Qingyang Teng
Yingshui Tan
Zhendong Liu
Weixun Wang
Jiaheng Liu
Jian Yang
Huiyun Jing
Jincheng Wei
Wenbo Su
Xiaoyong Zhu
Bo Zheng
Kaifu Zhang
Main: 9 pages · Bibliography: 3 pages · Appendix: 9 pages · 10 figures · 3 tables
Abstract

Despite their remarkable achievements and widespread adoption, Multimodal Large Language Models (MLLMs) have revealed significant security vulnerabilities, highlighting the urgent need for robust safety evaluation benchmarks. Existing MLLM safety benchmarks, however, fall short in data quality, risk coverage, and modality combinations, yielding inflated and contradictory evaluation results that hinder the discovery and governance of security concerns. Moreover, we argue that vulnerability to harmful queries and oversensitivity to harmless ones should be evaluated simultaneously in MLLM safety assessment, whereas prior work has considered them separately. In this paper, to address these shortcomings, we introduce the Unified Safety Benchmark (USB), one of the most comprehensive evaluation benchmarks for MLLM safety. Our benchmark features high-quality queries, extensive risk categories, comprehensive modality combinations, and encompasses both vulnerability and oversensitivity evaluations. Along two key dimensions, risk categories and modality combinations, we demonstrate that existing benchmarks -- even the union of the vast majority of them -- are far from truly comprehensive. To bridge this gap, we design a sophisticated data synthesis pipeline that generates extensive, high-quality complementary data addressing previously unexplored aspects. By combining open-source datasets with our synthetic data, our benchmark provides 4 distinct modality combinations for each of the 61 risk sub-categories, covering both English and Chinese across both vulnerability and oversensitivity dimensions.
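As a rough illustration of the scale the abstract describes, the coverage grid (61 risk sub-categories × 4 modality combinations × 2 languages × 2 evaluation dimensions) can be enumerated as below. This is a sketch, not the authors' code: the category and modality labels are hypothetical placeholders, not the benchmark's actual taxonomy.

```python
from itertools import product

# Hypothetical placeholder labels for each axis of the coverage grid.
SUB_CATEGORIES = [f"risk_{i:02d}" for i in range(61)]          # 61 risk sub-categories
MODALITY_COMBOS = ["combo_1", "combo_2", "combo_3", "combo_4"]  # 4 modality combinations
LANGUAGES = ["en", "zh"]                                        # English and Chinese
DIMENSIONS = ["vulnerability", "oversensitivity"]               # two evaluation dimensions

# Every evaluation cell is one (sub-category, modality, language, dimension) tuple.
coverage_grid = list(product(SUB_CATEGORIES, MODALITY_COMBOS, LANGUAGES, DIMENSIONS))
print(len(coverage_grid))  # 61 * 4 * 2 * 2 = 976 cells
```

Each of the 976 cells corresponds to one slice of the benchmark for which queries must exist; the synthesis pipeline fills the cells that open-source datasets leave empty.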

@article{zheng2025_2505.23793,
  title={USB: A Comprehensive and Unified Safety Evaluation Benchmark for Multimodal Large Language Models},
  author={Baolin Zheng and Guanlin Chen and Hongqiong Zhong and Qingyang Teng and Yingshui Tan and Zhendong Liu and Weixun Wang and Jiaheng Liu and Jian Yang and Huiyun Jing and Jincheng Wei and Wenbo Su and Xiaoyong Zhu and Bo Zheng and Kaifu Zhang},
  journal={arXiv preprint arXiv:2505.23793},
  year={2025}
}