
MDIT-Bench: Evaluating the Dual-Implicit Toxicity in Large Multimodal Models

Abstract

The widespread use of Large Multimodal Models (LMMs) has raised concerns about model toxicity. However, current research mainly focuses on explicit toxicity, paying less attention to more implicit forms of toxicity concerning prejudice and discrimination. To address this limitation, we introduce a subtler type of toxicity named dual-implicit toxicity and a novel toxicity benchmark termed MDIT-Bench: Multimodal Dual-Implicit Toxicity Benchmark. Specifically, we first create the MDIT-Dataset with dual-implicit toxicity using the proposed Multi-stage Human-in-loop In-context Generation method. Based on this dataset, we construct MDIT-Bench, a benchmark for evaluating the sensitivity of models to dual-implicit toxicity, with 317,638 questions covering 12 categories, 23 subcategories, and 780 topics. MDIT-Bench includes three difficulty levels, and we propose a metric to measure the toxicity gap exhibited by a model across them. In our experiments, we evaluated 13 prominent LMMs on MDIT-Bench, and the results show that these LMMs cannot handle dual-implicit toxicity effectively. A model's performance drops significantly at the hard level, revealing that these LMMs still contain a significant amount of hidden but activatable toxicity. Data are available at this https URL.
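
The abstract mentions a metric for the toxicity gap across difficulty levels but does not define it here. As a minimal sketch, assuming the gap is simply the drop in a model's non-toxic-response rate from the easy to the hard level (the function `toxicity_gap`, its definition, and the example numbers are hypothetical illustrations, not taken from the paper):

```python
# Hypothetical sketch of a "toxicity gap" metric across difficulty levels.
# Assumption: the gap is the easy-level score minus the hard-level score,
# where each score is the fraction of non-toxic responses at that level.
# The paper's actual metric may differ.

def toxicity_gap(scores_by_level: dict[str, float]) -> float:
    """Drop in a model's non-toxic-response rate from the easiest to the
    hardest level; a larger gap suggests more hidden toxicity that harder
    prompts can activate."""
    return scores_by_level["easy"] - scores_by_level["hard"]

if __name__ == "__main__":
    # Illustrative numbers only, not results reported in the paper.
    model_scores = {"easy": 0.92, "medium": 0.81, "hard": 0.58}
    print(f"Toxicity gap: {toxicity_gap(model_scores):.2f}")  # 0.34
```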

@article{jin2025_2505.17144,
  title={MDIT-Bench: Evaluating the Dual-Implicit Toxicity in Large Multimodal Models},
  author={Bohan Jin and Shuhan Qi and Kehai Chen and Xinyi Guo and Xuan Wang},
  journal={arXiv preprint arXiv:2505.17144},
  year={2025}
}