MLLM-as-a-Judge for Image Safety without Human Labeling

31 December 2024
Zhenting Wang
Shuming Hu
Shiyu Zhao
Xiaowen Lin
Felix Juefei-Xu
Zhuowei Li
Ligong Han
Harihar Subramanyam
Li Chen
Jianfa Chen
Nan Jiang
Lingjuan Lyu
Shiqing Ma
Dimitris N. Metaxas
Ankit Jain
Abstract

Image content safety has become a significant challenge with the rise of visual media on online platforms. Meanwhile, in the age of AI-generated content (AIGC), many image generation models are capable of producing harmful content, such as images containing sexual or violent material. Thus, it becomes crucial to identify such unsafe images based on established safety rules. Pre-trained Multimodal Large Language Models (MLLMs) offer potential in this regard, given their strong pattern recognition abilities. Existing approaches typically fine-tune MLLMs with human-labeled datasets, which, however, brings several drawbacks. First, relying on human annotators to label data following intricate and detailed guidelines is both expensive and labor-intensive. Furthermore, users of safety judgment systems may need to frequently update safety rules, making fine-tuning based on human annotation more challenging. This raises the research question: can we detect unsafe images by querying MLLMs in a zero-shot setting using a predefined safety constitution (a set of safety rules)? Our research shows that simply querying pre-trained MLLMs does not yield satisfactory results. This lack of effectiveness stems from factors such as the subjectivity of safety rules, the complexity of lengthy constitutions, and the inherent biases in the models. To address these challenges, we propose an MLLM-based method that includes objectifying safety rules, assessing the relevance between rules and images, making quick judgments based on debiased token probabilities with logically complete yet simplified precondition chains for safety rules, and conducting more in-depth reasoning with cascaded chain-of-thought processes if necessary. Experimental results demonstrate that our method is highly effective for zero-shot image safety judgment tasks.
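The judging loop described in the abstract can be pictured as a small zero-shot pipeline. The sketch below is a minimal illustration rather than the authors' implementation: the query_token_logprobs backend, the blank-image calibration step, and the score threshold are all assumptions standing in for the paper's debiased token-probability judgment.

# Hypothetical sketch of a zero-shot MLLM image-safety judge.
# query_token_logprobs is a placeholder: adapt it to whichever MLLM
# serving API you use; it should return log-probabilities for the
# "yes" and "no" answer tokens.
def query_token_logprobs(image, prompt):
    """Return {"yes": logprob, "no": logprob} for the model's next token."""
    raise NotImplementedError("plug in your MLLM backend")

def debiased_unsafe_score(image, rule, neutral_image):
    """Score how strongly the model judges `image` to violate `rule`,
    calibrated against a content-free neutral image to reduce the model's
    prior tendency to answer "yes" or "no" regardless of the input."""
    prompt = (
        f"Safety rule: {rule}\n"
        "Does this image violate the rule? Answer yes or no."
    )
    raw = query_token_logprobs(image, prompt)           # judgment on the real image
    bias = query_token_logprobs(neutral_image, prompt)  # same query on a blank image
    # Calibration-style debiasing: subtract the bias margin in log-space.
    return (raw["yes"] - raw["no"]) - (bias["yes"] - bias["no"])

def judge_image(image, constitution, neutral_image, threshold=0.0):
    """Flag the image if any rule's debiased score crosses the threshold;
    borderline scores could instead be escalated to chain-of-thought reasoning."""
    violated = [
        rule for rule in constitution
        if debiased_unsafe_score(image, rule, neutral_image) > threshold
    ]
    return {"unsafe": bool(violated), "violated_rules": violated}

The relevance assessment and precondition-chain steps from the abstract would sit before the per-rule query, pruning rules that clearly do not apply to the image before any token-probability judgment is made.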

@article{wang2025_2501.00192,
  title={MLLM-as-a-Judge for Image Safety without Human Labeling},
  author={Zhenting Wang and Shuming Hu and Shiyu Zhao and Xiaowen Lin and Felix Juefei-Xu and Zhuowei Li and Ligong Han and Harihar Subramanyam and Li Chen and Jianfa Chen and Nan Jiang and Lingjuan Lyu and Shiqing Ma and Dimitris N. Metaxas and Ankit Jain},
  journal={arXiv preprint arXiv:2501.00192},
  year={2025}
}