FREAK: A Fine-grained Hallucination Evaluation Benchmark for Advanced MLLMs

Zhihan Yin
Jianxin Liang
Yueqian Wang
Yifeng Yao
Huishuai Zhang
Dongyan Zhao
Main: 9 pages · Bibliography: 4 pages · Appendix: 21 pages · 27 figures · 12 tables
Abstract

Multimodal Large Language Models (MLLMs) suffer from hallucinations. Existing hallucination evaluation benchmarks are often limited by over-simplified tasks that lead to saturated metrics, or by insufficient diversity that fails to adequately assess the extent of hallucination in state-of-the-art multimodal models. To address this gap, we propose FREAK, a comprehensive multimodal benchmark for fine-grained hallucination assessment in MLLMs. Using high-quality photorealistic images with fine-grained counter-commonsense edits, FREAK evaluates hallucination in the detailed visual perception of MLLMs. Extensive experiments on FREAK reveal severe hallucination issues in state-of-the-art models' detailed visual perception. To enable deeper investigation, we curate a controlled subset that indirectly evaluates a model's ability to perceive targeted detailed information. Through a systematic evaluation of prevailing Chain-of-Thought (CoT) prompting techniques on this task, we reveal critical insights into hallucination patterns and model reasoning processes.
