ResearchTrend.AI
MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification

29 May 2024
Laura Fieback
Jakob Spiegelberg
Hanno Gottschalk
    MLLM
Abstract

Large Vision Language Models (LVLMs) have shown remarkable capabilities in multimodal tasks such as visual question answering and image captioning. However, inconsistencies between the visual information and the generated text, a phenomenon referred to as hallucinations, remain an unsolved problem with regard to the trustworthiness of LVLMs. To address this problem, recent works have proposed incorporating computationally costly Large (Vision) Language Models to detect hallucinations at the sentence or subsentence level. In this work, we introduce MetaToken, a lightweight binary classifier that detects hallucinations at the token level at negligible cost. Based on a statistical analysis, we reveal key factors of hallucinations in LVLMs. MetaToken can be applied to any open-source LVLM without any knowledge of ground-truth data, providing calibrated detection of hallucinations. We evaluate our method on four state-of-the-art LVLMs, demonstrating the effectiveness of our approach.
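The abstract describes meta classification: a lightweight binary classifier trained on per-token statistics to flag hallucinated tokens. The sketch below illustrates that general idea under stated assumptions; the feature set (token log-probability and predictive entropy), the synthetic data, and the plain logistic-regression classifier are illustrative placeholders, not the paper's actual inputs or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-token features: [log-probability, predictive entropy].
# In practice these would be extracted from the LVLM at generation time.
n = 2000
X = rng.normal(size=(n, 2))

# Synthetic labels (1 = hallucinated token, 0 = grounded token), drawn from
# a known linear relation so the classifier has something to recover.
true_w = np.array([1.5, -1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)

# Lightweight logistic-regression meta classifier, fit by gradient descent.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * (p - y).mean()

# Per-token hallucination scores in [0, 1]; thresholding yields a binary flag.
scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((scores > 0.5) == (y > 0.5)).mean()
print(round(accuracy, 2))
```

Because the classifier outputs probabilities rather than hard labels, its scores can be checked for calibration, which matches the abstract's claim of calibrated hallucination detection.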

View on arXiv
@article{fieback2025_2405.19186,
  title={MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification},
  author={Laura Fieback and Jakob Spiegelberg and Hanno Gottschalk},
  journal={arXiv preprint arXiv:2405.19186},
  year={2025}
}