
Dynamic Multimodal Activation Steering for Hallucination Mitigation in Large Vision-Language Models

Jianghao Yin
Qin Chen
Kedi Chen
Jie Zhou
Xingjiao Wu
Liang He
Main: 9 pages · Appendix: 5 pages · Bibliography: 4 pages · 7 figures · 15 tables
Abstract

Large Vision-Language Models (LVLMs) achieve outstanding performance on vision-language tasks but remain prone to hallucination. Through an in-depth analysis of LVLM activation patterns, we reveal two key findings: 1) truthfulness and visual perception capabilities predominantly engage different subsets of attention heads within the model architecture; and 2) truthfulness steering vectors vary significantly across semantic contexts. Based on these observations, we propose Dynamic Multimodal Activation Steering, a training-free approach to hallucination mitigation. Our method constructs a semantics-indexed database of truthfulness steering vectors and computes visual perception steering vectors, enabling context-aware intervention at inference time: the most relevant steering vectors are dynamically selected by semantic similarity to the input and applied to the most influential attention heads. Comprehensive experiments across multiple models and datasets demonstrate that our approach significantly enhances model performance, outperforming existing state-of-the-art methods.
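The selection-and-intervention step described above can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the `SteeringVectorDB` class, the cosine-similarity retrieval, and the scaling factor `alpha` are all hypothetical stand-ins for the paper's actual database construction and head-level intervention.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

class SteeringVectorDB:
    """Stores (context embedding, steering vector) pairs and retrieves
    the steering vector whose context is most similar to a new input."""
    def __init__(self):
        self.keys = []    # context embeddings, shape (emb_dim,)
        self.values = []  # per-head steering vectors, shape (n_heads, head_dim)

    def add(self, context_emb, steering_vec):
        self.keys.append(context_emb)
        self.values.append(steering_vec)

    def retrieve(self, query_emb):
        # Pick the steering vector for the semantically closest context.
        sims = [cosine_sim(query_emb, k) for k in self.keys]
        return self.values[int(np.argmax(sims))]

def steer_heads(head_acts, steering_vec, head_ids, alpha=1.0):
    """Add the scaled steering vector only to the selected
    (most influential) attention heads, leaving the rest untouched."""
    out = head_acts.copy()
    for h in head_ids:
        out[h] += alpha * steering_vec[h]
    return out
```

At inference, one would embed the input, call `retrieve` to fetch a context-matched steering vector, and apply `steer_heads` to the activations of the heads identified as truthfulness- or perception-relevant.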
