
KVSmooth: Mitigating Hallucination in Multi-modal Large Language Models through Key-Value Smoothing

Siyu Jiang
Feiyang Chen
Xiaojin Zhang
Kun He
Main: 8 pages · Appendix: 7 pages · Bibliography: 3 pages · 10 figures · 5 tables
Abstract

Despite the significant progress of Multimodal Large Language Models (MLLMs) across diverse tasks, hallucination -- the generation of visually inconsistent objects, attributes, or relations -- remains a major obstacle to their reliable deployment. Unlike pure language models, MLLMs must ground their generation process in visual inputs. However, existing models often suffer from semantic drift during decoding, causing outputs to diverge from visual facts as the sequence length increases. To address this issue, we propose KVSmooth, a training-free and plug-and-play method that mitigates hallucination by performing attention-entropy-guided adaptive smoothing on hidden states. Specifically, KVSmooth applies an exponential moving average (EMA) to both keys and values in the KV-Cache, while dynamically quantifying the sink degree of each token through the entropy of its attention distribution, in order to adaptively adjust the smoothing strength. Unlike computationally expensive retraining or contrastive decoding methods, KVSmooth operates efficiently during inference without additional training or model modification. Extensive experiments demonstrate that KVSmooth significantly reduces hallucination ($\mathit{CHAIR}_S$ from $41.8 \rightarrow 18.2$) while improving overall performance ($F_1$ score from $77.5 \rightarrow 79.2$), achieving higher precision and recall simultaneously, whereas prior methods often improve one at the expense of the other. These results validate the effectiveness and generality of our approach.
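To make the mechanism concrete, below is a minimal PyTorch sketch of attention-entropy-guided EMA smoothing over a KV cache, based only on the description in the abstract. It is not the paper's implementation: the function name kv_ema_smooth, the parameters alpha_min and alpha_max, the reading of "sink degree" as the entropy of the attention a key position receives across queries, and the direction of the entropy-to-strength mapping (higher entropy smoothed more, low-entropy sink-like positions preserved) are all illustrative assumptions.

```python
import torch


def kv_ema_smooth(keys, values, attn, alpha_min=0.1, alpha_max=0.9, eps=1e-9):
    """Sketch of attention-entropy-guided EMA smoothing of the KV cache.

    keys, values: (batch, heads, seq_len, head_dim) cached tensors.
    attn:         (batch, heads, seq_len, seq_len) attention weights,
                  rows summing to 1 over the key axis.

    Assumption: the "sink degree" of a token is read off the entropy of
    the attention it receives from all queries; the mapping from that
    entropy to the per-token EMA coefficient is illustrative, not the
    paper's exact formulation.
    """
    # Attention mass each key position receives from every query,
    # averaged over heads, then renormalized into a distribution per key.
    recv = attn.mean(dim=1).transpose(1, 2)            # (batch, seq, q_len)
    recv = recv / (recv.sum(-1, keepdim=True) + eps)
    entropy = -(recv * (recv + eps).log()).sum(-1)     # (batch, seq)

    # Normalize entropy to [0, 1] and map it to a smoothing coefficient:
    # low-entropy (sink-like) positions keep more of their original state.
    h = entropy / (entropy.max(dim=-1, keepdim=True).values + eps)
    alpha = alpha_min + (alpha_max - alpha_min) * h    # (batch, seq)
    alpha = alpha[:, None, :, None]                    # broadcast to KV shape

    def ema(x):
        # Causal EMA along the sequence axis with a per-token coefficient.
        out = torch.empty_like(x)
        out[..., 0, :] = x[..., 0, :]
        for t in range(1, x.shape[-2]):
            a = alpha[..., t, :]
            out[..., t, :] = a * x[..., t, :] + (1 - a) * out[..., t - 1, :]
        return out

    return ema(keys), ema(values)


# Usage on random tensors standing in for a cached layer:
B, H, S, D = 1, 8, 16, 64
keys, values = torch.randn(B, H, S, D), torch.randn(B, H, S, D)
attn = torch.softmax(torch.randn(B, H, S, S), dim=-1)
smoothed_keys, smoothed_values = kv_ema_smooth(keys, values, attn)
```

Since the smoothing touches only the cached keys and values at inference time, a sketch like this can be hooked into decoding without retraining or modifying model weights, which is the training-free, plug-and-play property the abstract claims.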
