
CAMA: Enhancing Multimodal In-Context Learning with Context-Aware Modulated Attention

Main: 3 pages · 3 figures · 6 tables · Bibliography: 4 pages · Appendix: 3 pages
Abstract

Multimodal in-context learning (ICL) enables large vision-language models (LVLMs) to adapt efficiently to novel tasks, supporting a wide array of real-world applications. However, multimodal ICL remains unstable, and current research largely focuses on optimizing sequence configuration while overlooking the internal mechanisms of LVLMs. In this work, we first provide a theoretical analysis of attentional dynamics in multimodal ICL and identify three core limitations of standard attention that impair ICL performance. To address these challenges, we propose Context-Aware Modulated Attention (CAMA), a simple yet effective plug-and-play method for directly calibrating LVLM attention logits. CAMA is training-free and can be seamlessly applied to various open-source LVLMs. We evaluate CAMA on four LVLMs across six benchmarks, demonstrating its effectiveness and generality. CAMA opens new opportunities for deeper exploration and targeted utilization of LVLM attention dynamics to advance multimodal reasoning.
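The abstract describes CAMA only at a high level, so the snippet below is a minimal sketch of what a training-free, plug-and-play calibration of attention logits could look like in practice; it is not the paper's actual formula. The function name `modulate_attention_logits`, the `context_mask` argument, and the scaling factor `gamma` are all illustrative assumptions.

```python
# Illustrative sketch only: shows the general idea of plug-and-play
# attention-logit calibration for in-context demonstration tokens.
# Names (modulate_attention_logits, context_mask, gamma) are hypothetical,
# not CAMA's published formulation.
import torch

def modulate_attention_logits(
    attn_logits: torch.Tensor,   # (batch, heads, query_len, key_len) pre-softmax scores
    context_mask: torch.Tensor,  # (key_len,) bool, True at in-context demonstration positions
    gamma: float = 1.1,          # hypothetical modulation strength
) -> torch.Tensor:
    """Rescale logits that attend to in-context demonstration tokens.

    A training-free hook of this kind can be registered on each attention
    layer of an open-source LVLM to bias attention toward (or away from)
    the demonstration sequence before the softmax is applied.
    """
    modulated = attn_logits.clone()
    modulated[..., context_mask] = modulated[..., context_mask] * gamma
    return modulated

if __name__ == "__main__":
    logits = torch.randn(1, 8, 16, 64)       # fake pre-softmax attention scores
    mask = torch.zeros(64, dtype=torch.bool)
    mask[:32] = True                         # first 32 key positions = demonstrations
    out = modulate_attention_logits(logits, mask)
    print(out.shape)                         # torch.Size([1, 8, 16, 64])
```

In a real integration, such a function would typically be applied via a forward hook on each attention module, leaving model weights untouched, which is what makes the approach training-free.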

@article{li2025_2505.17097,
  title={CAMA: Enhancing Multimodal In-Context Learning with Context-Aware Modulated Attention},
  author={Yanshu Li and JianJiang Yang and Bozheng Li and Ruixiang Tang},
  journal={arXiv preprint arXiv:2505.17097},
  year={2025}
}