"I See What You Did There": Can Large Vision-Language Models Understand Multimodal Puns?

Naen Xu
Jiayi Sheng
Changjiang Li
Chunyi Zhou
Yuyuan Li
Tianyu Du
Jun Wang
Zhihui Fu
Jinbao Li
Shouling Ji
Main: 9 pages · Appendix: 7 pages · Bibliography: 3 pages · 6 figures · 12 tables
Abstract

Puns are a common form of rhetorical wordplay that exploits polysemy and phonetic similarity to create humor. In multimodal puns, visual and textual elements work together to ground the literal sense and evoke the figurative meaning simultaneously. Although Vision-Language Models (VLMs) are widely used for multimodal understanding and generation, their ability to understand puns has not been systematically studied, owing to a scarcity of rigorous benchmarks. To address this gap, we first propose a multimodal pun generation pipeline. We then introduce MultiPun, a dataset comprising diverse types of puns alongside adversarial non-pun distractors. Our evaluation reveals that most models struggle to distinguish genuine puns from these distractors. Moreover, we propose both prompt-level and model-level strategies to enhance pun comprehension, yielding an average F1 improvement of 16.5%. Our findings provide valuable insights for developing future VLMs that master the subtleties of human-like humor via cross-modal reasoning.
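As a rough illustration (not the authors' released pipeline), the sketch below shows how the binary evaluation implied by the abstract could be scored: prompt a VLM to classify each image-caption pair as pun or non-pun, then compute F1 against gold labels. The Example fields, the PROMPT wording, and the query_vlm stub are all assumptions introduced here for clarity.

# Minimal sketch, assuming a binary pun / non-pun task scored with F1.
# Everything model-specific is stubbed out; swap query_vlm for a real call.

from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    image_path: str   # image half of the candidate multimodal pun
    caption: str      # textual half of the candidate multimodal pun
    is_pun: bool      # gold label: True = genuine pun, False = distractor

PROMPT = (
    "Here is an image and a caption. Does the pair form a pun, i.e. does a "
    "word's double meaning or phonetic twin connect the image and the text? "
    "Answer 'yes' or 'no'."
)

def query_vlm(image_path: str, prompt: str) -> str:
    """Placeholder for a real VLM call (API or local model); returns raw text."""
    raise NotImplementedError

def predict(example: Example) -> bool:
    # Treat any reply starting with "yes" as a positive (pun) prediction.
    reply = query_vlm(example.image_path, f"{PROMPT}\nCaption: {example.caption}")
    return reply.strip().lower().startswith("yes")

def f1_score(examples: List[Example], preds: List[bool]) -> float:
    tp = sum(p and e.is_pun for e, p in zip(examples, preds))
    fp = sum(p and not e.is_pun for e, p in zip(examples, preds))
    fn = sum((not p) and e.is_pun for e, p in zip(examples, preds))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

Under this framing, the adversarial distractors matter because a model that answers "yes" indiscriminately inflates recall while collapsing precision, which F1 penalizes.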
