MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering

Main: 9 pages
Appendix: 6 pages
Bibliography: 3 pages
13 figures
15 tables
Abstract

Continual Visual Question Answering (CVQA) based on pre-trained models (PTMs) has achieved promising progress by leveraging prompt tuning to enable continual multi-modal learning. However, most existing methods adopt cross-modal prompt isolation, constructing visual and textual prompts separately, which exacerbates modality imbalance and degrades performance over time. To tackle this issue, we propose MM-Prompt, a novel framework that incorporates cross-modal prompt query and cross-modal prompt recovery. The former enables balanced prompt selection by injecting cross-modal signals during query formation, while the latter promotes joint prompt reconstruction through iterative cross-modal interactions, guided by an alignment loss that prevents representational drift. Extensive experiments show that MM-Prompt surpasses prior approaches in accuracy and knowledge retention while maintaining balanced modality engagement throughout continual learning.
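To make the two mechanisms described in the abstract more concrete, the following is a minimal PyTorch sketch of a cross-modal prompt query (the selection query fuses visual and textual features before matching against a shared prompt pool) together with a simple alignment term. It is based only on the abstract's description: the module names, feature dimensions, prompt-pool layout, fusion scheme, top-k selection, and the exact form of the losses are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalPromptQuery(nn.Module):
    """Sketch of a cross-modal prompt query: the query used to select
    prompts from a shared pool mixes visual and textual signals instead
    of being built from one modality alone. Pool size, prompt length,
    and the fusion layer are assumptions for illustration."""

    def __init__(self, dim=512, pool_size=10, prompt_len=8, top_k=3):
        super().__init__()
        self.prompt_pool = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.prompt_keys = nn.Parameter(torch.randn(pool_size, dim))
        self.mix = nn.Linear(2 * dim, dim)  # fuses the two modality queries
        self.top_k = top_k

    def forward(self, vis_feat, txt_feat):
        # vis_feat, txt_feat: (batch, dim) pooled features from each encoder
        query = self.mix(torch.cat([vis_feat, txt_feat], dim=-1))  # (B, dim)
        # similarity of the fused query to every prompt key: (B, pool_size)
        sim = F.cosine_similarity(query.unsqueeze(1), self.prompt_keys, dim=-1)
        top_val, top_idx = sim.topk(self.top_k, dim=-1)
        # gather the selected prompts: (B, top_k, prompt_len, dim)
        prompts = self.prompt_pool[top_idx]
        # matching loss keeps the keys of chosen prompts close to the query
        match_loss = (1.0 - top_val).mean()
        return prompts.flatten(1, 2), match_loss


def alignment_loss(vis_prompt, txt_prompt):
    """Placeholder alignment term pulling recovered visual and textual
    prompt representations together, standing in for the abstract's
    alignment loss against representational drift; the exact form used
    in the paper is not specified here and this MSE is an assumption."""
    return F.mse_loss(vis_prompt.mean(dim=1), txt_prompt.mean(dim=1))


if __name__ == "__main__":
    selector = CrossModalPromptQuery()
    v = torch.randn(4, 512)  # toy pooled image features
    t = torch.randn(4, 512)  # toy pooled question features
    prompts, loss = selector(v, t)
    print(prompts.shape, loss.item())

The key design point the sketch tries to capture is that prompt selection is driven by a single fused query rather than two isolated per-modality queries, which is the abstract's stated remedy for modality imbalance during continual learning.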

@article{li2025_2505.19455,
  title={MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering},
  author={Xu Li and Fan Lyu},
  journal={arXiv preprint arXiv:2505.19455},
  year={2025}
}