Multi-Sourced Compositional Generalization in Visual Question Answering

29 May 2025
Chuanhao Li
Wenbo Ye
Zhen Li
Yuwei Wu
Yunde Jia
Main text: 7 pages, 6 figures, 4 tables; bibliography: 2 pages
Abstract

Compositional generalization is the ability to generalize to novel compositions of seen primitives, and has recently received much attention in vision-and-language (V&L) research. Due to the multi-modal nature of V&L tasks, the primitives composing a composition can come from different modalities, resulting in multi-sourced novel compositions. However, the ability to generalize over multi-sourced novel compositions, i.e., multi-sourced compositional generalization (MSCG), remains unexplored. In this paper, we explore MSCG in the context of visual question answering (VQA) and propose a retrieval-augmented training framework that enhances the MSCG ability of VQA models by learning unified representations for primitives from different modalities. Specifically, semantically equivalent primitives are retrieved for each primitive in the training samples, and the retrieved features are aggregated with the original primitive to refine the model. This process helps the model learn consistent representations for the same semantic primitives across different modalities. To evaluate the MSCG ability of VQA models, we construct a new GQA-MSCG dataset based on the GQA dataset, in which samples include three types of novel compositions composed of primitives from different modalities. Experimental results demonstrate the effectiveness of the proposed framework. We release GQA-MSCG at this https URL.
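
As a rough illustration of the retrieval-and-aggregation step described in the abstract, the sketch below retrieves the most similar primitive features from a cross-modal memory bank and blends them with the original primitive feature. This is a minimal sketch, not the authors' implementation: the memory bank, cosine-similarity retrieval, top-k selection, mean aggregation, and the mixing weight alpha are all illustrative assumptions.

# Minimal sketch (assumed, not the paper's actual method): refine one
# primitive feature with semantically similar primitives retrieved from a
# cross-modal memory bank, pulling equivalent primitives from different
# modalities toward a shared representation.
import torch
import torch.nn.functional as F


def retrieve_and_aggregate(primitive: torch.Tensor,
                           memory_bank: torch.Tensor,
                           k: int = 5,
                           alpha: float = 0.5) -> torch.Tensor:
    """Blend a primitive feature with its top-k nearest memory entries.

    primitive:   (d,) feature of one primitive (visual or linguistic).
    memory_bank: (N, d) candidate primitive features from both modalities.
    alpha:       mixing weight between the original and retrieved features.
    """
    # Cosine similarity between the query primitive and every memory entry.
    sims = F.cosine_similarity(primitive.unsqueeze(0), memory_bank, dim=-1)
    topk = sims.topk(k).indices                  # k most similar entries
    retrieved = memory_bank[topk].mean(dim=0)    # simple mean aggregation (an assumption)
    # Mix the original primitive with the retrieved summary.
    return alpha * primitive + (1.0 - alpha) * retrieved


# Toy usage with random features (d = 256, 1000 memory entries).
feat = torch.randn(256)
bank = torch.randn(1000, 256)
refined = retrieve_and_aggregate(feat, bank)
print(refined.shape)  # torch.Size([256])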

@article{li2025_2505.23045,
  title={Multi-Sourced Compositional Generalization in Visual Question Answering},
  author={Chuanhao Li and Wenbo Ye and Zhen Li and Yuwei Wu and Yunde Jia},
  journal={arXiv preprint arXiv:2505.23045},
  year={2025}
}