RAVENEA: A Benchmark for Multimodal Retrieval-Augmented Visual Culture Understanding

20 May 2025
Jiaang Li
Yifei Yuan
Wenyan Li
Mohammad Aliannejadi
Daniel Hershcovich
Anders Søgaard
Ivan Vulić
Wenxuan Zhang
Paul Pu Liang
Yang Deng
Serge Belongie
Abstract

As vision-language models (VLMs) become increasingly integrated into daily life, the need for accurate visual culture understanding is becoming critical. Yet these models frequently fall short in interpreting cultural nuances. Prior work has demonstrated the effectiveness of retrieval-augmented generation (RAG) in enhancing cultural understanding in text-only settings, but its application in multimodal scenarios remains underexplored. To bridge this gap, we introduce RAVENEA (Retrieval-Augmented Visual culturE uNdErstAnding), a new benchmark designed to advance visual culture understanding through retrieval, focusing on two tasks: culture-focused visual question answering (cVQA) and culture-informed image captioning (cIC). RAVENEA extends existing datasets by integrating over 10,000 Wikipedia documents curated and ranked by human annotators. With RAVENEA, we train and evaluate seven multimodal retrievers for each image query, and measure the downstream impact of retrieval-augmented inputs across fourteen state-of-the-art VLMs. Our results show that lightweight VLMs, when augmented with culture-aware retrieval, outperform their non-augmented counterparts (by at least 3.2% absolute on cVQA and 6.2% absolute on cIC). This highlights the value of retrieval-augmented methods and culturally inclusive benchmarks for multimodal understanding.
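A minimal sketch of the retrieval-augmentation pattern the paper evaluates: rank candidate Wikipedia snippets against an image query with an off-the-shelf CLIP model, then prepend the top-k snippets to a cVQA-style prompt for a downstream VLM. The CLIP checkpoint, function names, and prompt format below are illustrative assumptions, not RAVENEA's actual retrievers or pipeline.

# Hypothetical culture-aware retrieval augmentation sketch (not the
# paper's pipeline): CLIP scores image-document similarity, and the
# top-ranked snippets are prepended to a VQA prompt.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_documents(image: Image.Image, docs: list[str]) -> list[int]:
    # Score each candidate snippet against the image query.
    inputs = processor(text=docs, images=image, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    scores = out.logits_per_image[0]  # shape: (num_docs,)
    return scores.argsort(descending=True).tolist()

def build_augmented_prompt(question: str, docs: list[str],
                           order: list[int], k: int = 3) -> str:
    # Prepend the k best-matching snippets as context for the VLM.
    context = "\n".join(docs[i] for i in order[:k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

In this pattern the retriever and the downstream VLM are decoupled, which is what allows the paper to compare seven retrievers across fourteen VLMs on the same augmented inputs.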

@article{li2025_2505.14462,
  title={RAVENEA: A Benchmark for Multimodal Retrieval-Augmented Visual Culture Understanding},
  author={Jiaang Li and Yifei Yuan and Wenyan Li and Mohammad Aliannejadi and Daniel Hershcovich and Anders Søgaard and Ivan Vulić and Wenxuan Zhang and Paul Pu Liang and Yang Deng and Serge Belongie},
  journal={arXiv preprint arXiv:2505.14462},
  year={2025}
}