CAIRe: Cultural Attribution of Images by Retrieval-Augmented Evaluation

10 June 2025
Arnav Yayavaram, Siddharth Yayavaram, Simran Khanuja, Michael Saxon, Graham Neubig
arXiv: 2506.09109 (abs / PDF / HTML)
Main: 9 pages · Appendix: 6 pages · Bibliography: 3 pages · 8 figures · 15 tables
Abstract

As text-to-image models become increasingly prevalent, ensuring their equitable performance across diverse cultural contexts is critical. Efforts to mitigate cross-cultural biases have been hampered by trade-offs, including a loss in performance, factual inaccuracies, or offensive outputs. Despite widespread recognition of these challenges, an inability to reliably measure these biases has stalled progress. To address this gap, we introduce CAIRe, a novel evaluation metric that assesses the degree of cultural relevance of an image, given a user-defined set of labels. Our framework grounds entities and concepts in the image to a knowledge base and uses factual information to give independent graded judgments for each culture label. On a manually curated dataset of culturally salient but rare items built using language models, CAIRe surpasses all baselines by 28% F1 points. Additionally, we construct two datasets of culturally universal concepts, one comprising T2I-generated outputs and another retrieved from naturally occurring data. CAIRe achieves Pearson correlations of 0.56 and 0.66 with human ratings on these sets, based on a 5-point Likert scale of cultural relevance. This demonstrates its strong alignment with human judgment across diverse image sources.
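For intuition, the short Python sketch below (not from the paper) shows how graded per-culture relevance scores from a CAIRe-style evaluator could be compared against 5-point Likert human ratings using Pearson's r, the agreement statistic reported in the abstract. The scores, ratings, and culture label used here are hypothetical placeholders; the knowledge-base grounding and judgment steps are assumed to have already produced the model scores.

# Minimal sketch (assumptions, not the authors' code): validating graded
# per-culture relevance scores against human Likert ratings with Pearson's r.
# CAIRe grounds image entities in a knowledge base and issues an independent
# graded judgment per culture label; here those judgments are taken as given.
from scipy.stats import pearsonr

# Hypothetical data: one image per entry, judged for a single culture label.
# model_scores  : graded relevance judgments from the evaluator (higher = more relevant)
# human_ratings : 5-point Likert ratings of cultural relevance from annotators
model_scores  = [0.92, 0.15, 0.48, 0.77, 0.33]
human_ratings = [5, 1, 3, 4, 2]

# Pearson's r measures linear agreement between the metric and human judgment;
# the paper reports r = 0.56 and 0.66 on its two culturally universal datasets.
r, p_value = pearsonr(model_scores, human_ratings)
print(f"Pearson's r = {r:.2f} (p = {p_value:.3f})")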

@article{yayavaram2025_2506.09109,
  title={CAIRe: Cultural Attribution of Images by Retrieval-Augmented Evaluation},
  author={Arnav Yayavaram and Siddharth Yayavaram and Simran Khanuja and Michael Saxon and Graham Neubig},
  journal={arXiv preprint arXiv:2506.09109},
  year={2025}
}