Multi-Agent Multimodal Models for Multicultural Text to Image Generation

21 February 2025
Parth Bhalerao
Mounika Yalamarty
Brian Trinh
Oana Ignat
Abstract

Large Language Models (LLMs) demonstrate impressive performance across various multimodal tasks. However, their effectiveness in cross-cultural contexts remains limited due to the predominantly Western-centric nature of existing data and models. Meanwhile, multi-agent models have shown strong capabilities in solving complex tasks. In this paper, we evaluate the performance of LLMs in a multi-agent interaction setting for the novel task of multicultural image generation. Our key contributions are: (1) We introduce MosAIG, a Multi-Agent framework that enhances multicultural Image Generation by leveraging LLMs with distinct cultural personas; (2) We provide a dataset of 9,000 multicultural images spanning five countries, three age groups, two genders, 25 historical landmarks, and five languages; and (3) We demonstrate that multi-agent interactions outperform simple, no-agent models across multiple evaluation metrics, offering valuable insights for future research. Our dataset and models are available at this https URL.
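The framework's implementation is not shown on this page; the sketch below is a rough illustration only of how persona-conditioned LLM agents might collaboratively refine a prompt before it is passed to a text-to-image model. All agent roles, personas, and the call_llm stub are hypothetical placeholders, not the paper's actual method.

```python
# Minimal sketch of persona-based multi-agent prompt refinement for
# multicultural text-to-image generation. Agent roles and personas here
# are illustrative assumptions, not the paper's actual implementation.
from dataclasses import dataclass


def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a chat-style LLM call; swap in a real API client."""
    return f"[{system_prompt[:40]}...] refined: {user_prompt}"


@dataclass
class PersonaAgent:
    name: str
    persona: str  # cultural persona, e.g., nationality, age group, gender

    def refine(self, prompt: str) -> str:
        # Each agent enriches the prompt from its own cultural perspective.
        system = (
            f"You are {self.persona}. Add culturally accurate details "
            f"to this image-generation prompt."
        )
        return call_llm(system, prompt)


def multi_agent_prompt(base_prompt: str,
                       agents: list[PersonaAgent],
                       moderator: PersonaAgent) -> str:
    """Collect suggestions from persona agents and let a moderator merge them."""
    suggestions = [agent.refine(base_prompt) for agent in agents]
    merged_input = base_prompt + "\n" + "\n".join(suggestions)
    return moderator.refine(merged_input)


if __name__ == "__main__":
    agents = [
        PersonaAgent("person", "an expert describing a young Indian woman"),
        PersonaAgent("landmark", "a geography expert describing the Taj Mahal"),
    ]
    moderator = PersonaAgent("moderator", "an editor who merges and polishes prompts")
    final_prompt = multi_agent_prompt(
        "A person standing in front of a historical landmark", agents, moderator
    )
    print(final_prompt)  # pass final_prompt to any text-to-image model
```

In this sketch, the persona agents contribute cultural detail independently and a moderator agent resolves their suggestions into a single prompt; the resulting text would then be fed to an off-the-shelf text-to-image model for generation.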

@article{bhalerao2025_2502.15972,
  title={Multi-Agent Multimodal Models for Multicultural Text to Image Generation},
  author={Parth Bhalerao and Mounika Yalamarty and Brian Trinh and Oana Ignat},
  journal={arXiv preprint arXiv:2502.15972},
  year={2025}
}