Exploring Cultural Variations in Moral Judgments with Large Language Models

14 June 2025
Hadi Mohammadi
Efthymia Papadopoulou
Yasmeen F.S.S. Meijer
Ayoub Bagheri
arXiv:2506.12433 · abs · PDF · HTML
Main: 8 pages · 10 figures · 4 tables · Bibliography: 3 pages · Appendix: 3 pages
Abstract

Large Language Models (LLMs) have shown strong performance across many tasks, but their ability to capture culturally diverse moral values remains unclear. In this paper, we examine whether LLMs can mirror variations in moral attitudes reported by two major cross-cultural surveys: the World Values Survey and the PEW Research Center's Global Attitudes Survey. We compare smaller, monolingual, and multilingual models (GPT-2, OPT, BLOOMZ, and Qwen) with more recent instruction-tuned models (GPT-4o, GPT-4o-mini, Gemma-2-9b-it, and Llama-3.3-70B-Instruct). Using log-probability-based moral justifiability scores, we correlate each model's outputs with survey data covering a broad set of ethical topics. Our results show that many earlier or smaller models often produce near-zero or negative correlations with human judgments. In contrast, advanced instruction-tuned models (including GPT-4o and GPT-4o-mini) achieve substantially higher positive correlations, suggesting they better reflect real-world moral attitudes. While scaling up model size and using instruction tuning can improve alignment with cross-cultural moral norms, challenges remain for certain topics and regions. We discuss these findings in relation to bias analysis, training data diversity, and strategies for improving the cultural sensitivity of LLMs.
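
To make the scoring approach concrete, below is a minimal sketch of a log-probability-based moral justifiability score of the kind the abstract describes. The model choice ("gpt2"), the prompt wording, the topic list, and the survey numbers are illustrative assumptions, not the authors' exact setup or data.

```python
# Sketch of a log-probability-based moral justifiability score,
# correlated against (placeholder) survey values with Pearson's r.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from scipy.stats import pearsonr

# Assumed model; the paper evaluates several models (GPT-2, OPT, BLOOMZ, ...).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sequence_logprob(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so each position predicts the next token.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    token_lp = log_probs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

def justifiability_score(topic: str) -> float:
    """Log-odds of a 'justifiable' vs. 'not justifiable' continuation."""
    prompt = f"In my country, {topic} is"  # hypothetical prompt template
    return (sequence_logprob(prompt + " justifiable")
            - sequence_logprob(prompt + " not justifiable"))

# Toy survey values (placeholders, not actual WVS/PEW figures).
survey = {"divorce": 0.62, "abortion": 0.41, "the death penalty": 0.35}
model_scores = [justifiability_score(t) for t in survey]
r, p = pearsonr(model_scores, list(survey.values()))
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```

In this kind of setup the per-topic, per-region scores would be compared against the corresponding survey responses, so a higher correlation indicates that the model's relative ranking of topics tracks human moral attitudes more closely.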

@article{mohammadi2025_2506.12433,
  title={Exploring Cultural Variations in Moral Judgments with Large Language Models},
  author={Hadi Mohammadi and Efthymia Papadopoulou and Yasmeen F.S.S. Meijer and Ayoub Bagheri},
  journal={arXiv preprint arXiv:2506.12433},
  year={2025}
}