
Value Portrait: Assessing Language Models' Values through Psychometrically and Ecologically Valid Items

Main: 8 pages, Bibliography: 4 pages, Appendix: 20 pages; 7 figures, 12 tables
Abstract

Benchmarks for assessing the values of language models have become increasingly important as the demand for more authentic, human-aligned responses grows. However, existing benchmarks rely on human or machine annotations that are vulnerable to value-related biases. Furthermore, the tested scenarios often diverge from the real-world contexts in which models are commonly used to generate text and express values. To address these issues, we propose the Value Portrait benchmark, a reliable framework for evaluating LLMs' value orientations with two key characteristics. First, the benchmark consists of items that capture real-life user-LLM interactions, enhancing the relevance of assessment results to real-world LLM usage. Second, each item is rated by human subjects based on its similarity to their own thoughts, and correlations between these ratings and the subjects' actual value scores are derived. This psychometrically validated approach ensures that items strongly correlated with specific values serve as reliable items for assessing those values. By evaluating 44 LLMs with our benchmark, we find that these models prioritize Benevolence, Security, and Self-Direction values while placing less emphasis on Tradition, Power, and Achievement values. Our analysis also reveals biases in how LLMs perceive various demographic groups, deviating from real human data.
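To make the psychometric validation step concrete, the sketch below illustrates one way the described correlation check could work: each item's per-subject similarity ratings are correlated with the subjects' value scores, and items that correlate strongly with a value are retained as assessment items for that value. This is a minimal illustration only; the variable names, example data, and retention threshold are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of correlation-based item validation, as described in
# the abstract. Data, names, and the threshold are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr

# ratings[s, i]: how similar subject s finds item i to their own thoughts
# value_scores[s, v]: subject s's score on value v (e.g., a Schwartz value
# such as Benevolence or Tradition)
rng = np.random.default_rng(0)
n_subjects, n_items, n_values = 200, 5, 10
ratings = rng.integers(1, 7, size=(n_subjects, n_items)).astype(float)
value_scores = rng.normal(size=(n_subjects, n_values))

THRESHOLD = 0.3  # hypothetical cutoff for "strongly correlated"

validated = {}  # item index -> list of (value index, correlation)
for i in range(n_items):
    for v in range(n_values):
        r, _ = pearsonr(ratings[:, i], value_scores[:, v])
        if r >= THRESHOLD:
            validated.setdefault(i, []).append((v, r))

# Items that clear the threshold for a value would be kept as reliable
# assessment items for that value; the rest would be discarded.
print(validated)
```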

@article{han2025_2505.01015,
  title={Value Portrait: Assessing Language Models' Values through Psychometrically and Ecologically Valid Items},
  author={Jongwook Han and Dongmin Choi and Woojung Song and Eun-Ju Lee and Yohan Jo},
  journal={arXiv preprint arXiv:2505.01015},
  year={2025}
}