
Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation

Abstract

Standard benchmarks of bias and fairness in large language models (LLMs) measure the association between the user attributes stated or implied by a prompt and the LLM's short text response, but human-AI interaction increasingly requires long-form and context-specific system output to solve real-world tasks. In the commonly studied domain of gender-occupation bias, we test whether these benchmarks are robust to lengthening the LLM responses as a measure of Realistic Use and Tangible Effects (i.e., RUTEd evaluations). From the current literature, we adapt three standard bias metrics (neutrality, skew, and stereotype) and develop analogous RUTEd evaluations from three contexts of real-world use: children's bedtime stories, user personas, and English language learning exercises. We find that standard bias metrics have no significant correlation with the more realistic bias metrics. For example, selecting the least biased model based on the standard "trick tests" coincides with selecting the least biased model as measured in more realistic use no more often than random chance. We suggest that there is not yet evidence to justify standard benchmarks as reliable proxies of real-world AI biases, and we encourage further development of evaluations grounded in particular contexts.
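
To make the comparison concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual code or metric definitions) of how a skew-style bias score might be computed from gendered pronoun counts in model responses, and how a "trick test" score could be compared against a RUTEd-style score across models via rank correlation. The word lists, function names, and data structures are assumptions made for illustration only.

```python
# Hypothetical sketch: toy skew metric and cross-metric comparison.
# All names and word lists here are illustrative assumptions, not the paper's method.

from scipy.stats import spearmanr

FEMININE = {"she", "her", "hers"}
MASCULINE = {"he", "him", "his"}


def skew(text: str) -> float:
    """Toy skew metric: signed imbalance of gendered pronouns in one response.

    Returns a value in [-1, 1]; 0 means balanced usage of the two word sets.
    """
    tokens = [t.strip(".,!?;:") for t in text.lower().split()]
    f = sum(t in FEMININE for t in tokens)
    m = sum(t in MASCULINE for t in tokens)
    return 0.0 if f + m == 0 else (m - f) / (m + f)


def mean_abs_skew(responses: list[str]) -> float:
    """Aggregate bias score for one model: mean absolute skew over its responses."""
    return sum(abs(skew(r)) for r in responses) / len(responses)


def compare_metrics(trick_responses: dict[str, list[str]],
                    ruted_responses: dict[str, list[str]]) -> float:
    """Rank correlation of bias scores across models.

    trick_responses: per-model responses to short "trick test" prompts
                     (e.g., sentence completions about an occupation).
    ruted_responses: per-model responses to long-form, context-specific prompts
                     (e.g., a bedtime story featuring a character with that occupation).
    """
    models = sorted(trick_responses)
    trick_scores = [mean_abs_skew(trick_responses[m]) for m in models]
    ruted_scores = [mean_abs_skew(ruted_responses[m]) for m in models]
    rho, _ = spearmanr(trick_scores, ruted_scores)
    return rho
```

A near-zero or insignificant rank correlation in this kind of comparison would mirror the paper's finding that rankings from short-response benchmarks do not transfer to more realistic, long-form use.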

@article{lum2025_2402.12649,
  title={Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation},
  author={Kristian Lum and Jacy Reese Anthis and Kevin Robinson and Chirag Nagpal and Alexander D'Amour},
  journal={arXiv preprint arXiv:2402.12649},
  year={2025}
}