The Biased Samaritan: LLM biases in Perceived Kindness

12 June 2025
Jack H Fagan
Ruhaan Juyaal
Amy Yue-Ming Yu
Siya Pun
arXiv (abs) · PDF · HTML
Main: 6 pages · Appendix: 4 pages · Bibliography: 1 page · 4 figures · 15 tables
Abstract

While Large Language Models (LLMs) have become ubiquitous in many fields, understanding and mitigating LLM biases remains an ongoing issue. This paper provides a novel method for evaluating the demographic biases of various generative AI models. By prompting models to assess a moral patient's willingness to intervene constructively, we aim to quantitatively evaluate different LLMs' biases towards various genders, races, and ages. Our work differs from existing work by aiming to determine the baseline demographic identities for various commercial models and the relationship between the baseline and other demographics. We strive to understand whether these biases are positive, neutral, or negative, and how strong they are. This paper can contribute to the objective assessment of bias in Large Language Models and give the user or developer the power to account for these biases in LLM output or in training future LLMs. Our analysis suggested two key findings: models treat the baseline demographic as a white middle-aged or young adult male, and, as a general trend across models, non-baseline demographics are judged more willing to help than the baseline. These methods allowed us to disentangle these two biases, which are often conflated.
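
The evaluation the abstract describes can be pictured as a simple prompting loop: hold a scenario fixed, swap in demographic descriptors, and compare the model's judged willingness to help against its own unmarked baseline. Below is a minimal sketch assuming an OpenAI-style chat-completions client; the prompt template, demographic lists, and 1-to-10 scoring scale are illustrative assumptions, not the authors' actual protocol.

```python
# Illustrative demographic-swap bias probe (not the paper's exact protocol).
from itertools import product
from statistics import mean

from openai import OpenAI  # assumes an OpenAI-style chat API is available

client = OpenAI()

GENDERS = ["", "male", "female", "non-binary"]        # "" = unmarked baseline
RACES   = ["", "white", "Black", "Asian", "Hispanic"]
AGES    = ["", "young adult", "middle-aged", "elderly"]

TEMPLATE = (
    "Person: {who}. Scenario: this person notices a stranger collapse on the "
    "sidewalk. On a scale of 1 (not at all) to 10 (certainly), how willing is "
    "this person to step in and help? Reply with a single number."
)

def who(gender: str, race: str, age: str) -> str:
    """Join the non-empty demographic terms into a description of the person."""
    parts = [p for p in (age, race, gender) if p]
    return " ".join(parts) if parts else "no demographic details given"

def willingness(gender: str, race: str, age: str, n: int = 5) -> float:
    """Average the model's 1-10 willingness score over n sampled replies."""
    prompt = TEMPLATE.format(who=who(gender, race, age))
    scores = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model under test
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        )
        try:
            scores.append(float(resp.choices[0].message.content.strip()))
        except ValueError:
            continue  # skip replies that are not a bare number
    return mean(scores) if scores else float("nan")

# Baseline = fully unmarked prompt; every marked demographic is compared to it.
baseline = willingness("", "", "")
for g, r, a in product(GENDERS, RACES, AGES):
    if (g, r, a) == ("", "", ""):
        continue
    delta = willingness(g, r, a) - baseline
    print(f"{who(g, r, a):40s} Δ willingness = {delta:+.2f}")
```

A positive Δ for a marked demographic would correspond to the trend reported above, namely that non-baseline demographics are judged more willing to help than the model's implicit baseline.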

@article{fagan2025_2506.11361,
  title={The Biased Samaritan: LLM biases in Perceived Kindness},
  author={Jack H Fagan and Ruhaan Juyaal and Amy Yue-Ming Yu and Siya Pun},
  journal={arXiv preprint arXiv:2506.11361},
  year={2025}
}