Evaluating the Bias in LLMs for Surveying Opinion and Decision Making in Healthcare

11 April 2025
Yonchanok Khaokaew
Flora D. Salim
Andreas Züfle
Hao Xue
Taylor Anderson
C. Raina MacIntyre
Matthew Scotch
David J Heslop
Abstract

Generative agents, driven by large language models (LLMs), are increasingly used to simulate human behaviour in silico. These simulacra serve as sandboxes for studying human behaviour without compromising privacy or safety. However, it remains unclear whether such agents can truly represent real individuals. This work compares survey data from the Understanding America Study (UAS) on healthcare decision-making with simulated responses from generative agents. Using demographic-based prompt engineering, we create digital twins of survey respondents and analyse how well different LLMs reproduce real-world behaviours. Our findings show that some LLMs fail to reflect realistic decision-making, such as predicting universal vaccine acceptance. In contrast, Llama 3 captures variations across race and income more accurately, but it also introduces biases not present in the UAS data. This study highlights the potential of generative agents for behavioural research while underscoring the risks of bias from both LLMs and prompting strategies.

View on arXiv: https://arxiv.org/abs/2504.08260
@article{khaokaew2025_2504.08260,
  title={Evaluating the Bias in LLMs for Surveying Opinion and Decision Making in Healthcare},
  author={Yonchanok Khaokaew and Flora D. Salim and Andreas Züfle and Hao Xue and Taylor Anderson and C. Raina MacIntyre and Matthew Scotch and David J Heslop},
  journal={arXiv preprint arXiv:2504.08260},
  year={2025}
}