Hypothesis Testing for Quantifying LLM-Human Misalignment in Multiple Choice Settings

17 June 2025
Harbin Hong, Sebastian Caldas, Liu Leqi
arXiv (abs) · PDF · HTML
Main: 6 pages · 8 figures · Bibliography: 1 page · Appendix: 4 pages
Abstract

As Large Language Models (LLMs) increasingly appear in social science research (e.g., economics and marketing), it becomes crucial to assess how well these models replicate human behavior. In this work, using hypothesis testing, we present a quantitative framework to assess the misalignment between LLM-simulated and actual human behaviors in multiple-choice survey settings. This framework allows us to determine in a principled way whether a specific language model can effectively simulate human opinions, decision-making, and general behaviors represented through multiple-choice options. We applied this framework to a popular language model for simulating people's opinions in various public surveys and found that this model is ill-suited for simulating the tested sub-populations (e.g., across different races, ages, and incomes) for contentious questions. This raises questions about the alignment of this language model with the tested populations, highlighting the need for new practices in using LLMs for social science studies beyond naive simulations of human subjects.
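The abstract frames misalignment detection as a hypothesis test comparing LLM-simulated and human response distributions over multiple-choice options. As a rough illustration only (the paper's actual test statistic and procedure are not reproduced here), the sketch below compares a hypothetical LLM answer distribution against hypothetical human survey counts using a chi-squared goodness-of-fit test; all counts, option labels, and the choice of test are assumptions for illustration.

import numpy as np
from scipy.stats import chisquare

# Hypothetical human survey counts for options A-D of one question.
human_counts = np.array([412, 268, 95, 25])
# Hypothetical answer counts from repeated LLM simulations of the same question.
llm_counts = np.array([530, 180, 70, 20])

# Null hypothesis: the LLM's answers are drawn from the human response distribution.
human_probs = human_counts / human_counts.sum()
expected = human_probs * llm_counts.sum()

stat, p_value = chisquare(f_obs=llm_counts, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: LLM answers diverge from the human distribution.")
else:
    print("No significant evidence of misalignment on this question.")

In practice one would run such a test per question and sub-population and correct for multiple comparisons; the paper's framework formalizes this kind of comparison in a principled way.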

@article{hong2025_2506.14997,
  title={Hypothesis Testing for Quantifying LLM-Human Misalignment in Multiple Choice Settings},
  author={Harbin Hong and Sebastian Caldas and Liu Leqi},
  journal={arXiv preprint arXiv:2506.14997},
  year={2025}
}