ResearchTrend.AI
Rethinking Theory of Mind Benchmarks for LLMs: Towards A User-Centered Perspective

15 April 2025
Qiaosi Wang
Xuhui Zhou
Maarten Sap
Jodi Forlizzi
Hong Shen
Abstract

The last couple of years have witnessed emerging research that appropriates Theory-of-Mind (ToM) tasks designed for humans to benchmark LLMs' ToM capabilities as an indicator of their social intelligence. However, this approach has a number of limitations. Drawing on existing psychology and AI literature, we summarize the theoretical, methodological, and evaluation limitations of this approach, pointing out that certain issues are inherent in the original ToM tasks used to evaluate humans' ToM, and that these issues persist, or are even exacerbated, when the tasks are appropriated to benchmark LLMs' ToM. Taking a human-computer interaction (HCI) perspective, these limitations prompt us to rethink the definition and criteria of ToM in ToM benchmarks, moving toward a more dynamic, interactional approach that accounts for user preferences, needs, and experiences with LLMs in such evaluations. We conclude by outlining potential opportunities and challenges in this direction.

View on arXiv
@article{wang2025_2504.10839,
  title={Rethinking Theory of Mind Benchmarks for LLMs: Towards A User-Centered Perspective},
  author={Qiaosi Wang and Xuhui Zhou and Maarten Sap and Jodi Forlizzi and Hong Shen},
  journal={arXiv preprint arXiv:2504.10839},
  year={2025}
}