Re-evaluating Theory of Mind evaluation in large language models

28 February 2025
Jennifer Hu
Felix Sosa
Tomer Ullman
Abstract

The question of whether large language models (LLMs) possess Theory of Mind (ToM) -- often defined as the ability to reason about others' mental states -- has sparked significant scientific and public interest. However, the evidence on whether LLMs possess ToM is mixed, and the recent growth in evaluations has not produced convergence. Here, we take inspiration from cognitive science to re-evaluate the state of ToM evaluation in LLMs. We argue that a major reason for the disagreement over whether LLMs have ToM is a lack of clarity on whether models should be expected to match human behaviors, or the computations underlying those behaviors. We also highlight ways in which current evaluations may deviate from "pure" measurements of ToM abilities, which further contributes to the confusion. We conclude by discussing several directions for future research, including the relationship between ToM and pragmatic communication, which could advance our understanding of artificial systems as well as human cognition.
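For readers unfamiliar with how such evaluations are typically run, the sketch below shows a classic false-belief ("Sally-Anne") probe of the kind the abstract refers to. It is an illustrative example, not the paper's protocol: query_model, the vignette wording, and the keyword-matching scorer are all assumptions, and the scorer checks only surface behavior, which is the very limitation the authors discuss.

# Minimal, illustrative sketch (not from the paper) of a behavioral
# false-belief probe. `query_model` is a hypothetical stand-in for a
# real LLM API call.

VIGNETTE = (
    "Sally puts her ball in the basket and leaves the room. "
    "While she is away, Anne moves the ball from the basket to the box. "
    "Sally comes back. In one word, where will Sally look for her ball?"
)

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError("plug in a real model client")

def passes_false_belief(response: str) -> bool:
    # Behavior-matching criterion: credit answers that track Sally's
    # outdated belief ("basket"), not the ball's true location ("box").
    return "basket" in response.lower()

if __name__ == "__main__":
    try:
        answer = query_model(VIGNETTE)
        print("passes false-belief probe:", passes_false_belief(answer))
    except NotImplementedError as err:
        print("demo only:", err)

Note that a model can pass this keyword check by pattern-matching familiar vignettes without representing Sally's belief at all, which is exactly the gap between matching behaviors and matching underlying computations that the abstract highlights.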

@article{hu2025_2502.21098,
  title={Re-evaluating Theory of Mind evaluation in large language models},
  author={Jennifer Hu and Felix Sosa and Tomer Ullman},
  journal={arXiv preprint arXiv:2502.21098},
  year={2025}
}