A Survey of Theory of Mind in Large Language Models: Evaluations, Representations, and Safety Risks

10 February 2025
Hieu Minh "Jord" Nguyen
    LM&MA
    LRM
Abstract

Theory of Mind (ToM), the ability to attribute mental states to others and predict their behaviour, is fundamental to social intelligence. In this paper, we survey studies evaluating behavioural and representational ToM in Large Language Models (LLMs), identify important safety risks from advanced LLM ToM capabilities, and suggest several research directions for effective evaluation and mitigation of these risks.

@article{nguyen2025_2502.06470,
  title={A Survey of Theory of Mind in Large Language Models: Evaluations, Representations, and Safety Risks},
  author={Hieu Minh "Jord" Nguyen},
  journal={arXiv preprint arXiv:2502.06470},
  year={2025}
}