Understanding Epistemic Language with a Language-augmented Bayesian Theory of Mind

21 August 2024
Lance Ying
Tan Zhi-Xuan
Lionel Wong
Vikash K. Mansinghka
Joshua B. Tenenbaum
Abstract

How do people understand and evaluate claims about others' beliefs, even though these beliefs cannot be directly observed? In this paper, we introduce a cognitive model of epistemic language interpretation, grounded in Bayesian inferences about other agents' goals, beliefs, and intentions: a language-augmented Bayesian theory-of-mind (LaBToM). By translating natural language into an epistemic "language-of-thought" with grammar-constrained LLM decoding, then evaluating these translations against the inferences produced by inverting a generative model of rational action and perception, LaBToM captures graded plausibility judgments of epistemic claims. We validate our model in an experiment where participants watch an agent navigate a maze to find keys hidden in boxes needed to reach their goal, then rate sentences about the agent's beliefs. In contrast with multimodal LLMs (GPT-4o, Gemini Pro) and ablated models, our model correlates highly with human judgments for a wide range of expressions, including modal language, uncertainty expressions, knowledge claims, likelihood comparisons, and attributions of false belief.
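The core idea of inverting a generative model of rational action to recover an agent's beliefs can be sketched in a few lines. The following is an illustrative toy example, not the paper's implementation: all hypothesis names, likelihood values, and observed moves below are invented for exposition.

```python
# Toy Bayesian theory-of-mind sketch (hypothetical, not the LaBToM codebase):
# infer which box an agent believes holds the key from its observed moves,
# then grade an epistemic claim by the posterior probability it matches.

# Hypotheses about the agent's belief state (made-up labels).
prior = {"box_A": 0.5, "box_B": 0.5}

def move_likelihood(move, believed_box):
    """A rational agent mostly moves toward the box it believes holds the key
    (0.9 vs. 0.1 are arbitrary illustrative likelihoods)."""
    return 0.9 if move == f"toward_{believed_box}" else 0.1

def posterior_over_beliefs(observed_moves, prior):
    """Bayes-update the belief hypotheses after each observed move."""
    post = dict(prior)
    for move in observed_moves:
        post = {h: p * move_likelihood(move, h) for h, p in post.items()}
        z = sum(post.values())
        post = {h: p / z for h, p in post.items()}
    return post

post = posterior_over_beliefs(["toward_box_A", "toward_box_A"], prior)
# A claim like "the agent thinks the key is in box A" is graded by the
# posterior probability of the matching belief hypothesis.
claim_plausibility = post["box_A"]
```

In LaBToM, the claim itself is first translated into a formal epistemic language-of-thought via grammar-constrained LLM decoding before being scored against such inverted-model inferences; the sketch above only illustrates the inversion step.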

@article{ying2025_2408.12022,
  title={Understanding Epistemic Language with a Language-augmented Bayesian Theory of Mind},
  author={Lance Ying and Tan Zhi-Xuan and Lionel Wong and Vikash Mansinghka and Joshua B. Tenenbaum},
  journal={arXiv preprint arXiv:2408.12022},
  year={2025}
}