ResearchTrend.AI
TurnaboutLLM: A Deductive Reasoning Benchmark from Detective Games

21 May 2025
Yuan Yuan
Muyu He
Muhammad Adil Shahid
Jiani Huang
Ziyang Li
Li Zhang
Abstract

This paper introduces TurnaboutLLM, a novel framework and dataset for evaluating the deductive reasoning abilities of Large Language Models (LLMs) by leveraging the interactive gameplay of the detective games Ace Attorney and Danganronpa. The framework tasks LLMs with identifying contradictions between testimonies and evidence within long narrative contexts, a challenging task due to the large answer space and the diverse reasoning types its questions require. We evaluate twelve state-of-the-art LLMs on the dataset; the results hint at limitations of popular strategies for enhancing deductive reasoning, such as extensive thinking and Chain-of-Thought prompting. They also suggest varying effects of context size, the number of reasoning steps, and answer space size on model performance. Overall, TurnaboutLLM presents a substantial challenge for LLMs' deductive reasoning abilities in complex, narrative-rich environments.
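The task described above — pairing a contradicted testimony with the piece of evidence that refutes it — can be sketched as a simple exact-match evaluation. This is a hypothetical illustration, not the authors' actual code: the `Puzzle` structure and `score` function are assumptions about how such a benchmark might be scored, where the answer space grows with the product of testimony and evidence counts.

```python
from dataclasses import dataclass

@dataclass
class Puzzle:
    """One hypothetical benchmark item: find the testimony/evidence contradiction."""
    testimonies: list[str]   # numbered testimony statements from the narrative
    evidence: list[str]      # named pieces of evidence available to the model
    gold: tuple[int, int]    # (testimony index, evidence index) of the true contradiction

def score(puzzles: list[Puzzle], predictions: list[tuple[int, int]]) -> float:
    """Exact-match accuracy: a prediction counts only if it names both the
    contradicted testimony and the evidence that contradicts it.
    The answer space per puzzle is len(testimonies) * len(evidence)."""
    correct = sum(p.gold == pred for p, pred in zip(puzzles, predictions))
    return correct / len(puzzles)

# Toy usage with an invented two-testimony, two-evidence puzzle:
puzzle = Puzzle(
    testimonies=["The victim was shot.", "I was at home all night."],
    evidence=["Autopsy report: cause of death was stabbing.", "Unused train ticket."],
    gold=(0, 0),  # testimony 0 contradicts evidence 0
)
print(score([puzzle], [(0, 0)]))  # a correct prediction scores 1.0
```

Exact match over (testimony, evidence) pairs is one plausible metric; a real benchmark could instead grade partial credit for identifying only the testimony or only the evidence.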

@article{yuan2025_2505.15712,
  title={TurnaboutLLM: A Deductive Reasoning Benchmark from Detective Games},
  author={Yuan Yuan and Muyu He and Muhammad Adil Shahid and Jiani Huang and Ziyang Li and Li Zhang},
  journal={arXiv preprint arXiv:2505.15712},
  year={2025}
}