Cross-Examiner: Evaluating Consistency of Large Language Model-Generated Explanations

11 March 2025
Danielle Villa
Maria Chang
Keerthiram Murugesan
Rosario A. Uceda-Sosa
Karthikeyan Natesan Ramamurthy
Abstract

Large Language Models (LLMs) are often asked to explain their outputs to enhance accuracy and transparency. However, evidence suggests that these explanations can misrepresent the models' true reasoning processes. One effective way to identify inaccuracies or omissions in these explanations is through consistency checking, which typically involves asking follow-up questions. This paper introduces Cross-Examiner, a new method for generating follow-up questions based on a model's explanation of an initial question. Our method combines symbolic information extraction with language model-driven question generation, resulting in better follow-up questions than those produced by LLMs alone. Additionally, this approach is more flexible than other methods and can generate a wider variety of follow-up questions.
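The abstract describes a two-stage pipeline: symbolically extract facts from the model's explanation, then turn them into follow-up questions whose answers can be checked against what the explanation asserted. Below is a minimal, hypothetical Python sketch of that general idea, not the paper's implementation: the regex extractor, the question template, and the ask_model callback are illustrative stand-ins (the paper uses language model-driven question generation rather than a fixed template).

import re
from typing import Callable

def extract_facts(explanation: str) -> list[tuple[str, str, str]]:
    # Toy symbolic extraction: pull (subject, relation, object) triples
    # from simple copula-style sentences. A real pipeline would use a
    # proper information-extraction system; this regex is illustrative.
    pattern = r"([A-Za-z][\w ]*?)\s+(is|are|causes|requires)\s+([\w ]+?)(?=[.,;]|$)"
    return [m.groups() for m in re.finditer(pattern, explanation)]

def make_follow_up(triple: tuple[str, str, str]) -> str:
    # A fixed template stands in here for the LM-driven question
    # generation step described in the abstract.
    subject, relation, obj = triple
    return f"Is it true that {subject.strip()} {relation} {obj.strip()}? Answer yes or no."

def check_consistency(ask_model: Callable[[str], str],
                      explanation: str) -> list[tuple[str, bool]]:
    # Ask the model each follow-up question and flag answers that
    # contradict facts asserted in its own explanation.
    report = []
    for triple in extract_facts(explanation):
        question = make_follow_up(triple)
        answer = ask_model(question)  # caller supplies the actual LLM interface
        report.append((question, answer.strip().lower().startswith("yes")))
    return report

# Example with a stubbed model:
if __name__ == "__main__":
    explanation = "Copper is a metal. Metals are conductive."
    stub = lambda q: "Yes."  # stand-in for a real LLM call
    for question, consistent in check_consistency(stub, explanation):
        print(f"{'OK  ' if consistent else 'FLAG'} {question}")

A "no" answer to any generated question signals a potential inconsistency between the model's explanation and its underlying beliefs, which is the kind of misrepresentation the consistency check is designed to surface.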

BibTeX
@article{villa2025_2503.08815,
  title={Cross-Examiner: Evaluating Consistency of Large Language Model-Generated Explanations},
  author={Danielle Villa and Maria Chang and Keerthiram Murugesan and Rosario Uceda-Sosa and Karthikeyan Natesan Ramamurthy},
  journal={arXiv preprint arXiv:2503.08815},
  year={2025}
}