ResearchTrend.AI
A Necessary Step toward Faithfulness: Measuring and Improving Consistency in Free-Text Explanations

25 May 2025
Lingjun Zhao
Hal Daumé III
Abstract

Faithful free-text explanations are important for transparency in high-stakes AI decision-making, but they are challenging for language models to generate and for humans to assess. In this paper, we present a measure of Prediction-EXplanation (PEX) consistency by extending the concept of weight of evidence. This measure quantifies how much a free-text explanation supports or opposes a prediction, capturing an important aspect of explanation faithfulness. Our analysis reveals that more than 62% of explanations generated by large language models lack this consistency. We show that applying direct preference optimization improves the consistency of generated explanations across three model families, with improvements ranging from 43.1% to 292.3%. Furthermore, we demonstrate that optimizing this consistency measure can improve explanation faithfulness by up to 9.7%.
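The abstract describes a consistency measure built on the weight-of-evidence concept, which in its classical form compares the log-odds of an outcome with and without a piece of evidence. As a rough intuition for how an explanation's support for a prediction could be scored, here is a minimal sketch; the function name and probability inputs are illustrative, and the paper's exact PEX formulation may differ:

```python
import math

def weight_of_evidence(p_pred_given_expl: float, p_pred_prior: float) -> float:
    """Log-odds shift that conditioning on the explanation induces
    on the model's prediction probability.

    Positive  -> the explanation supports the prediction.
    Negative  -> the explanation opposes the prediction.
    Zero      -> the explanation is uninformative about it.
    """
    def log_odds(p: float) -> float:
        return math.log(p / (1.0 - p))

    return log_odds(p_pred_given_expl) - log_odds(p_pred_prior)

# An explanation that raises the prediction's probability from 0.5 to 0.9
# yields a positive score; one that lowers it to 0.2 yields a negative score.
print(weight_of_evidence(0.9, 0.5))  # > 0: supports the prediction
print(weight_of_evidence(0.2, 0.5))  # < 0: opposes the prediction
```

Under this reading, an "inconsistent" explanation is one whose weight of evidence points away from the prediction the model actually made.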

@article{zhao2025_2505.19299,
  title={A Necessary Step toward Faithfulness: Measuring and Improving Consistency in Free-Text Explanations},
  author={Lingjun Zhao and Hal Daumé III},
  journal={arXiv preprint arXiv:2505.19299},
  year={2025}
}