  3. 1910.13503
Weight of Evidence as a Basis for Human-Oriented Explanations

29 October 2019
David Alvarez-Melis
Hal Daumé III
Jennifer Wortman Vaughan
Hanna M. Wallach
    XAI
    FAtt
Abstract

Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods. Recent work has focused on interpretability via explanations, which justify individual model predictions. In this work, we take a step towards reconciling machine explanations with those that humans produce and prefer by taking inspiration from the study of explanation in philosophy, cognitive science, and the social sciences. We identify key aspects in which these human explanations differ from current machine explanations, distill them into a list of desiderata, and formalize them into a framework via the notion of weight of evidence from information theory. Finally, we instantiate this framework in two simple applications and show it produces intuitive and comprehensible explanations.
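The weight of evidence the abstract refers to is I. J. Good's information-theoretic quantity: the log-likelihood ratio of the evidence under a hypothesis versus its negation. As a rough illustration of that core quantity only (not the paper's full formalization), it can be sketched as:

```python
import math

def weight_of_evidence(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Good's weight of evidence in favor of hypothesis h provided by evidence e:

        woe(h : e) = log( P(e | h) / P(e | not h) )

    Positive values mean e supports h; negative values mean e counts against h;
    zero means e is uninformative. Units are nats (natural log).
    """
    return math.log(p_e_given_h / p_e_given_not_h)

# Evidence four times as likely under h as under its negation
# yields a positive weight of evidence, log(4) nats.
woe = weight_of_evidence(0.8, 0.2)
```

The same ratio structure underlies per-feature explanation scores: decomposing a prediction's total weight of evidence across input features yields additive, sign-interpretable contributions.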
