The Disagreement Problem in Faithfulness Metrics

13 November 2023
Brian Barr, Noah Fatsi, Leif Hancox-Li, Peter Richter, Daniel Proano, Caleb Mok

Papers citing "The Disagreement Problem in Faithfulness Metrics"

13 papers shown:

• BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence
  Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-yeung, Sahil Verma, Jocelyn Huang, Brian Barr, C. Bayan Bruss (12 Jul 2022)

• OpenXAI: Towards a Transparent Evaluation of Model Explanations
  Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju (22 Jun 2022) [XAI]

• Scaling Laws and Interpretability of Learning from Repeated Data
  Danny Hernandez, Tom B. Brown, Tom Conerly, Nova Dassarma, Dawn Drain, ..., Catherine Olsson, Dario Amodei, Nicholas Joseph, Jared Kaplan, Sam McCandlish (21 May 2022)

• Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
  Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne (14 Feb 2022) [XAI, ELM]

• The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
  Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju (03 Feb 2022)

• Explaining by Removing: A Unified Framework for Model Explanation
  Ian Covert, Scott M. Lundberg, Su-In Lee (21 Nov 2020) [FAtt]

• Captum: A unified and generic model interpretability library for PyTorch
  Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson (16 Sep 2020) [FAtt]

• Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
  Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld (26 Jun 2020)

• Unrestricted Permutation forces Extrapolation: Variable Importance Requires at least One More Model, or There Is No Free Variable Importance
  Giles Hooker, L. Mentch, Siyu Zhou (01 May 2019)

• Towards Robust Interpretability with Self-Explaining Neural Networks
  David Alvarez-Melis, Tommi Jaakkola (20 Jun 2018) [MILM, XAI]

• A Unified Approach to Interpreting Model Predictions
  Scott M. Lundberg, Su-In Lee (22 May 2017) [FAtt]

• Axiomatic Attribution for Deep Networks
  Mukund Sundararajan, Ankur Taly, Qiqi Yan (04 Mar 2017) [OOD, FAtt]

• "Why Should I Trust You?": Explaining the Predictions of Any Classifier
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin (16 Feb 2016) [FAtt, FaML]