On the (In)fidelity and Sensitivity for Explanations

Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
FAtt · 27 January 2019

Papers citing "On the (In)fidelity and Sensitivity for Explanations"

Showing 34 of 84 citing papers.
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
XAI, ELM · 14 Feb 2022

Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed
AI4TS · 08 Feb 2022

Towards a consistent interpretation of AIOps models
Yingzhe Lyu, Gopi Krishnan Rajbahadur, Dayi Lin, Boyuan Chen, Zhen Ming (Jack) Jiang
AI4CE · 04 Feb 2022

Topological Representations of Local Explanations
Peter Xenopoulos, G. Chan, Harish Doraiswamy, L. G. Nonato, Brian Barr, Claudio Silva
FAtt · 06 Jan 2022

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits
OOD · 05 Dec 2021

Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu
AAML · 08 Nov 2021

Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning
Bang Xiang Yong, Alexandra Brintrup
19 Oct 2021

TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger
MedIm · 16 Oct 2021

Diagnostics-Guided Explanation Generation
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
LRM, FAtt · 08 Sep 2021

A Survey on Automated Fact-Checking
Zhijiang Guo, M. Schlichtkrull, Andreas Vlachos
26 Aug 2021

Semantic Concentration for Domain Adaptation
Shuang Li, Mixue Xie, Fangrui Lv, Chi Harold Liu, Jian Liang, C. Qin, Wei Li
12 Aug 2021

Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff
Michael J. Naylor, C. French, Samantha R. Terker, Uday Kamath
12 Jul 2021

What will it take to generate fairness-preserving explanations?
Jessica Dai, Sohini Upadhyay, Stephen H. Bach, Himabindu Lakkaraju
FAtt, FaML · 24 Jun 2021

On Locality of Local Explanation Models
Sahra Ghalebikesabi, Lucile Ter-Minassian, Karla Diaz-Ordaz, Chris Holmes
FedML, FAtt · 24 Jun 2021

Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu, Sujay Khandagale, Colin White, W. Neiswanger
23 Jun 2021

On the Sensitivity and Stability of Model Interpretations in NLP
Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang
FAtt · 18 Apr 2021

Shapley Explanation Networks
Rui Wang, Xiaoqian Wang, David I. Inouye
TDI, FAtt · 06 Apr 2021

Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing
Ioannis Kakogeorgiou, Konstantinos Karantzalos
XAI · 03 Apr 2021

Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang, Matt Fredrikson, Anupam Datta
OOD, FAtt · 20 Mar 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang
AAML · 16 Mar 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
AAML, FAtt · 25 Feb 2021

Understanding Failures of Deep Networks via Robust Feature Extraction
Sahil Singla, Besmira Nushi, S. Shah, Ece Kamar, Eric Horvitz
FAtt · 03 Dec 2020

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
XAI · 22 Sep 2020

Captum: A unified and generic model interpretability library for PyTorch
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
FAtt · 16 Sep 2020

A simple defense against adversarial attacks on heatmap explanations
Laura Rieger, Lars Kai Hansen
FAtt, AAML · 13 Jul 2020

Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
AAML, FAtt · 26 Jun 2020

Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei-Yue Wang
AAML · 09 Jun 2020

Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt, Adrian Weller, J. M. F. Moura
XAI · 01 May 2020

Model Agnostic Multilevel Explanations
K. Ramamurthy, B. Vinzamuri, Yunfeng Zhang, Amit Dhurandhar
12 Mar 2020

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee
FAtt · 10 Feb 2020

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
FAtt · 17 Oct 2019

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Xia Hu
XAI, ELM · 16 Jul 2019

ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan
AAML · 08 Jun 2019

Methods for Interpreting and Understanding Deep Neural Networks
G. Montavon, Wojciech Samek, K. Müller
FaML · 24 Jun 2017