arXiv:2211.04780
Cited By
On the Robustness of Explanations of Deep Neural Network Models: A Survey
9 November 2022
Amlan Jyoti
Karthik Balaji Ganesh
Manoj Gayala
Nandita Lakshmi Tunuguntla
Sandesh Kamath
V. Balasubramanian
XAI
FAtt
AAML
Papers citing
"On the Robustness of Explanations of Deep Neural Network Models: A Survey"
50 / 90 papers shown
A Survey on Physical Adversarial Attack in Computer Vision
Donghua Wang
Wen Yao
Tingsong Jiang
Guijian Tang
Xiaoqian Chen
AAML
113
39
0
28 Sep 2022
Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective
Mark Huasong Meng
Guangdong Bai
Sin Gee Teo
Zhe Hou
Yan Xiao
Yun Lin
Jin Song Dong
AAML
68
44
0
24 Jun 2022
Fooling Explanations in Text Classifiers
Adam Ivankay
Ivan Girardi
Chiara Marchiori
P. Frossard
AAML
66
19
0
07 Jun 2022
How explainable are adversarially-robust CNNs?
Mehdi Nourelahi
Lars Kotthoff
Peijie Chen
Anh Totti Nguyen
AAML
FAtt
59
8
0
25 May 2022
Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection
Fan Wang
A. Kong
155
10
0
15 May 2022
Rethinking Stability for Attribution-based Explanations
Chirag Agarwal
Nari Johnson
Martin Pawelczyk
Satyapriya Krishna
Eshika Saxena
Marinka Zitnik
Himabindu Lakkaraju
FAtt
71
53
0
14 Mar 2022
A Survey of Adversarial Defences and Robustness in NLP
Shreyansh Goyal
Sumanth Doddapaneni
Mitesh M. Khapra
B. Ravindran
AAML
77
30
0
12 Mar 2022
Saliency Diversified Deep Ensemble for Robustness to Adversaries
Alexander A. Bogun
Dimche Kostadinov
Damian Borth
AAML
FedML
33
5
0
07 Dec 2021
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing
Sanchit Sinha
Hanjie Chen
Arshdeep Sekhon
Yangfeng Ji
Yanjun Qi
AAML
FAtt
60
41
0
11 Aug 2021
Advances in adversarial attacks and defenses in computer vision: A survey
Naveed Akhtar
Ajmal Mian
Navid Kardan
M. Shah
AAML
88
240
0
01 Aug 2021
Certifiably Robust Interpretation via Renyi Differential Privacy
Ao Liu
Xiaoyu Chen
Sijia Liu
Lirong Xia
Chuang Gan
AAML
47
14
0
04 Jul 2021
Certification of embedded systems based on Machine Learning: A survey
Guillaume Vidot
Christophe Gabreau
I. Ober
Iulian Ober
39
12
0
14 Jun 2021
Fooling Partial Dependence via Data Poisoning
Hubert Baniecki
Wojciech Kretowicz
P. Biecek
AAML
51
23
0
26 May 2021
A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi
Rahee Walambe
K. Kotecha
113
141
0
17 May 2021
Improving Attribution Methods by Learning Submodular Functions
Piyushi Manupriya
Tarun Ram Menta
S. Jagarlapudi
V. Balasubramanian
TDI
56
6
0
19 Apr 2021
Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
Thomas Rojat
Raphael Puget
David Filliat
Javier Del Ser
R. Gelin
Natalia Díaz Rodríguez
XAI
AI4TS
86
135
0
02 Apr 2021
Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang
Matt Fredrikson
Anupam Datta
OOD
FAtt
64
26
0
20 Mar 2021
Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin
Chaofan Chen
Zhi Chen
Haiyang Huang
Lesia Semenova
Chudi Zhong
FaML
AI4CE
LRM
218
672
0
20 Mar 2021
Evaluating Robustness of Counterfactual Explanations
André Artelt
Valerie Vaquet
Riza Velioglu
Fabian Hinder
Johannes Brinkrolf
M. Schilling
Barbara Hammer
91
46
0
03 Mar 2021
A Survey On Universal Adversarial Attack
Chaoning Zhang
Philipp Benz
Chenguo Lin
Adil Karjauv
Jing Wu
In So Kweon
AAML
60
93
0
02 Mar 2021
Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
Sushant Agarwal
S. Jabbari
Chirag Agarwal
Sohini Upadhyay
Zhiwei Steven Wu
Himabindu Lakkaraju
FAtt
AAML
66
63
0
21 Feb 2021
Explainable Artificial Intelligence Approaches: A Survey
Sheikh Rabiul Islam
W. Eberle
S. Ghafoor
Mohiuddin Ahmed
XAI
73
103
0
23 Jan 2021
Enhanced Regularizers for Attributional Robustness
A. Sarkar
Anirban Sarkar
V. Balasubramanian
41
16
0
28 Dec 2020
Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski
Christopher J. Anders
K. Müller
Pan Kessel
FAtt
75
63
0
18 Dec 2020
A Survey on the Explainability of Supervised Machine Learning
Nadia Burkart
Marco F. Huber
FaML
XAI
53
774
0
16 Nov 2020
FAR: A General Framework for Attributional Robustness
Adam Ivankay
Ivan Girardi
Chiara Marchiori
P. Frossard
OOD
71
22
0
14 Oct 2020
How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
Thomas Fel
David Vigouroux
Rémi Cadène
Thomas Serre
XAI
FAtt
65
31
0
07 Sep 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban
E. Poll
Joost Visser
AAML
94
73
0
07 Aug 2020
Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging
N. Arun
N. Gaw
P. Singh
Ken Chang
M. Aggarwal
...
J. Patel
M. Gidwani
Julius Adebayo
M. D. Li
Jayashree Kalpathy-Cramer
FAtt
79
110
0
06 Aug 2020
Adversarial Attacks against Face Recognition: A Comprehensive Study
Fatemeh Vakhshiteh
A. Nickabadi
Raghavendra Ramachandra
AAML
55
16
0
22 Jul 2020
A simple defense against adversarial attacks on heatmap explanations
Laura Rieger
Lars Kai Hansen
FAtt
AAML
76
37
0
13 Jul 2020
Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey
S. Silva
Peyman Najafirad
AAML
OOD
56
134
0
01 Jul 2020
Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy
Sijia Liu
Gaoyuan Zhang
Cynthia Liu
Pin-Yu Chen
Shiyu Chang
Luca Daniel
AAML
FAtt
116
66
0
26 Jun 2020
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das
P. Rad
XAI
155
603
0
16 Jun 2020
On Saliency Maps and Adversarial Robustness
Puneet Mangla
Vedant Singh
V. Balasubramanian
AAML
26
17
0
14 Jun 2020
Evaluations and Methods for Explanation through Robustness Analysis
Cheng-Yu Hsieh
Chih-Kuan Yeh
Xuanqing Liu
Pradeep Ravikumar
Seungyeon Kim
Sanjiv Kumar
Cho-Jui Hsieh
XAI
50
58
0
31 May 2020
Adversarial Attacks and Defense on Texts: A Survey
A. Huq
Mst. Tasnim Pervin
AAML
130
21
0
28 May 2020
Universal Adversarial Perturbations: A Survey
Ashutosh Chaubey
Nikhil Agrawal
Kavya Barnwal
K. K. Guliani
Pramod Mehta
OOD
AAML
80
47
0
16 May 2020
Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt
Adrian Weller
J. M. F. Moura
XAI
92
224
0
01 May 2020
IROF: a low resource evaluation metric for explanation methods
Laura Rieger
Lars Kai Hansen
57
55
0
09 Mar 2020
Overfitting in adversarially robust deep learning
Leslie Rice
Eric Wong
Zico Kolter
108
804
0
26 Feb 2020
DANCE: Enhancing saliency maps using decoys
Y. Lu
Wenbo Guo
Masashi Sugiyama
William Stafford Noble
AAML
52
14
0
03 Feb 2020
"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
Himabindu Lakkaraju
Osbert Bastani
65
255
0
15 Nov 2019
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack
Sophie Hilgard
Emily Jia
Sameer Singh
Himabindu Lakkaraju
FAtt
AAML
MLAU
77
819
0
06 Nov 2019
A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Erico Tjoa
Cuntai Guan
XAI
106
1,447
0
17 Jul 2019
Improving performance of deep learning models with axiomatic attribution priors and expected gradients
G. Erion
Joseph D. Janizek
Pascal Sturmfels
Scott M. Lundberg
Su-In Lee
OOD
BDL
FAtt
61
81
0
25 Jun 2019
Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski
Maximilian Alber
Christopher J. Anders
M. Ackermann
K. Müller
Pan Kessel
AAML
FAtt
81
332
0
19 Jun 2019
Robust Attribution Regularization
Jiefeng Chen
Xi Wu
Vaibhav Rastogi
Yingyu Liang
S. Jha
OOD
50
83
0
23 May 2019
On the Connection Between Adversarial Robustness and Saliency Map Interpretability
Christian Etmann
Sebastian Lunz
Peter Maass
Carola-Bibiane Schönlieb
AAML
FAtt
58
162
0
10 May 2019
Towards a Robust Deep Neural Network in Texts: A Survey
Wenqi Wang
Benxiao Tang
Run Wang
Lina Wang
Aoshuang Ye
AAML
50
39
0
12 Feb 2019