"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution
5 September 2022
Yuyou Gan
Yuhao Mao
Xuhong Zhang
S. Ji
Yuwen Pu
Meng Han
Jianwei Yin
Ting Wang
FAtt
AAML

Papers citing "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution (17 of 17 papers shown)
Probabilistic Stability Guarantees for Feature Attributions (18 Apr 2025). Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong. Citations: 1.
NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification (25 Dec 2021). Haibin Zheng, Zhiqing Chen, Tianyu Du, Xuhong Zhang, Yao Cheng, S. Ji, Jingyi Wang, Yue Yu, Jinyin Chen. Citations: 53.
Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks (25 Mar 2021). Qing-Long Zhang, Lu Rao, Yubin Yang. Citations: 58.
Interpreting Super-Resolution Networks with Local Attribution Maps (22 Nov 2020). Jinjin Gu, Chao Dong. Tags: FAtt, SupR. Citations: 220.
Black-box Explanation of Object Detectors via Saliency Maps (05 Jun 2020). Vitali Petsiuk, R. Jain, Varun Manjunatha, Vlad I. Morariu, Ashutosh Mehra, Vicente Ordonez, Kate Saenko. Tags: FAtt. Citations: 124.
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks (03 Oct 2019). Haofan Wang, Zifan Wang, Mengnan Du, Fan Yang, Zijian Zhang, Sirui Ding, Piotr Mardziel, Xia Hu. Tags: FAtt. Citations: 1,074.
Explanations can be manipulated and geometry is to blame (19 Jun 2019). Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel. Tags: AAML, FAtt. Citations: 334.
Sanity Checks for Saliency Maps (08 Oct 2018). Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim. Tags: FAtt, AAML, XAI. Citations: 1,970.
RISE: Randomized Input Sampling for Explanation of Black-box Models (19 Jun 2018). Vitali Petsiuk, Abir Das, Kate Saenko. Tags: FAtt. Citations: 1,171.
The (Un)reliability of saliency methods (02 Nov 2017). Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim. Tags: FAtt, XAI. Citations: 687.
Interpretable Explanations of Black Boxes by Meaningful Perturbation (11 Apr 2017). Ruth C. Fong, Andrea Vedaldi. Tags: FAtt, AAML. Citations: 1,525.
The Cityscapes Dataset for Semantic Urban Scene Understanding (06 Apr 2016). Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, Bernt Schiele. Citations: 11,641.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
1.2K
17,027
0
16 Feb 2016
Learning Deep Features for Discriminative Localization (14 Dec 2015). Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba. Tags: SSL, SSeg, FAtt. Citations: 9,326.
Sequence to Sequence Learning with Neural Networks (10 Sep 2014). Ilya Sutskever, Oriol Vinyals, Quoc V. Le. Tags: AIMat. Citations: 20,584.
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (20 Dec 2013). Karen Simonyan, Andrea Vedaldi, Andrew Zisserman. Tags: FAtt. Citations: 7,308.
How to Explain Individual Classification Decisions (06 Dec 2009). D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller. Tags: FAtt. Citations: 1,104.