"Is your explanation stable?": A Robustness Evaluation Framework for
  Feature Attribution

"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution

5 September 2022
Yuyou Gan, Yuhao Mao, Xuhong Zhang, S. Ji, Yuwen Pu, Meng Han, Jianwei Yin, Ting Wang
FAtt, AAML

Papers citing "Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution

17 / 17 papers shown
Probabilistic Stability Guarantees for Feature Attributions
Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong
119 · 1 · 0 · 18 Apr 2025
NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification
Haibin Zheng, Zhiqing Chen, Tianyu Du, Xuhong Zhang, Yao Cheng, S. Ji, Jingyi Wang, Yue Yu, Jinyin Chen
70 · 53 · 0 · 25 Dec 2021
Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks
Qing-Long Zhang, Lu Rao, Yubin Yang
47 · 58 · 0 · 25 Mar 2021
Interpreting Super-Resolution Networks with Local Attribution Maps
Jinjin Gu, Chao Dong
FAtt, SupR
60 · 220 · 0 · 22 Nov 2020
Black-box Explanation of Object Detectors via Saliency Maps
Vitali Petsiuk, R. Jain, Varun Manjunatha, Vlad I. Morariu, Ashutosh Mehra, Vicente Ordonez, Kate Saenko
FAtt
58 · 124 · 0 · 05 Jun 2020
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks
Mehdi Neshat, Zifan Wang, Bradley Alexander, Fan Yang, Zijian Zhang, Sirui Ding, Markus Wagner, Xia Hu
FAtt
93 · 1,074 · 0 · 03 Oct 2019
Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
AAML, FAtt
81 · 334 · 0 · 19 Jun 2019
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI
141 · 1,970 · 0 · 08 Oct 2018
RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt
181 · 1,171 · 0 · 19 Jun 2018
The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
FAtt, XAI
101 · 687 · 0 · 02 Nov 2017
Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong, Andrea Vedaldi
FAtt, AAML
76 · 1,525 · 0 · 11 Apr 2017
The Cityscapes Dataset for Semantic Urban Scene Understanding
Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, Bernt Schiele
1.1K · 11,641 · 0 · 06 Apr 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,027
0
16 Feb 2016
Learning Deep Features for Discriminative Localization
Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba
SSL, SSeg, FAtt
250 · 9,326 · 0 · 14 Dec 2015
Sequence to Sequence Learning with Neural Networks
Ilya Sutskever, Oriol Vinyals, Quoc V. Le
AIMat
437 · 20,584 · 0 · 10 Sep 2014
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt
312 · 7,308 · 0 · 20 Dec 2013
How to Explain Individual Classification Decisions
D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller
FAtt
137 · 1,104 · 0 · 06 Dec 2009