ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.
Interpretation of Neural Networks is Fragile

29 October 2017
Amirata Ghorbani
Abubakar Abid
James Zou
    FAtt
    AAML

Papers citing "Interpretation of Neural Networks is Fragile"

50 / 467 papers shown
Evaluation of Neural Networks Defenses and Attacks using NDCG and Reciprocal Rank Metrics
Haya Brama
L. Dery
Tal Grinshpoun
AAML
19
7
0
10 Jan 2022
Topological Representations of Local Explanations
Peter Xenopoulos
G. Chan
Harish Doraiswamy
L. G. Nonato
Brian Barr
Claudio Silva
FAtt
28
4
0
06 Jan 2022
GPEX, A Framework For Interpreting Artificial Neural Networks
Amir Akbarnejad
G. Bigras
Nilanjan Ray
47
4
0
18 Dec 2021
Temporal-Spatial Causal Interpretations for Vision-Based Reinforcement Learning
Wenjie Shi
Gao Huang
Shiji Song
Cheng Wu
31
9
0
06 Dec 2021
Multi-objective Explanations of GNN Predictions
Yifei Liu
Chao Chen
Yazheng Liu
Xi Zhang
Sihong Xie
18
13
0
29 Nov 2021
Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail
H. C. Bravo
S. Feizi
FAtt
20
80
0
29 Nov 2021
Selective Ensembles for Consistent Predictions
Emily Black
Klas Leino
Matt Fredrikson
20
21
0
16 Nov 2021
Statistical Perspectives on Reliability of Artificial Intelligence Systems
Yili Hong
J. Lian
Li Xu
Jie Min
Yueyao Wang
Laura J. Freeman
Xinwei Deng
30
30
0
09 Nov 2021
Defense Against Explanation Manipulation
Ruixiang Tang
Ninghao Liu
Fan Yang
Na Zou
Xia Hu
AAML
44
11
0
08 Nov 2021
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel
Rémi Cadène
Mathieu Chalvidal
Matthieu Cord
David Vigouroux
Thomas Serre
MLAU
FAtt
AAML
117
58
0
07 Nov 2021
Callee: Recovering Call Graphs for Binaries with Transfer and Contrastive Learning
Wenyu Zhu
Zhiyao Feng
Zihan Zhang
Jian-jun Chen
Zhijian Ou
Min Yang
Chao Zhang
AAML
14
8
0
02 Nov 2021
Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin
Henry C. Woodruff
A. Chatterjee
Philippe Lambin
24
303
0
01 Nov 2021
A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra
Sanghamitra Dutta
Jason Long
Daniele Magazzeni
AAML
14
58
0
30 Oct 2021
On the explainability of hospitalization prediction on a large COVID-19 patient dataset
Ivan Girardi
P. Vagenas
Dario Arcos-Díaz
Lydia Bessaï
Alexandra Büsser
...
R. Furlan
Mauro Gatti
Andrea Giovannini
Ellen Hoeven
Chiara Marchiori
FAtt
24
3
0
28 Oct 2021
Provably Robust Model-Centric Explanations for Critical Decision-Making
Cecilia G. Morales
Nick Gisolfi
R. Edman
J. K. Miller
A. Dubrawski
16
0
0
26 Oct 2021
Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning
Bang Xiang Yong
Alexandra Brintrup
21
6
0
19 Oct 2021
The Irrationality of Neural Rationale Models
Yiming Zheng
Serena Booth
J. Shah
Yilun Zhou
35
16
0
14 Oct 2021
Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools
Davis Brown
Henry Kvinge
AAML
45
7
0
14 Oct 2021
CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq
B. He
M. Thomson
Meena Subramaniam
Richard K. Perez
Chun Jimmie Ye
J. Zou
6
15
0
13 Oct 2021
Implicit Bias of Linear Equivariant Networks
Hannah Lawrence
Kristian Georgiev
A. Dienes
B. Kiani
AI4CE
40
14
0
12 Oct 2021
Consistent Counterfactuals for Deep Models
Emily Black
Zifan Wang
Matt Fredrikson
Anupam Datta
BDL
OffRL
OOD
55
43
0
06 Oct 2021
NEWRON: A New Generalization of the Artificial Neuron to Enhance the Interpretability of Neural Networks
F. Siciliano
Maria Sofia Bucarelli
Gabriele Tolomei
Fabrizio Silvestri
GNN
AI4CE
22
6
0
05 Oct 2021
AdjointBackMapV2: Precise Reconstruction of Arbitrary CNN Unit's Activation via Adjoint Operators
Qing Wan
Siu Wun Cheung
Yoonsuck Choe
29
0
0
04 Oct 2021
Trustworthy AI: From Principles to Practices
Bo-wen Li
Peng Qi
Bo Liu
Shuai Di
Jingen Liu
Jiquan Pei
Jinfeng Yi
Bowen Zhou
119
356
0
04 Oct 2021
Adversarial Regression with Doubly Non-negative Weighting Matrices
Tam Le
Truyen V. Nguyen
M. Yamada
Jose H. Blanchet
Viet Anh Nguyen
27
5
0
30 Sep 2021
Deep neural networks with controlled variable selection for the identification of putative causal genetic variants
P. H. Kassani
Fred Lu
Yann Le Guen
Zihuai He
18
12
0
29 Sep 2021
Discriminative Attribution from Counterfactuals
N. Eckstein
A. S. Bates
G. Jefferis
Jan Funke
FAtt
CML
27
1
0
28 Sep 2021
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
Dongqi Han
Zhiliang Wang
Wenqi Chen
Ying Zhong
Su Wang
Han Zhang
Jiahai Yang
Xingang Shi
Xia Yin
AAML
24
76
0
23 Sep 2021
Ranking Feature-Block Importance in Artificial Multiblock Neural Networks
Anna Jenul
Stefan Schrunner
B. Huynh
Runar Helin
C. Futsaether
K. H. Liland
O. Tomic
FAtt
24
1
0
21 Sep 2021
FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging
Karim Lekadira
Richard Osuala
C. Gallin
Noussair Lazrak
Kaisar Kushibar
...
Nickolas Papanikolaou
Zohaib Salahuddin
Henry C. Woodruff
Philippe Lambin
L. Martí-Bonmatí
AI4TS
71
56
0
20 Sep 2021
Self-learn to Explain Siamese Networks Robustly
Chao Chen
Yifan Shen
Guixiang Ma
Xiangnan Kong
S. Rangarajan
Xi Zhang
Sihong Xie
46
5
0
15 Sep 2021
Rationales for Sequential Predictions
Keyon Vafa
Yuntian Deng
David M. Blei
Alexander M. Rush
12
33
0
14 Sep 2021
Logic Traps in Evaluating Attribution Scores
Yiming Ju
Yuanzhe Zhang
Zhao Yang
Zhongtao Jiang
Kang Liu
Jun Zhao
XAI
FAtt
30
18
0
12 Sep 2021
EG-Booster: Explanation-Guided Booster of ML Evasion Attacks
Abderrahmen Amich
Birhanu Eshete
AAML
11
8
0
31 Aug 2021
Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience
G. Chrysostomou
Nikolaos Aletras
32
16
0
31 Aug 2021
Finding Representative Interpretations on Convolutional Neural Networks
P. C. Lam
Lingyang Chu
Maxim Torgonskiy
J. Pei
Yong Zhang
Lanjun Wang
FAtt
SSL
HAI
27
6
0
13 Aug 2021
Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks
Zitao Chen
Pritam Dash
Karthik Pattabiraman
AAML
24
18
0
11 Aug 2021
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing
Sanchit Sinha
Hanjie Chen
Arshdeep Sekhon
Yangfeng Ji
Yanjun Qi
AAML
FAtt
28
42
0
11 Aug 2021
Harnessing value from data science in business: ensuring explainability and fairness of solutions
Krzysztof Chomiak
Michal Miktus
13
0
0
10 Aug 2021
Explainable AI and susceptibility to adversarial attacks: a case study in classification of breast ultrasound images
Hamza Rasaee
H. Rivaz
AAML
18
18
0
09 Aug 2021
Jointly Attacking Graph Neural Network and its Explanations
Wenqi Fan
Wei Jin
Xiaorui Liu
Han Xu
Xianfeng Tang
Suhang Wang
Qing Li
Jiliang Tang
Jianping Wang
Charu C. Aggarwal
AAML
42
28
0
07 Aug 2021
Resisting Out-of-Distribution Data Problem in Perturbation of XAI
Luyu Qiu
Yi Yang
Caleb Chen Cao
Jing Liu
Yueyuan Zheng
H. Ngai
J. H. Hsiao
Lei Chen
9
18
0
27 Jul 2021
Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks
Ian E. Nielsen
Dimah Dera
Ghulam Rasool
N. Bouaynaya
R. Ramachandran
FAtt
20
79
0
23 Jul 2021
Trustworthy AI: A Computational Perspective
Haochen Liu
Yiqi Wang
Wenqi Fan
Xiaorui Liu
Yaxin Li
Shaili Jain
Yunhao Liu
Anil K. Jain
Jiliang Tang
FaML
104
196
0
12 Jul 2021
Robust Counterfactual Explanations on Graph Neural Networks
Mohit Bajaj
Lingyang Chu
Zihui Xue
J. Pei
Lanjun Wang
P. C. Lam
Yong Zhang
OOD
43
96
0
08 Jul 2021
When and How to Fool Explainable Models (and Humans) with Adversarial Examples
Jon Vadillo
Roberto Santana
Jose A. Lozano
SILM
AAML
36
12
0
05 Jul 2021
Certifiably Robust Interpretation via Renyi Differential Privacy
Ao Liu
Xiaoyu Chen
Sijia Liu
Lirong Xia
Chuang Gan
AAML
19
11
0
04 Jul 2021
Explanation-Guided Diagnosis of Machine Learning Evasion Attacks
Abderrahmen Amich
Birhanu Eshete
AAML
17
10
0
30 Jun 2021
On Locality of Local Explanation Models
Sahra Ghalebikesabi
Lucile Ter-Minassian
Karla Diaz-Ordaz
Chris Holmes
FedML
FAtt
26
39
0
24 Jun 2021
Guided Integrated Gradients: An Adaptive Path Method for Removing Noise
A. Kapishnikov
Subhashini Venugopalan
Besim Avci
Benjamin D. Wedin
Michael Terry
Tolga Bolukbasi
30
90
0
17 Jun 2021