arXiv 2505.02566 · Cited By

Robustness questions the interpretability of graph neural networks: what to do?
Kirill Lukyanov, Georgii Sazonov, Serafim Boyarsky, Ilya Makarov
5 May 2025 · AAML
Papers citing "Robustness questions the interpretability of graph neural networks: what to do?" (28 of 28 papers shown)
| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs | Haibin Zheng, Haiyang Xiong, Jinyin Chen, Hao-Shang Ma, Guohan Huang | | 31 | 25 Oct 2022 |
| Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis | Xuanyuan Han, Pietro Barbiero, Dobrik Georgiev, Lucie Charlotte Magister, Pietro Lio | MILM | 41 | 22 Aug 2022 |
| Conflicting Interactions Among Protection Mechanisms for Machine Learning Models | S. Szyller, Nadarajah Asokan | AAML | 7 | 05 Jul 2022 |
| A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability | Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Jun Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang | | 144 | 18 Apr 2022 |
| Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation | Sixiao Zhang, Hongxu Chen, Xiangguo Sun, Yicong Li, Guandong Xu | AAML, SSL | 43 | 20 Jan 2022 |
| Connecting Interpretability and Robustness in Decision Trees through Separation | Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri | | 23 | 14 Feb 2021 |
| On Explainability of Graph Neural Networks via Subgraph Explorations | Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, Shuiwang Ji | FAtt | 395 | 09 Feb 2021 |
| CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks | Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri | | 146 | 05 Feb 2021 |
| Membership Inference Attack on Graph Neural Networks | Iyiola E. Olatunji, Wolfgang Nejdl, Megha Khosla | AAML | 102 | 17 Jan 2021 |
| Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review | Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah | CML | 175 | 20 Oct 2020 |
| Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking | Michael Schlichtkrull, Nicola De Cao, Ivan Titov | AI4CE | 220 | 01 Oct 2020 |
| GNNGuard: Defending Graph Neural Networks against Adversarial Attacks | Xiang Zhang, Marinka Zitnik | AAML | 297 | 15 Jun 2020 |
| Explanations can be manipulated and geometry is to blame | Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel | AAML, FAtt | 335 | 19 Jun 2019 |
| Scaleable input gradient regularization for adversarial robustness | Chris Finlay, Adam M. Oberman | AAML | 79 | 27 May 2019 |
| GNNExplainer: Generating Explanations for Graph Neural Networks | Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, J. Leskovec | LLMAG | 1,334 | 10 Mar 2019 |
| Fast Graph Representation Learning with PyTorch Geometric | Matthias Fey, J. E. Lenssen | 3DH, GNN, 3DPC | 4,371 | 06 Mar 2019 |
| Graph Neural Networks: A Review of Methods and Applications | Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, Maosong Sun | AI4CE, GNN | 5,551 | 20 Dec 2018 |
| Adversarial Attacks on Neural Networks for Graph Data | Daniel Zügner, Amir Akbarnejad, Stephan Günnemann | GNN, AAML, OOD | 1,072 | 21 May 2018 |
| A Survey Of Methods For Explaining Black Box Models | Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti | XAI | 3,989 | 06 Feb 2018 |
| Countering Adversarial Images using Input Transformations | Chuan Guo, Mayank Rana, Moustapha Cissé, Laurens van der Maaten | AAML | 1,407 | 31 Oct 2017 |
| Explanation in Artificial Intelligence: Insights from the Social Sciences | Tim Miller | XAI | 4,287 | 22 Jun 2017 |
| Towards Deep Learning Models Resistant to Adversarial Attacks | Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | SILM, OOD | 12,151 | 19 Jun 2017 |
| MagNet: a Two-Pronged Defense against Adversarial Examples | Dongyu Meng, Hao Chen | AAML | 1,209 | 25 May 2017 |
| Towards A Rigorous Science of Interpretable Machine Learning | Finale Doshi-Velez, Been Kim | XAI, FaML | 3,824 | 28 Feb 2017 |
| "Why Should I Trust You?": Explaining the Predictions of Any Classifier | Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin | FAtt, FaML | 17,092 | 16 Feb 2016 |
| Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks | Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami | AAML | 3,078 | 14 Nov 2015 |
| Image-based Recommendations on Styles and Substitutes | Julian McAuley, C. Targett, Javen Qinfeng Shi, Anton Van Den Hengel | | 2,415 | 15 Jun 2015 |
| Explaining and Harnessing Adversarial Examples | Ian Goodfellow, Jonathon Shlens, Christian Szegedy | AAML, GAN | 19,145 | 20 Dec 2014 |