ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

GRAPHSHAP: Explaining Identity-Aware Graph Classifiers Through the Language of Motifs

17 February 2022
Alan Perotti, P. Bajardi, Francesco Bonchi, Andre' Panisson
FAtt
arXiv:2202.08815

Papers citing "GRAPHSHAP: Explaining Identity-Aware Graph Classifiers Through the Language of Motifs"

14 papers shown:

• Counterfactual Graphs for Explainable Classification of Brain Networks
  Carlo Abrate, Francesco Bonchi · CML · 56 citations · 16 Jun 2021
• On Explainability of Graph Neural Networks via Subgraph Explorations
  Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, Shuiwang Ji · FAtt · 392 citations · 09 Feb 2021
• FairLens: Auditing Black-box Clinical Decision Support Systems
  Cecilia Panigutti, Alan Perotti, Andre' Panisson, P. Bajardi, D. Pedreschi · 68 citations · 08 Nov 2020
• Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
  Michael Schlichtkrull, Nicola De Cao, Ivan Titov · AI4CE · 218 citations · 01 Oct 2020
• Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification
  Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjin Wang, Yu Sun · AI4CE · 789 citations · 08 Sep 2020
• True to the Model or True to the Data?
  Hugh Chen, Joseph D. Janizek, Scott M. Lundberg, Su-In Lee · TDI, FAtt · 166 citations · 29 Jun 2020
• Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy
  B. Shneiderman · 703 citations · 10 Feb 2020
• GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
  Q. Huang, M. Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi-Ju Chang · FAtt · 357 citations · 17 Jan 2020
• Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
  Vanessa Buhrmester, David Münch, Michael Arens · MLAU, FaML, XAI, AAML · 364 citations · 27 Nov 2019
• Explainability Techniques for Graph Convolutional Networks
  Federico Baldassarre, Hossein Azizpour · GNN, FAtt · 270 citations · 31 May 2019
• GNNExplainer: Generating Explanations for Graph Neural Networks
  Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, J. Leskovec · LLMAG · 1,328 citations · 10 Mar 2019
• Graph Neural Networks: A Review of Methods and Applications
  Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, Maosong Sun · AI4CE, GNN · 5,527 citations · 20 Dec 2018
• A Survey Of Methods For Explaining Black Box Models
  Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti · XAI · 3,967 citations · 06 Feb 2018
• "Why Should I Trust You?": Explaining the Predictions of Any Classifier
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 17,027 citations · 16 Feb 2016