Explainability-based Backdoor Attacks Against Graph Neural Networks

8 April 2021
Jing Xu
Minhui Xue
S. Picek

Papers citing "Explainability-based Backdoor Attacks Against Graph Neural Networks"

43 / 43 papers shown
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks
Xuewen Dong
Jiachen Li
Shujun Li
Zhichao You
Qiang Qu
Yaroslav Kholodov
Yulong Shen
AAML
43
0
0
12 Mar 2025
Graph Neural Backdoor: Fundamentals, Methodologies, Applications, and Future Directions
Xiao Yang
Gaolei Li
Jianhua Li
AAML
AI4CE
51
1
0
08 Jan 2025
DMGNN: Detecting and Mitigating Backdoor Attacks in Graph Neural Networks
Hao Sui
Bing Chen
J. Zhang
Chengcheng Zhu
Di Wu
Qinghua Lu
Guodong Long
AAML
31
1
0
18 Oct 2024
Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models
Xiao Yang
Kai Zhou
Y. Lai
Gaolei Li
AAML
36
0
0
07 Oct 2024
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning
Jiale Zhang
Chengcheng Zhu
Bosen Rao
Hao Sui
Xiaobing Sun
Bing Chen
Chunyi Zhou
Shouling Ji
AAML
38
0
0
02 Oct 2024
Context is the Key: Backdoor Attacks for In-Context Learning with Vision Transformers
Gorka Abad
S. Picek
Lorenzo Cavallaro
A. Urbieta
SILM
44
0
0
06 Sep 2024
Krait: A Backdoor Attack Against Graph Prompt Tuning
Ying Song
Rita Singh
Balaji Palanisamy
AAML
58
0
0
18 Jul 2024
Backdoor Graph Condensation
Jiahao Wu
Ning Lu
Zeiyu Dai
Kun Wang
Wenqi Fan
Shengcai Liu
Qing Li
Ke Tang
AAML
DD
71
6
0
03 Jul 2024
E-SAGE: Explainability-based Defense Against Backdoor Attacks on Graph Neural Networks
Dingqiang Yuan
Xiaohua Xu
Lei Yu
Tongchang Han
Rongchang Li
Meng Han
AAML
32
1
0
15 Jun 2024
GENIE: Watermarking Graph Neural Networks for Link Prediction
Venkata Sai Pranav Bachina
Ankit Gangwal
Aaryan Ajay Sharma
Charu Sharma
50
1
0
07 Jun 2024
A Clean-graph Backdoor Attack against Graph Convolutional Networks with Poisoned Label Only
Jiazhu Dai
Haoyu Sun
AAML
47
2
0
19 Apr 2024
Threats, Attacks, and Defenses in Machine Unlearning: A Survey
Ziyao Liu
Huanyi Ye
Chen Chen
Yongsen Zheng
K. Lam
AAML
MU
35
28
0
20 Mar 2024
Effective backdoor attack on graph neural networks in link prediction tasks
Jiazhu Dai
Haoyu Sun
GNN
61
3
0
05 Jan 2024
A clean-label graph backdoor attack method in node classification task
Xiaogang Xing
Ming Xu
Yujing Bai
Dongdong Yang
AAML
91
7
0
30 Dec 2023
Explainability-Based Adversarial Attack on Graphs Through Edge Perturbation
Dibaloke Chanda
Saba Heidari Gheshlaghi
Nasim Yahya Soltani
AAML
17
0
0
28 Dec 2023
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
Yichen Wan
Youyang Qu
Wei Ni
Yong Xiang
Longxiang Gao
Ekram Hossain
AAML
52
33
0
14 Dec 2023
Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models
Dominik Hintersdorf
Lukas Struppek
Kristian Kersting
SILM
27
4
0
18 Aug 2023
XGBD: Explanation-Guided Graph Backdoor Detection
Zihan Guan
Mengnan Du
Ninghao Liu
AAML
32
9
0
08 Aug 2023
A Survey on Graph Neural Networks for Time Series: Forecasting, Classification, Imputation, and Anomaly Detection
Ming Jin
Huan Yee Koh
Qingsong Wen
Daniele Zambon
Cesare Alippi
G. I. Webb
Irwin King
Shirui Pan
AI4TS
AI4CE
44
143
0
07 Jul 2023
Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network
F. Liu
Siqi Lai
Yansong Ning
Hao Liu
AAML
FedML
29
3
0
17 Jun 2023
Rethinking the Trigger-injecting Position in Graph Backdoor Attack
Jing Xu
Gorka Abad
S. Picek
LLMSV
SILM
27
6
0
05 Apr 2023
Illuminati: Towards Explaining Graph Neural Networks for Cybersecurity Analysis
Haoyu He
Yuede Ji
H. H. Huang
27
20
0
26 Mar 2023
SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification
Gorka Abad
Jing Xu
Stefanos Koffas
Behrad Tajalli
S. Picek
Mauro Conti
AAML
63
5
0
03 Feb 2023
Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
Lukas Struppek
Dominik Hintersdorf
Kristian Kersting
SILM
22
36
0
04 Nov 2022
Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs
Haibin Zheng
Haiyang Xiong
Jinyin Chen
Hao-Shang Ma
Guohan Huang
47
28
0
25 Oct 2022
Defending Against Backdoor Attack on Graph Nerual Network by Explainability
B. Jiang
Zhao Li
AAML
GNN
64
16
0
07 Sep 2022
SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem
D. Vos
Clinton Cao
Luca Pajola
Simon Dieck
Robert Baumgartner
S. Verwer
34
40
0
22 Aug 2022
Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection
Haibin Zheng
Haiyang Xiong
Hao-Shang Ma
Guohan Huang
Jinyin Chen
37
13
0
14 Aug 2022
A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection
Bingzhe Wu
Jintang Li
Junchi Yu
Yatao Bian
Hengtong Zhang
...
Guangyu Sun
Peng Cui
Zibin Zheng
Zhe Liu
P. Zhao
OOD
39
25
0
20 May 2022
Trustworthy Graph Neural Networks: Aspects, Methods and Trends
He Zhang
Bang Wu
Xingliang Yuan
Shirui Pan
Hanghang Tong
Jian Pei
45
104
0
16 May 2022
Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack
Jintang Li
Bingzhe Wu
Chengbin Hou
Guoji Fu
Yatao Bian
Liang Chen
Junzhou Huang
Zibin Zheng
OOD
AAML
32
6
0
15 Feb 2022
More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks
Jing Xu
Rui Wang
Stefanos Koffas
K. Liang
S. Picek
FedML
AAML
36
25
0
07 Feb 2022
Statically Detecting Adversarial Malware through Randomised Chaining
Matthew Crawford
Wei Wang
Ruoxi Sun
Minhui Xue
AAML
26
1
0
28 Nov 2021
Dissecting Malware in the Wild
H. Spencer
Wei Wang
Ruoxi Sun
Minhui Xue
11
1
0
28 Nov 2021
Adversarial Attacks on Graph Classification via Bayesian Optimisation
Xingchen Wan
Henry Kenlay
Binxin Ru
Arno Blaas
Michael A. Osborne
Xiaowen Dong
AAML
29
12
0
04 Nov 2021
Watermarking Graph Neural Networks based on Backdoor Attacks
Jing Xu
Stefanos Koffas
Oguzhan Ersoy
S. Picek
AAML
14
28
0
21 Oct 2021
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction
Jinyin Chen
Haiyang Xiong
Haibin Zheng
Jian Zhang
Guodong Jiang
Yi Liu
AAML
SILM
AI4CE
51
10
0
08 Oct 2021
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
Stefanos Koffas
Jing Xu
Mauro Conti
S. Picek
AAML
22
66
0
30 Jul 2021
EGC2: Enhanced Graph Classification with Easy Graph Compression
Jinyin Chen
Haiyang Xiong
Haibin Zheng
Dunjie Zhang
Jian Zhang
Mingwei Jia
Yi Liu
AAML
26
15
0
16 Jul 2021
Accumulative Poisoning Attacks on Real-time Data
Tianyu Pang
Xiao Yang
Yinpeng Dong
Hang Su
Jun Zhu
32
20
0
18 Jun 2021
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models
Liuqiao Chen
Hu Wang
Benjamin Zi Hao Zhao
Minhui Xue
Hai-feng Qian
PICV
19
4
0
23 Feb 2021
Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Bang Wu
Xiangwen Yang
Shirui Pan
Xingliang Yuan
MIACV
MLAU
55
53
0
24 Oct 2020
With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models
Jialin Wen
Benjamin Zi Hao Zhao
Minhui Xue
Alina Oprea
Hai-feng Qian
AAML
8
19
0
21 Jun 2020