Backdoor Attacks to Graph Neural Networks (arXiv:2006.11165)

19 June 2020
Zaixi Zhang
Jinyuan Jia
Binghui Wang
Neil Zhenqiang Gong
    GNN

Papers citing "Backdoor Attacks to Graph Neural Networks"

50 / 52 papers shown
Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks
Xuewen Dong
Jiachen Li
Shujun Li
Zhichao You
Qiang Qu
Yaroslav Kholodov
Yulong Shen
AAML
40
0
0
12 Mar 2025
MADE: Graph Backdoor Defense with Masked Unlearning
Xiao Lin
Mingjie Li
Yisen Wang
AAML
95
1
0
03 Jan 2025
Backdoor Attack on Vertical Federated Graph Neural Network Learning
Jirui Yang
Peng Chen
Zhihui Lu
Ruijun Deng
Qiang Duan
Jianping Zeng
AAML
FedML
138
0
0
15 Oct 2024
CAT: Concept-level backdoor ATtacks for Concept Bottleneck Models
Songning Lai
Jiayu Yang
Yu Huang
Lijie Hu
Tianlang Xue
Zhangyi Hu
Jiaxu Li
Haicheng Liao
Yutao Yue
34
1
0
07 Oct 2024
Krait: A Backdoor Attack Against Graph Prompt Tuning
Ying Song
Rita Singh
Balaji Palanisamy
AAML
55
0
0
18 Jul 2024
Backdoor Graph Condensation
Jiahao Wu
Ning Lu
Zeyu Dai
Kun Wang
Wenqi Fan
Shengcai Liu
Qing Li
Ke Tang
AAML
DD
69
5
0
03 Jul 2024
Link Stealing Attacks Against Inductive Graph Neural Networks
Yixin Wu
Xinlei He
Pascal Berrang
Mathias Humbert
Michael Backes
Neil Zhenqiang Gong
Yang Zhang
36
2
0
09 May 2024
A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges
Wei Ju
Siyu Yi
Yifan Wang
Zhiping Xiao
Zhengyan Mao
...
Senzhang Wang
Xinwang Liu
Xiao Luo
Philip S. Yu
Ming Zhang
AI4CE
36
35
0
07 Mar 2024
Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu
Michael K. Reiter
Neil Zhenqiang Gong
AAML
33
2
0
22 Feb 2024
Use of Graph Neural Networks in Aiding Defensive Cyber Operations
Shaswata Mitra
Trisha Chakraborty
Subash Neupane
Aritran Piplai
Sudip Mittal
AAML
42
3
0
11 Jan 2024
XGBD: Explanation-Guided Graph Backdoor Detection
Zihan Guan
Mengnan Du
Ninghao Liu
AAML
29
9
0
08 Aug 2023
An Equivariant Generative Framework for Molecular Graph-Structure Co-Design
Zaixin Zhang
Qi Liu
Cheekong Lee
Chang-Yu Hsieh
Enhong Chen
19
18
0
12 Apr 2023
A Comprehensive Survey on Deep Graph Representation Learning
Wei Ju
Zheng Fang
Yiyang Gu
Zequn Liu
Qingqing Long
...
Jingyang Yuan
Yusheng Zhao
Yifan Wang
Xiao Luo
Ming Zhang
GNN
AI4TS
51
141
0
11 Apr 2023
Graph Neural Networks for Hardware Vulnerability Analysis -- Can you Trust your GNN?
Lilas Alrahis
Ozgur Sinanoglu
25
2
0
29 Mar 2023
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM
AAML
33
20
0
14 Feb 2023
SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification
Gorka Abad
Jing Xu
Stefanos Koffas
Behrad Tajalli
S. Picek
Mauro Conti
AAML
63
5
0
03 Feb 2023
Backdoor Attacks Against Dataset Distillation
Yugeng Liu
Zheng Li
Michael Backes
Yun Shen
Yang Zhang
DD
36
28
0
03 Jan 2023
Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
Lukas Struppek
Dominik Hintersdorf
Kristian Kersting
SILM
22
36
0
04 Nov 2022
Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs
Haibin Zheng
Haiyang Xiong
Jinyin Chen
Hao-Shang Ma
Guohan Huang
47
28
0
25 Oct 2022
Defending Against Backdoor Attack on Graph Nerual Network by Explainability
B. Jiang
Zhao Li
AAML
GNN
61
16
0
07 Sep 2022
Transferable Graph Backdoor Attack
Shuiqiao Yang
Bao Gia Doan
Paul Montague
O. Vel
Tamas Abraham
S. Çamtepe
D. Ranasinghe
S. Kanhere
AAML
34
36
0
21 Jun 2022
A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection
Bingzhe Wu
Jintang Li
Junchi Yu
Yatao Bian
Hengtong Zhang
...
Guangyu Sun
Peng Cui
Zibin Zheng
Zhe Liu
P. Zhao
OOD
37
25
0
20 May 2022
Trustworthy Graph Neural Networks: Aspects, Methods and Trends
He Zhang
Bang Wu
Xingliang Yuan
Shirui Pan
Hanghang Tong
Jian Pei
45
104
0
16 May 2022
Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees
Binghui Wang
Youqin Li
Pan Zhou
AAML
28
13
0
07 May 2022
Reinforcement learning on graphs: A survey
Mingshuo Nie
Dongming Chen
Dongqi Wang
33
45
0
13 Apr 2022
Projective Ranking-based GNN Evasion Attacks
He Zhang
Xingliang Yuan
Chuan Zhou
Shirui Pan
AAML
39
23
0
25 Feb 2022
Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack
Jintang Li
Bingzhe Wu
Chengbin Hou
Guoji Fu
Yatao Bian
Liang Chen
Junzhou Huang
Zibin Zheng
OOD
AAML
32
6
0
15 Feb 2022
More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks
Jing Xu
Rui Wang
Stefanos Koffas
K. Liang
S. Picek
FedML
AAML
36
25
0
07 Feb 2022
Neighboring Backdoor Attacks on Graph Convolutional Network
Liang Chen
Qibiao Peng
Jintang Li
Yang Liu
Jiawei Chen
Yong Li
Zibin Zheng
GNN
AAML
32
11
0
17 Jan 2022
Model Stealing Attacks Against Inductive Graph Neural Networks
Yun Shen
Xinlei He
Yufei Han
Yang Zhang
19
60
0
15 Dec 2021
Safe Distillation Box
Jingwen Ye
Yining Mao
Jie Song
Xinchao Wang
Cheng Jin
Xiuming Zhang
AAML
21
13
0
05 Dec 2021
Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning
Z. Bilgin
FedML
AAML
24
1
0
29 Nov 2021
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi
Tinghao Xie
Ruizhe Pan
Jifeng Zhu
Yong-Liang Yang
Kai Bu
AAML
27
57
0
25 Nov 2021
Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction
Jinyin Chen
Haiyang Xiong
Haibin Zheng
Jian Zhang
Guodong Jiang
Yi Liu
AAML
SILM
AI4CE
48
10
0
08 Oct 2021
FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
J. Breier
Xiaolu Hou
Martín Ochoa
Jesus Solano
SILM
AAML
39
8
0
23 Sep 2021
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud
Guohao Li
AAML
18
13
0
12 Sep 2021
A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
Jiaming Mu
Binghui Wang
Qi Li
Kun Sun
Mingwei Xu
Zhuotao Liu
AAML
23
33
0
21 Aug 2021
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia
Yupei Liu
Neil Zhenqiang Gong
SILM
SSL
24
151
0
01 Aug 2021
A Comprehensive Survey on Graph Anomaly Detection with Deep Learning
Xiaoxiao Ma
Jia Wu
Shan Xue
Jian Yang
Chuan Zhou
Quan Z. Sheng
Hui Xiong
Leman Akoglu
GNN
AI4TS
37
538
0
14 Jun 2021
Hidden Backdoors in Human-Centric Language Models
Shaofeng Li
Hui Liu
Tian Dong
Benjamin Zi Hao Zhao
Minhui Xue
Haojin Zhu
Jialiang Lu
SILM
27
143
0
01 May 2021
Explainability-based Backdoor Attacks Against Graph Neural Networks
Jing Xu
Minhui Xue
S. Picek
23
74
0
08 Apr 2021
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu
Guangyu Shen
Guanhong Tao
Zhenting Wang
Shiqing Ma
Xinming Zhang
AAML
27
8
0
16 Mar 2021
GraphAttacker: A General Multi-Task GraphAttack Framework
Jinyin Chen
Dunjie Zhang
Zhaoyan Ming
Kejie Huang
Wenrong Jiang
Chen Cui
AAML
36
14
0
18 Jan 2021
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
D. Song
A. Madry
Bo-wen Li
Tom Goldstein
SILM
18
270
0
18 Dec 2020
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
Han Qiu
Yi Zeng
Shangwei Guo
Tianwei Zhang
Meikang Qiu
B. Thuraisingham
AAML
24
191
0
13 Dec 2020
Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection
Hao Fu
A. Veldanda
Prashanth Krishnamurthy
S. Garg
Farshad Khorrami
AAML
27
14
0
04 Nov 2020
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia
Binghui Wang
Neil Zhenqiang Gong
AAML
29
5
0
26 Oct 2020
Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Bang Wu
Xiangwen Yang
Shirui Pan
Xingliang Yuan
MIACV
MLAU
55
53
0
24 Oct 2020
Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs
Houxiang Fan
Binghui Wang
Pan Zhou
Ang Li
Meng Pang
Zichuan Xu
Cai Fu
H. Li
Yiran Chen
AAML
MLAU
14
16
0
01 Sep 2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao
Bao Gia Doan
Zhi-Li Zhang
Siqi Ma
Jiliang Zhang
Anmin Fu
Surya Nepal
Hyoungshick Kim
AAML
36
220
0
21 Jul 2020