ResearchTrend.AI
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning

2 October 2024
Jiale Zhang, Chengcheng Zhu, Bosen Rao, Hao Sui, Xiaobing Sun, Bing Chen, Chunyi Zhou, Shouling Ji
AAML
arXiv: 2410.01272

Papers citing '"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning'

16 papers shown
Rethinking the Trigger-injecting Position in Graph Backdoor Attack
Jing Xu, Gorka Abad, S. Picek
LLMSV, SILM
71 / 7 / 0
05 Apr 2023

Unnoticeable Backdoor Attacks on Graph Neural Networks
Enyan Dai, Minhua Lin, Xiang Zhang, Suhang Wang
AAML
102 / 54 / 0
11 Feb 2023

Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs
Haibin Zheng, Haiyang Xiong, Jinyin Chen, Hao-Shang Ma, Guohan Huang
104 / 31 / 0
25 Oct 2022

Transferable Graph Backdoor Attack
Shuiqiao Yang, Bao Gia Doan, Paul Montague, O. Vel, Tamas Abraham, S. Çamtepe, Damith C. Ranasinghe, S. Kanhere
AAML
84 / 39 / 0
21 Jun 2022

Neighboring Backdoor Attacks on Graph Convolutional Network
Liang Chen, Qibiao Peng, Jintang Li, Yang Liu, Jiawei Chen, Yong Li, Zibin Zheng
GNN, AAML
75 / 11 / 0
17 Jan 2022

Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Yue Liu, Xingjun Ma
OnRL
93 / 336 / 0
22 Oct 2021

On Explainability of Graph Neural Networks via Subgraph Explorations
Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, Shuiwang Ji
FAtt
83 / 393 / 0
09 Feb 2021

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Bang Wu, Xiangwen Yang, Shirui Pan, Lizhen Qu
MIACV, MLAU
98 / 55 / 0
24 Oct 2020

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
AAML
96 / 516 / 0
05 Jul 2020

Graph Backdoor
Zhaohan Xi, Ren Pang, S. Ji, Ting Wang
AI4CE, AAML
63 / 171 / 0
21 Jun 2020

Backdoor Attacks to Graph Neural Networks
Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
GNN
85 / 219 / 0
19 Jun 2020

XGNN: Towards Model-Level Explanations of Graph Neural Networks
Haonan Yuan, Jiliang Tang, Helen Zhou, Shuiwang Ji
88 / 401 / 0
03 Jun 2020

Rethinking the Trigger of Backdoor Attack
Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shutao Xia
LLMSV
78 / 151 / 0
09 Apr 2020

Explainability Techniques for Graph Convolutional Networks
Federico Baldassarre, Hossein Azizpour
GNN, FAtt
178 / 272 / 0
31 May 2019

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, Basel Alomair
AAML, SILM
146 / 1,854 / 0
15 Dec 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt
193 / 6,027 / 0
04 Mar 2017
