A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning

30 October 2019 (arXiv:1910.14147)
Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh
AAML

Papers citing "A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning"

11 papers shown

RIDA: A Robust Attack Framework on Incomplete Graphs
Jianke Yu, Hanchen Wang, Chen Chen, Xiaoyang Wang, Wenjie Zhang, Ying Zhang, Xijuan Liu
GNN, OOD, AAML
25 Jul 2024

Everything Perturbed All at Once: Enabling Differentiable Graph Attacks
Haoran Liu, Bokun Wang, Jianling Wang, Xiangjue Dong, Tianbao Yang, James Caverlee
AAML
29 Aug 2023

Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector
Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam
AAML
03 Jan 2023

Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Marissa Connor, Vincent Emanuele
SILM, AAML
05 Dec 2022

Model Inversion Attacks against Graph Neural Networks
Zaixin Zhang, Qi Liu, Zhenya Huang, Hao Wang, Chee-Kong Lee, Enhong Chen
AAML
16 Sep 2022

Task and Model Agnostic Adversarial Attack on Graph Neural Networks
Kartik Sharma, S. Verma, Sourav Medya, Arnab Bhattacharya, Sayan Ranu
AAML
25 Dec 2021

A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu
AAML
21 Aug 2021

Poisoning and Backdooring Contrastive Learning
Nicholas Carlini, Andreas Terzis
17 Jun 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML
04 May 2021

Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers
Adriano Franci, Maxime Cordy, Martin Gubri, Mike Papadakis, Yves Le Traon
AAML
14 Dec 2020

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
Xiang Zhang, Marinka Zitnik
AAML
15 Jun 2020