Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
arXiv 2407.10867 · 15 July 2024
Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann. (AAML)
Papers citing "Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks" (23 papers)
1. Certified Robustness to Clean-Label Poisoning Using Diffusion Denoising. Sanghyun Hong, Nicholas Carlini, Alexey Kurakin. 18 Mar 2024. (DiffM)
2. Revisiting Robustness in Graph Machine Learning. Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann. 01 May 2023. (AAML, OOD)
3. Run-Off Election: Improved Provable Defense against Data Poisoning Attacks. Keivan Rezaei, Kiarash Banihashem, Atoosa Malemir Chegini, Soheil Feizi. 05 Feb 2023. (AAML)
4. Certifying Robustness to Programmable Data Bias in Decision Trees. Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni. 08 Oct 2021.
5. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses. Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein. 18 Dec 2020. (SILM)
6. Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks. Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong. 07 Dec 2020. (AAML)
7. On the linearity of large non-linear models: when and why the tangent kernel is constant. Chaoyue Liu, Libin Zhu, M. Belkin. 02 Oct 2020.
8. Simple and Deep Graph Convolutional Networks. Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, Yaliang Li. 04 Jul 2020. (GNN)
9. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. Francesco Croce, Matthias Hein. 03 Mar 2020. (AAML)
10. Spam Review Detection with Graph Convolutional Networks. Ao Li, Zhou Qin, Runshi Liu, Yiqun Yang, Dong Li. 22 Aug 2019. (GNN)
11. Certifiable Robustness and Robust Training for Graph Convolutional Networks. Daniel Zügner, Stephan Günnemann. 28 Jun 2019. (OffRL)
12. Adversarial Attacks on Graph Neural Networks via Meta Learning. Daniel Zügner, Stephan Günnemann. 22 Feb 2019. (OOD, AAML, GNN)
13. Certified Adversarial Robustness via Randomized Smoothing. Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter. 08 Feb 2019. (AAML)
14. Predict then Propagate: Graph Neural Networks meet Personalized PageRank. Johannes Klicpera, Aleksandar Bojchevski, Stephan Günnemann. 14 Oct 2018. (GNN)
15. How Powerful are Graph Neural Networks? Keyulu Xu, Weihua Hu, J. Leskovec, Stefanie Jegelka. 01 Oct 2018. (GNN)
16. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. Matthew Jagielski, Alina Oprea, Battista Biggio, Chang-rui Liu, Cristina Nita-Rotaru, Yue Liu. 01 Apr 2018. (AAML)
17. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli. 29 Aug 2017. (AAML)
18. Certified Defenses for Data Poisoning Attacks. Jacob Steinhardt, Pang Wei Koh, Percy Liang. 09 Jun 2017. (AAML)
19. Inductive Representation Learning on Large Graphs. William L. Hamilton, Z. Ying, J. Leskovec. 07 Jun 2017.
20. Understanding Black-box Predictions via Influence Functions. Pang Wei Koh, Percy Liang. 14 Mar 2017. (TDI)
21. OptNet: Differentiable Optimization as a Layer in Neural Networks. Brandon Amos, J. Zico Kolter. 01 Mar 2017.
22. Semi-Supervised Classification with Graph Convolutional Networks. Thomas Kipf, Max Welling. 09 Sep 2016. (GNN, SSL)
23. Poisoning Attacks against Support Vector Machines. Battista Biggio, B. Nelson, Pavel Laskov. 27 Jun 2012. (AAML)