RAB: Provable Robustness Against Backdoor Attacks
arXiv:2003.08904. 19 March 2020.
Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, Bo-wen Li. [AAML]

Papers citing "RAB: Provable Robustness Against Backdoor Attacks" (33 papers):
- Cert-SSB: Toward Certified Sample-Specific Backdoor Defense. Ting Qiao, Yansen Wang, Xing Liu, Sixing Wu, Jianbing Li, Yiming Li. 30 Apr 2025. [AAML, SILM]
- Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging. Rui Luo, Jie Bao, Zhixin Zhou, Chuangyin Dang. 07 Nov 2024. [MedIm, AAML]
- Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents. Hanrong Zhang, Jingyuan Huang, Kai Mei, Yifei Yao, Zhenting Wang, Chenlu Zhan, Hongwei Wang, Yongfeng Zhang. 03 Oct 2024. [AAML, LLMAG, ELM]
- PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models. Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie. 28 May 2024. [AAML]
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks. Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang. 14 Dec 2023. [AAML]
- SoK: Unintended Interactions among Machine Learning Defenses and Risks. Vasisht Duddu, S. Szyller, Nadarajah Asokan. 07 Dec 2023. [AAML]
- Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks. Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein. 15 Aug 2023. [AAML]
- Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering. Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, Xiaofeng Wang, Haixu Tang. 29 Jan 2023. [DD]
- BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing. Jiali Wei, Ming Fan, Wenjing Jiao, Wuxia Jin, Ting Liu. 25 Jan 2023. [AAML]
- On Optimal Learning Under Targeted Data Poisoning. Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran. 06 Oct 2022. [AAML, FedML]
- Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks. Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li. 08 Sep 2022. [FedML]
- Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks. Tianwei Liu, Yu Yang, Baharan Mirzasoleiman. 14 Aug 2022. [AAML]
- COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks. Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, B. Kailkhura, K. Kenthapadi, Ding Zhao, Bo-wen Li. 16 Mar 2022. [AAML, OffRL]
- On the Effectiveness of Adversarial Training against Backdoor Attacks. Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama. 22 Feb 2022. [AAML]
- Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers. Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang. 11 Feb 2022. [AAML]
- Backdoor Defense via Decoupling the Training Process. Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren. 05 Feb 2022. [AAML, FedML]
- Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation. Zayd Hammoudeh, Daniel Lowd. 25 Jan 2022. [TDI]
- Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey. Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu. 19 Dec 2021. [FedML]
- SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. Ashwinee Panda, Saeed Mahloujifar, A. Bhagoji, Supriyo Chakraborty, Prateek Mittal. 12 Dec 2021. [FedML, AAML]
- A General Framework for Defending Against Backdoor Attacks via Influence Graph. Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan. 29 Nov 2021. [AAML, TDI]
- Backdoor Pre-trained Models Can Transfer to All. Lujia Shen, S. Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang. 30 Oct 2021. [AAML, SILM]
- Adversarial Neuron Pruning Purifies Backdoored Deep Models. Dongxian Wu, Yisen Wang. 27 Oct 2021. [AAML]
- SoK: Machine Learning Governance. Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot. 20 Sep 2021.
- BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong. 01 Aug 2021. [SILM, SSL]
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. Chulin Xie, Minghao Chen, Pin-Yu Chen, Bo-wen Li. 15 Jun 2021. [FedML]
- SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics. J. Hayase, Weihao Kong, Raghav Somani, Sewoong Oh. 22 Apr 2021. [AAML]
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses. Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein. 18 Dec 2020. [SILM]
- TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors. Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang. 16 Dec 2020. [AAML]
- Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing. Maurice Weber, Nana Liu, Bo-wen Li, Ce Zhang, Zhikuan Zhao. 21 Sep 2020. [AAML]
- SoK: Certified Robustness for Deep Neural Networks. Linyi Li, Tao Xie, Bo-wen Li. 09 Sep 2020. [AAML]
- Backdoor Learning: A Survey. Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia. 17 Jul 2020. [AAML]
- Backdoor Attacks to Graph Neural Networks. Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong. 19 Jun 2020. [GNN]
- SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems. Edward Chou, Florian Tramèr, Giancarlo Pellegrino. 02 Dec 2018. [AAML]