Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses (18 December 2020) [SILM]
arXiv: 2012.10544
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
Papers citing "Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses" (50 of 148 papers shown):
Towards Stable Backdoor Purification through Feature Shift Tuning (03 Oct 2023) [AAML]
Rui Min, Zeyu Qin, Li Shen, Minhao Cheng

Safe and Robust Watermark Injection with a Single OoD Image (04 Sep 2023) [WIGM]
Shuyang Yu, Junyuan Hong, Haobo Zhang, Haotao Wang, Zhangyang Wang, Jiayu Zhou

BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection (23 Aug 2023) [AAML]
Tinghao Xie, Xiangyu Qi, Ping He, Yiming Li, Jiachen T. Wang, Prateek Mittal

Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks (15 Aug 2023) [AAML]
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein

Breaking Speaker Recognition with PaddingBack (08 Aug 2023) [AAML]
Zhe Ye, Diqun Yan, Li Dong, Kailai Shen

APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses (07 Aug 2023) [AAML]
Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Chengjie Xu

Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks (04 Aug 2023) [SILM]
Domenico Cotroneo, Cristina Improta, Pietro Liguori, R. Natella
Rethinking Backdoor Attacks (19 Jul 2023) [SILM]
Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, A. Madry

The Full Landscape of Robust Mean Testing: Sharp Separations between Oblivious and Adaptive Contamination (18 Jul 2023) [AAML]
C. Canonne, Samuel B. Hopkins, Jungshian Li, Allen Liu, Shyam Narayanan

Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound (17 Jul 2023) [AAML]
Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, Yiming Li

Performance Scaling via Optimal Transport: Enabling Data Selection from Partially Revealed Sources (05 Jul 2023)
Feiyang Kang, H. Just, Anit Kumar Sahu, R. Jia

On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks (28 Jun 2023) [AAML]
Wenxiao Wang, S. Feizi

Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion (28 Jun 2023) [AAML]
Zhe Ye, Terui Mao, Li Dong, Diqun Yan

A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking (21 Jun 2023) [AAML]
Shaohui Mei, Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Lap-Pui Chau
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios (14 Jun 2023) [AAML]
Ziqiang Li, Hong Sun, Pengfei Xia, Heng Li, Beihao Xia, Yi Wu, Bin Li

A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks (14 Jun 2023) [AAML]
Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li

Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework (11 Jun 2023) [AAML]
Minglei Yin, Bing Liu, Neil Zhenqiang Gong, Xin Li

Detecting Neural Trojans Through Merkle Trees (08 Jun 2023)
Joshua Strubel

Don't trust your eyes: on the (un)reliability of feature visualizations (07 Jun 2023) [FAtt, OOD]
Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim

Exploring Model Dynamics for Accumulative Poisoning Discovery (06 Jun 2023) [AAML]
Jianing Zhu, Xiawei Guo, Jiangchao Yao, Chao Du, Li He, Shuo Yuan, Tongliang Liu, Liang Wang, Bo Han
Sharpness-Aware Data Poisoning Attack (24 May 2023) [AAML]
Pengfei He, Han Xu, J. Ren, Yingqian Cui, Hui Liu, Charu C. Aggarwal, Jiliang Tang

Attacks on Online Learners: a Teacher-Student Analysis (18 May 2023) [AAML]
R. Margiotta, Sebastian Goldt, G. Sanguinetti

Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models (18 May 2023) [DiffM]
Yihao Huang, Felix Juefei Xu, Qing-Wu Guo, Jie M. Zhang, Yutong Wu, Ming Hu, Tianlin Li, Geguang Pu, Yang Liu

Diffusion Models for Imperceptible and Transferable Adversarial Attack (14 May 2023) [DiffM]
Jianqi Chen, H. Chen, Keyan Chen, Yilan Zhang, Zhengxia Zou, Z. Shi

Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy (10 May 2023) [MU]
T. Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Xiaofeng Zhu, Qingyuan Li

A Universal Identity Backdoor Attack against Speaker Verification based on Siamese Network (28 Mar 2023) [AAML]
Haodong Zhao, Wei Du, Junjie Guo, Gongshen Liu

Boundary Unlearning (21 Mar 2023) [MU]
Min Chen, Weizhuo Gao, Gaoyang Liu, Kai Peng, Chen Wang
It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness (17 Mar 2023) [SILM, AAML]
Peiyu Xiong, Michael W. Tegegn, Jaskeerat Singh Sarin, Shubhraneel Pal, Julia Rubin

The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models (15 Mar 2023) [DiffM]
H. M. Dolatabadi, S. Erfani, C. Leckie

Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks (07 Mar 2023) [AAML]
Yiwei Lu, Gautam Kamath, Yaoliang Yu

An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry (05 Mar 2023)
Wenxin Jiang, Nicholas Synovic, Matt Hyatt, Taylor R. Schorlemmer, R. Sethi, Yung-Hsiang Lu, George K. Thiruvathukal, James C. Davis

Certified Robust Neural Networks: Generalization and Corruption Resistance (03 Mar 2023)
Amine Bennouna, Ryan Lucas, Bart P. G. Van Parys

Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions (03 Mar 2023) [AAML, FedML]
Thuy-Dung Nguyen, Tuan Nguyen, Phi Le Nguyen, Hieu H. Pham, Khoa D. Doan, Kok-Seng Wong

Towards Audit Requirements for AI-based Systems in Mobility Applications (27 Feb 2023)
Devi Alagarswamy, Christian Berghoff, Vasilios Danos, Fabian Langer, Thora Markert, Georg Schneider, Arndt von Twickel, Fabian Woitschek
Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? (24 Feb 2023) [AAML, OOD]
Ruisi Cai, Zhenyu (Allen) Zhang, Zhangyang Wang

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions (14 Feb 2023) [SILM, AAML]
Marwan Omar

Temporal Robustness against Data Poisoning (07 Feb 2023) [AAML, OOD]
Wenxiao Wang, S. Feizi

SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency (07 Feb 2023) [AAML, MLAU]
Junfeng Guo, Yiming Li, Xun Chen, Hanqing Guo, Lichao Sun, Cong Liu

Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks (03 Feb 2023) [FedML]
Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, Minhao Cheng

BackdoorBox: A Python Toolbox for Backdoor Learning (01 Feb 2023) [AAML]
Yiming Li, Mengxi Ya, Yang Bai, Yong Jiang, Shutao Xia

Towards Understanding How Self-training Tolerates Data Backdoor Poisoning (20 Jan 2023)
Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu
Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack (05 Jan 2023) [AAML]
Tzvi Lederer, Gallil Maimon, Lior Rokach

Backdoor Attacks Against Dataset Distillation (03 Jan 2023) [DD]
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang

Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks (21 Dec 2022) [MU]
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari

Backdoor Vulnerabilities in Normally Trained Deep Learning Models (29 Nov 2022) [SILM]
Guanhong Tao, Zhenting Wang, Shuyang Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang

Deep Fake Detection, Deterrence and Response: Challenges and Opportunities (26 Nov 2022)
Amin Azmoodeh, Ali Dehghantanha

Rank-One Editing of Encoder-Decoder Models (23 Nov 2022) [KELM]
Vikas Raunak, Arul Menezes

A Survey on Backdoor Attack and Defense in Natural Language Processing (22 Nov 2022) [SILM]
Xuan Sheng, Zhaoyang Han, Piji Li, Xiangmao Chang

Untargeted Backdoor Attack against Object Detection (02 Nov 2022) [AAML]
C. Luo, Yiming Li, Yong Jiang, Shutao Xia

Generative Poisoning Using Random Discriminators (02 Nov 2022)
Dirren van Vlijmen, A. Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson