arXiv:1905.05897
Cited By
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
15 May 2019
Chen Zhu, W. Ronny Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein

Papers citing "Transferable Clean-Label Poisoning Attacks on Deep Neural Nets"
18 of 68 papers shown

IdentityDP: Differential Private Identification Protection for Face Images
Yunqian Wen, Li Song, Bo Liu, Ming Ding, Rong Xie · PICV · 02 Mar 2021

Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
Liam H. Fowl, Ping-yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, Wojciech Czaja, Tom Goldstein · 16 Feb 2021

Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Siyuan Cheng, Shiqing Ma, Xiangyu Zhang · AAML · 09 Feb 2021

Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
Siyuan Cheng, Yingqi Liu, Shiqing Ma, Xiangyu Zhang · AAML · 21 Dec 2020

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein · SILM · 18 Dec 2020

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Xiapu Luo, Ting Wang · AAML · 16 Dec 2020

Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong · AAML · 07 Dec 2020

How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm · OOD, AAML · 02 Dec 2020

Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems
Haoliang Li, Yufei Wang, Xiaofei Xie, Yang Liu, Shiqi Wang, Renjie Wan, Lap-Pui Chau · AAML · 15 Sep 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein · AAML · 04 Sep 2020

Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Meng Wang · AAML · 31 Jul 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim · AAML · 21 Jul 2020

Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu · FedML · 16 Jul 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein · AAML, TDI · 22 Jun 2020

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
Hojjat Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna · AAML · 01 May 2020

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot · AAML · 26 Feb 2020

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash · 30 Sep 2019

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio · SILM, AAML · 08 Jul 2016