Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
arXiv:1804.00792 · 3 April 2018
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML
Papers citing "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks"
50 of 260 citing papers shown
Amplification trojan network: Attack deep neural networks by amplifying their inherent weakness
Zhan Hu, Jun Zhu, Bo Zhang, Xiaolin Hu
AAML · 28 May 2023

Re-thinking Data Availablity Attacks Against Deep Neural Networks
Bin Fang, Bo Li, Shuang Wu, Ran Yi, Shouhong Ding, Lizhuang Ma
AAML · 18 May 2023
Evil from Within: Machine Learning Backdoors through Hardware Trojans
Alexander Warnecke, Julian Speith, Janka Möller, Konrad Rieck, C. Paar
AAML · 17 Apr 2023

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, L Tan
AAML · 04 Apr 2023

Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang
30 Mar 2023
A Survey on Secure and Private Federated Learning Using Blockchain: Theory and Application in Resource-constrained Computing
Ervin Moore, Ahmed Imteaj, S. Rezapour, M. Amini
24 Mar 2023

AdaptGuard: Defending Against Universal Attacks for Model Adaptation
Lijun Sheng, Jian Liang, Ran He, Zilei Wang, Tien-Ping Tan
AAML · 19 Mar 2023

It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness
Peiyu Xiong, Michael W. Tegegn, Jaskeerat Singh Sarin, Shubhraneel Pal, Julia Rubin
SILM, AAML · 17 Mar 2023
CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi
MU · 07 Mar 2023

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
SILM · 20 Feb 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu
AAML · 19 Feb 2023
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML · 14 Feb 2023

Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
Xiaoyun Xu, Oguzhan Ersoy, S. Picek
AAML · 01 Feb 2023

Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering
Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, Xiaofeng Wang, Haixu Tang
DD · 29 Jan 2023
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, B. Zorn, Robert Sim
SILM · 06 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
DD · 03 Jan 2023

Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector
Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam
AAML · 03 Jan 2023
Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang, Xingjun Ma, Qiaomin Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu
31 Dec 2022

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML, FedML · 28 Dec 2022

Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari
MU · 21 Dec 2022
Learned Systems Security
R. Schuster, Jinyi Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot
AAML · 20 Dec 2022

FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang, Jia-Wei Liu, Michinari Momma, Yi Sun
13 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL · 06 Dec 2022

Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Marissa Connor, Vincent Emanuele
SILM, AAML · 05 Dec 2022
ConfounderGAN: Protecting Image Data Privacy with Causal Confounder
Qi Tian, Kun Kuang, Ke Jiang, Furui Liu, Zhihua Wang, Fei Wu
04 Dec 2022

Membership Inference Attacks Against Semantic Segmentation Models
Tomás Chobola, Dmitrii Usynin, Georgios Kaissis
MIACV · 02 Dec 2022

Backdoor Cleansing with Unlabeled Data
Lu Pang, Tao Sun, Haibin Ling, Chao Chen
AAML · 22 Nov 2022

Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems
Alessio Russo
AAML · 16 Nov 2022
Fairness-aware Regression Robust to Adversarial Attacks
Yulu Jin, Lifeng Lai
FaML, OOD · 04 Nov 2022

Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
SILM · 04 Nov 2022

Dormant Neural Trojans
Feisi Fu, Panagiota Kiourti, Wenchao Li
AAML · 02 Nov 2022

Generative Poisoning Using Random Discriminators
Dirren van Vlijmen, A. Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson
02 Nov 2022
Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses
Adnan Qayyum, M. A. Butt, Hassan Ali, Muhammad Usman, O. Halabi, Ala I. Al-Fuqaha, Q. Abbasi, Muhammad Ali Imran, Junaid Qadir
24 Oct 2022

New data poison attacks on machine learning classifiers for mobile exfiltration
M. A. Ramírez, Sangyoung Yoon, Ernesto Damiani, H. A. Hamadi, C. Ardagna, Nicola Bena, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
AAML · 20 Oct 2022

Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu
AAML · 19 Oct 2022
Towards Generating Adversarial Examples on Mixed-type Data
Han Xu, Menghai Pan, Zhimeng Jiang, Huiyuan Chen, Xiaoting Li, Mahashweta Das, Hao Yang
AAML, SILM · 17 Oct 2022

Probabilistic Categorical Adversarial Attack & Adversarial Training
Han Xu, Pengfei He, Jie Ren, Yuxuan Wan, Zitao Liu, Hui Liu, Jiliang Tang
AAML, SILM · 17 Oct 2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Khoa D. Doan, Yingjie Lao, Ping Li
17 Oct 2022
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning
Changjiang Li, Ren Pang, Zhaohan Xi, Tianyu Du, S. Ji, Yuan Yao, Ting Wang
AAML · 13 Oct 2022

How to Sift Out a Clean Data Subset in the Presence of Data Poisoning?
Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, Ming Jin, Lingjuan Lyu, R. Jia
AAML · 12 Oct 2022

On Optimal Learning Under Targeted Data Poisoning
Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran
AAML, FedML · 06 Oct 2022
On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach
M. Anisetti, C. Ardagna, Alessandro Balestrucci, Nicola Bena, Ernesto Damiani, C. Yeun
AAML, OOD · 28 Sep 2022

Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li
AAML · 27 Sep 2022

An Adaptive Black-box Defense against Trojan Attacks (TrojDef)
Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, Issa M. Khalil
AAML · 05 Sep 2022
Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
29 Aug 2022

Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection
Simin Li, Huangxinxin Xu, Jiakai Wang, Aishan Liu, Fazhi He, Xianglong Liu, Dacheng Tao
AAML · 23 Aug 2022

RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN
Huy Phan, Cong Shi, Yi Xie, Tian-Di Zhang, Zhuohang Li, Tianming Zhao, Jian-Dong Liu, Yan Wang, Ying-Cong Chen, Bo Yuan
AAML · 22 Aug 2022
An anomaly detection approach for backdoored neural networks: face recognition as a case study
A. Unnervik, Sébastien Marcel
AAML · 22 Aug 2022

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML · 14 Aug 2022

Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons
Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, Wenzhong Guo
AAML · 13 Aug 2022