Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
arXiv:1712.05526 · 15 December 2017
Xinyun Chen, Chang-rui Liu, Bo-wen Li, Kimberly Lu, D. Song
AAML · SILM
Papers citing "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning" (50 of 337 shown)
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović · AAML, OffRL · 27 Feb 2023

Defending Against Backdoor Attacks by Layer-wise Feature Analysis
N. Jebreel, J. Domingo-Ferrer, Yiming Li · AAML · 24 Feb 2023

Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity
Khatoon Mohammed · AAML · 24 Feb 2023

Detecting software vulnerabilities using Language Models
Marwan Omar · 23 Feb 2023

ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms
Minzhou Pan, Yi Zeng, Lingjuan Lyu, X. Lin, R. Jia · AAML · 22 Feb 2023

A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy
Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King · FedML · 21 Feb 2023

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr · SILM · 20 Feb 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu · AAML · 19 Feb 2023

RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks
Marwan Omar · SILM, AAML · 18 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar · SILM, AAML · 14 Feb 2023

On Function-Coupled Watermarks for Deep Neural Networks
Xiangyu Wen, Yu Li, Weizhen Jiang, Qian-Lan Xu · AAML · 08 Feb 2023

TextShield: Beyond Successfully Detecting Adversarial Sentences in Text Classification
Lingfeng Shen, Ze Zhang, Haiyun Jiang, Ying Chen · AAML · 03 Feb 2023

SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification
Gorka Abad, Jing Xu, Stefanos Koffas, Behrad Tajalli, S. Picek, Mauro Conti · AAML · 03 Feb 2023

Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks
Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, Minhao Cheng · FedML · 03 Feb 2023

Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
Xiaoyun Xu, Oguzhan Ersoy, S. Picek · AAML · 01 Feb 2023

Towards Understanding How Self-training Tolerates Data Backdoor Poisoning
Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu · 20 Jan 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Shuyang Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, ..., Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang · AAML · 16 Jan 2023

FedDebug: Systematic Debugging for Federated Learning Applications
Waris Gill, A. Anwar, Muhammad Ali Gulzar · FedML · 09 Jan 2023

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David E. Evans, B. Zorn, Robert Sim · SILM · 06 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang · DD · 03 Jan 2023

Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition
Hasan Hammoud, Shuming Liu, Mohammad Alkhrashi, Fahad Albalawi, Guohao Li · AAML · 03 Jan 2023

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang, Xingjun Ma, Qiaomin Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu · 31 Dec 2022

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei · AAML, FedML · 28 Dec 2022

AI Security for Geoscience and Remote Sensing: Challenges and Future Trends
Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, P. M. Atkinson, Pedram Ghamisi · AAML · 19 Dec 2022

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang · AAML · 18 Dec 2022

FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang, Jia-Wei Liu, Michinari Momma, Yi Sun · 13 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong · SSL · 06 Dec 2022

Membership Inference Attacks Against Semantic Segmentation Models
Tomás Chobola, Dmitrii Usynin, Georgios Kaissis · MIACV · 02 Dec 2022

Federated Learning Attacks and Defenses: A Survey
Yao Chen, Yijie Gui, Hong Lin, Wensheng Gan, Yongdong Wu · FedML · 27 Nov 2022

Backdoor Cleansing with Unlabeled Data
Lu Pang, Tao Sun, Haibin Ling, Chao Chen · AAML · 22 Nov 2022

ESTAS: Effective and Stable Trojan Attacks in Self-supervised Encoders with One Target Unlabelled Sample
Jiaqi Xue, Qiang Lou · AAML · 20 Nov 2022

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong · AAML · 15 Nov 2022

Backdoor Attacks for Remote Sensing Data with Wavelet Transform
Nikolaus Drager, Yonghao Xu, Pedram Ghamisi · AAML · 15 Nov 2022

Robust Smart Home Face Recognition under Starving Federated Data
Jaechul Roh, Yajun Fang · FedML, CVBM, AAML · 10 Nov 2022

Fairness-aware Regression Robust to Adversarial Attacks
Yulu Jin, Lifeng Lai · FaML, OOD · 04 Nov 2022

Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
Lukas Struppek, Dominik Hintersdorf, Kristian Kersting · SILM · 04 Nov 2022

AdaChain: A Learned Adaptive Blockchain
Chenyuan Wu, Bhavana Mehta, Mohammad Javad Amiri, Ryan Marcus, B. T. Loo · 03 Nov 2022

Dormant Neural Trojans
Feisi Fu, Panagiota Kiourti, Wenchao Li · AAML · 02 Nov 2022

BATT: Backdoor Attack with Transformation-based Triggers
Tong Xu, Yiming Li, Yong Jiang, Shutao Xia · AAML · 02 Nov 2022

The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning
Virat Shejwalkar, Lingjuan Lyu, Amir Houmansadr · AAML · 01 Nov 2022

Robustness of Locally Differentially Private Graph Analysis Against Poisoning
Jacob Imola, A. Chowdhury, Kamalika Chaudhuri · AAML · 25 Oct 2022

Training set cleansing of backdoor poisoning by self-supervised representation learning
H. Wang, Soroush Karami, Ousmane Amadou Dia, H. Ritter, E. Emamjomeh-Zadeh, J. Chen, Zhen Xiang, D. J. Miller, G. Kesidis · SSL · 19 Oct 2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Khoa D. Doan, Yingjie Lao, Ping Li · 17 Oct 2022

How to Sift Out a Clean Data Subset in the Presence of Data Poisoning?
Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, Ming Jin, Lingjuan Lyu, R. Jia · AAML · 12 Oct 2022

Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Di Tang, Rui Zhu, Xiaofeng Wang, Haixu Tang, Yi Chen · AAML · 12 Oct 2022

Augmentation Backdoors
J. Rance, Yiren Zhao, Ilia Shumailov, Robert D. Mullins · AAML, SILM · 29 Sep 2022

On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach
M. Anisetti, C. Ardagna, Alessandro Balestrucci, Nicola Bena, Ernesto Damiani, C. Yeun · AAML, OOD · 28 Sep 2022

Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li · AAML · 27 Sep 2022

CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks
Xuanli He, Qiongkai Xu, Yi Zeng, Lingjuan Lyu, Fangzhao Wu, Jiwei Li, R. Jia · WaLM · 19 Sep 2022

Federated Learning based on Defending Against Data Poisoning Attacks in IoT
Jiayin Li, Wenzhong Guo, Xingshuo Han, Jianping Cai, Ximeng Liu · AAML · 14 Sep 2022