arXiv:1712.05526

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song
15 December 2017 (AAML, SILM)

Papers citing "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning"
50 / 337 papers shown
Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks
Mingfu Xue, S. Ni, Ying-Chang Wu, Yushu Zhang, Jian Wang, Weiqiang Liu
31 Jan 2022 (AAML)

Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta, N. Shadbolt
28 Jan 2022 (AAML)

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
25 Jan 2022 (TDI)

Hiding Behind Backdoors: Self-Obfuscation Against Generative Models
Siddhartha Datta, N. Shadbolt
24 Jan 2022 (SILM, AAML, AI4CE)

FedComm: Federated Learning as a Medium for Covert Communication
Dorjan Hitaj, Giulio Pagnotta, Briland Hitaj, Fernando Perez-Cruz, L. Mancini
21 Jan 2022 (FedML)

Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World
Hua Ma, Yinshan Li, Yansong Gao, A. Abuadbba, Zhi-Li Zhang, Anmin Fu, Hyoungshick Kim, S. Al-Sarawi, N. Surya, Derek Abbott
21 Jan 2022

Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios
Zhen Xiang, David J. Miller, G. Kesidis
20 Jan 2022 (AAML)
Towards Adversarial Evaluations for Inexact Machine Unlearning
Shashwat Goel, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, Ponnurangam Kumaraguru
17 Jan 2022 (AAML, ELM, MU)

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
Ashwinee Panda, Saeed Mahloujifar, A. Bhagoji, Supriyo Chakraborty, Prateek Mittal
12 Dec 2021 (FedML, AAML)

SoK: Anti-Facial Recognition Technology
Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao
08 Dec 2021 (PICV)

SHAPr: An Efficient and Versatile Membership Privacy Risk Metric for Machine Learning
Vasisht Duddu, S. Szyller, Nadarajah Asokan
04 Dec 2021

FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis
Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong-quan Xia, Dacheng Tao
02 Dec 2021 (AAML)

Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou
01 Dec 2021 (AAML)

The Impact of Data Distribution on Fairness and Robustness in Federated Learning
Mustafa Safa Ozdayi, Murat Kantarcioglu
29 Nov 2021 (FedML, OOD)

Going Grayscale: The Road to Understanding and Improving Unlearnable Examples
Zhuoran Liu, Zhengyu Zhao, A. Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson
25 Nov 2021 (AAML)
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu
25 Nov 2021 (AAML)

Backdoor Attack through Frequency Domain
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang
22 Nov 2021 (AAML)

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems
Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, D. Ranasinghe
19 Nov 2021 (AAML)

Revisiting Methods for Finding Influential Examples
Karthikeyan K, Anders Søgaard
08 Nov 2021 (TDI)

LTD: Low Temperature Distillation for Robust Adversarial Training
Erh-Chung Chen, Che-Rung Lee
03 Nov 2021 (AAML)

AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
Junfeng Guo, Ang Li, Cong Liu
28 Oct 2021 (AAML)

Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
27 Oct 2021 (AAML)

Semantic Host-free Trojan Attack
Haripriya Harikumar, Kien Do, Santu Rana, Sunil R. Gupta, Svetha Venkatesh
26 Oct 2021

Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Xingjun Ma
22 Oct 2021 (OnRL)
Detecting Backdoor Attacks Against Point Cloud Classifiers
Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, G. Kesidis
20 Oct 2021 (3DPC, AAML)

Multi-Trigger-Key: Towards Multi-Task Privacy Preserving In Deep Learning
Ren Wang, Zhe Xu, Alfred Hero
06 Oct 2021

BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models
Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan
06 Oct 2021 (SILM)

Enhancing Model Robustness and Fairness with Causality: A Regularization Approach
Zhao Wang, Kai Shu, A. Culotta
03 Oct 2021 (OOD)

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
J. Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano
23 Sep 2021 (SILM, AAML)

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
20 Sep 2021

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld
17 Sep 2021 (MU, AAML)

Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud, Guohao Li
12 Sep 2021 (AAML)

Adversarial Parameter Defense by Multi-Step Risk Minimization
Zhiyuan Zhang, Ruixuan Luo, Xuancheng Ren, Qi Su, Liangyou Li, Xu Sun
07 Sep 2021 (AAML)
How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, Xu Sun
03 Sep 2021

Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu
31 Aug 2021

Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma, Huming Qiu, Yansong Gao, Zhi-Li Zhang, A. Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, S. Al-Sarawi, Derek Abbott
20 Aug 2021 (MQ)

TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
Esha Sarkar, Michail Maniatakos
14 Aug 2021

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
01 Aug 2021 (SILM, SSL)

Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
Stefanos Koffas, Jing Xu, Mauro Conti, S. Picek
30 Jul 2021 (AAML)

Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Jihun Hamm
08 Jul 2021 (AAML)

Data Poisoning Won't Save You From Facial Recognition
Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr
28 Jun 2021 (AAML, PICV)
Adversarial Examples Make Strong Poisons
Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
21 Jun 2021 (SILM)

Poisoning and Backdooring Contrastive Learning
Nicholas Carlini, Andreas Terzis
17 Jun 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
16 Jun 2021 (SILM)

Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks
Byunggill Joe, Akshay Mehra, I. Shin, Jihun Hamm
15 Jun 2021

Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
C. Ashcraft, Kiran Karra
14 Jun 2021

Topological Detection of Trojaned Neural Networks
Songzhu Zheng, Yikai Zhang, H. Wagner, Mayank Goswami, Chao Chen
11 Jun 2021 (AAML)

Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution
Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, Maosong Sun
11 Jun 2021 (SILM)

Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang
03 Jun 2021 (AAML, SILM)

A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Xi Li, David J. Miller, Zhen Xiang, G. Kesidis
28 May 2021 (AAML)