BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain [SILM]
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
arXiv:1708.06733, 22 August 2017
Papers citing "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain" (50 / 381 papers shown)
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information [AAML] (11 Apr 2022)
Yi Zeng, Minzhou Pan, H. Just, Lingjuan Lyu, M. Qiu, R. Jia

WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice [AAML] (25 Mar 2022)
Yunjie Ge, Qianqian Wang, Jingfeng Zhang, Juntao Zhou, Yunzhu Zhang, Chao Shen

Energy-Latency Attacks via Sponge Poisoning [SILM] (14 Mar 2022)
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks [AAML] (07 Mar 2022)
Siddhartha Datta, N. Shadbolt

Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving [AAML] (02 Mar 2022)
Xingshuo Han, Guowen Xu, Yuanpu Zhou, Xuehuan Yang, Jiwei Li, Tianwei Zhang

On the Effectiveness of Adversarial Training against Backdoor Attacks [AAML] (22 Feb 2022)
Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [AAML] (21 Feb 2022)
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun
Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense [AAML] (11 Feb 2022)
Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xinming Zhang
Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers [AAML] (11 Feb 2022)
Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang

Identifying Backdoor Attacks in Federated Learning via Anomaly Detection [AAML, FedML] (09 Feb 2022)
Yuxi Mi, Yiheng Sun, Jihong Guan, Shuigeng Zhou

Datamodels: Predicting Predictions from Training Data [TDI] (01 Feb 2022)
Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, A. Madry

Can Adversarial Training Be Manipulated By Non-Robust Features? [AAML] (31 Jan 2022)
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen

Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire [AAML] (28 Jan 2022)
Siddhartha Datta, N. Shadbolt

Backdoor Defense with Machine Unlearning [AAML] (24 Jan 2022)
Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma

Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World (21 Jan 2022)
Hua Ma, Yinshan Li, Yansong Gao, A. Abuadbba, Zhi-Li Zhang, Anmin Fu, Hyoungshick Kim, S. Al-Sarawi, N. Surya, Derek Abbott
How to Backdoor HyperNetwork in Personalized Federated Learning? [AAML, FedML] (18 Jan 2022)
Phung Lai, Nhathai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu

Neighboring Backdoor Attacks on Graph Convolutional Network [GNN, AAML] (17 Jan 2022)
Liang Chen, Qibiao Peng, Jintang Li, Yang Liu, Jiawei Chen, Yong Li, Zibin Zheng

LoMar: A Local Defense Against Poisoning Attack on Federated Learning [AAML] (08 Jan 2022)
Xingyu Li, Zhe Qu, Shangqing Zhao, Bo Tang, Zhuo Lu, Yao-Hong Liu

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey [FedML] (19 Dec 2021)
Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu

Batch Label Inference and Replacement Attacks in Black-Boxed Vertical Federated Learning [FedML, AAML] (10 Dec 2021)
Yang Liu, Tianyuan Zou, Yan Kang, Wenhan Liu, Yuanqin He, Zhi-qian Yi, Qian Yang

SoK: Anti-Facial Recognition Technology [PICV] (08 Dec 2021)
Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis [AAML] (02 Dec 2021)
Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong-quan Xia, Dacheng Tao

A General Framework for Defending Against Backdoor Attacks via Influence Graph [AAML, TDI] (29 Nov 2021)
Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan

Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks [AAML] (25 Nov 2021)
Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu

Backdoor Attack through Frequency Domain [AAML] (22 Nov 2021)
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems [AAML] (19 Nov 2021)
Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe

Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation [AAML] (18 Nov 2021)
Mehdi Sadi, B. M. S. Bahar Talukder, Kaniz Mishty, Md. Tauhidur Rahman
Triggerless Backdoor Attack for NLP Tasks with Clean Labels [AAML, SILM] (15 Nov 2021)
Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan

Lightweight machine unlearning in neural network [MU] (10 Nov 2021)
Kongyang Chen, Yiwen Wang, Yao Huang

Revisiting Methods for Finding Influential Examples [TDI] (08 Nov 2021)
Karthikeyan K, Anders Søgaard

Get a Model! Model Hijacking Attack Against Machine Learning Models [AAML] (08 Nov 2021)
A. Salem, Michael Backes, Yang Zhang

Explainable Artificial Intelligence for Smart City Application: A Secure and Trusted Platform (31 Oct 2021)
M. Kabir, Khondokar Fida Hasan, M Zahid Hasan, Keyvan Ansari

Backdoor Pre-trained Models Can Transfer to All [AAML, SILM] (30 Oct 2021)
Lujia Shen, S. Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang

AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis [AAML] (28 Oct 2021)
Junfeng Guo, Ang Li, Cong Liu

Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes [MQ] (26 Oct 2021)
Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, Tudor Dumitras
Semantic Host-free Trojan Attack (26 Oct 2021)
Haripriya Harikumar, Kien Do, Santu Rana, Sunil R. Gupta, Svetha Venkatesh

CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning (25 Oct 2021)
Zhensu Sun, Xiaoning Du, Fu Song, Mingze Ni, Li Li

Anti-Backdoor Learning: Training Clean Models on Poisoned Data [OnRL] (22 Oct 2021)
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Xingjun Ma

TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks [AAML] (16 Oct 2021)
C. Amarnath, Aishwarya H. Balwani, Kwondo Ma, Abhijit Chatterjee

Bugs in our Pockets: The Risks of Client-Side Scanning (14 Oct 2021)
H. Abelson, Ross J. Anderson, S. Bellovin, Josh Benaloh, M. Blaze, ..., Ronald L. Rivest, J. Schiller, B. Schneier, Vanessa J. Teague, Carmela Troncoso

Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer [AAML, SILM] (14 Oct 2021)
Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, Maosong Sun

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks [AAML] (13 Oct 2021)
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
Multi-Trigger-Key: Towards Multi-Task Privacy Preserving In Deep Learning (06 Oct 2021)
Ren Wang, Zhe Xu, Alfred Hero

BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models [SILM] (06 Oct 2021)
Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training [SILM, AAML] (23 Sep 2021)
J. Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano

Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis [FedML] (22 Sep 2021)
Zeyuan Yin, Ye Yuan, Panfeng Guo, Pan Zhou

SoK: Machine Learning Governance (20 Sep 2021)
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot

Membership Inference Attacks Against Recommender Systems [MIACV, AAML] (16 Sep 2021)
Minxing Zhang, Zhaochun Ren, Zihan Wang, Pengjie Ren, Zhumin Chen, Pengfei Hu, Yang Zhang

SanitAIs: Unsupervised Data Augmentation to Sanitize Trojaned Neural Networks [AAML] (09 Sep 2021)
Kiran Karra, C. Ashcraft, Cash Costello

Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning (31 Aug 2021)
Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu