BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain

22 August 2017
Tianyu Gu
Brendan Dolan-Gavitt
S. Garg
    SILM

Papers citing "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"

Showing 50 of 381 citing papers.
Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma
Huming Qiu
Yansong Gao
Zhi-Li Zhang
A. Abuadbba
Minhui Xue
Anmin Fu
Jiliang Zhang
S. Al-Sarawi
Derek Abbott
MQ
20 Aug 2021
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier
Chong Xiang
Saeed Mahloujifar
Prateek Mittal
VLM
AAML
20 Aug 2021
TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
Esha Sarkar
Michail Maniatakos
14 Aug 2021
Privacy-Preserving Machine Learning: Methods, Challenges and Directions
Runhua Xu
Nathalie Baracaldo
J. Joshi
10 Aug 2021
The Devil is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models
Ambrish Rawat
Killian Levacher
M. Sinn
AAML
03 Aug 2021
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia
Yupei Liu
Neil Zhenqiang Gong
SILM
SSL
01 Aug 2021
Spinning Sequence-to-Sequence Models with Meta-Backdoors
Eugene Bagdasaryan
Vitaly Shmatikov
SILM
AAML
22 Jul 2021
The Threat of Offensive AI to Organizations
Yisroel Mirsky
Ambra Demontis
J. Kotak
Ram Shankar
Deng Gelei
Liu Yang
Xinming Zhang
Wenke Lee
Yuval Elovici
Battista Biggio
30 Jun 2021
The Feasibility and Inevitability of Stealth Attacks
I. Tyukin
D. Higham
Alexander Bastounis
Eliyas Woldegeorgis
Alexander N. Gorban
AAML
26 Jun 2021
Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks
Suyoung Lee
Wonho Song
Suman Jana
M. Cha
Sooel Son
AAML
18 Jun 2021
Accumulative Poisoning Attacks on Real-time Data
Tianyu Pang
Xiao Yang
Yinpeng Dong
Hang Su
Jun Zhu
18 Jun 2021
Poisoning and Backdooring Contrastive Learning
Nicholas Carlini
Andreas Terzis
17 Jun 2021
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri
Liam H. Fowl
Ramalingam Chellappa
Micah Goldblum
Tom Goldstein
SILM
16 Jun 2021
Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks
Byunggill Joe
Akshay Mehra
I. Shin
Jihun Hamm
15 Jun 2021
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
C. Ashcraft
Kiran Karra
14 Jun 2021
Topological Detection of Trojaned Neural Networks
Songzhu Zheng
Yikai Zhang
H. Wagner
Mayank Goswami
Chao Chen
AAML
11 Jun 2021
ModelDiff: Testing-Based DNN Similarity Comparison for Model Reuse Detection
Yan Liang
Ziqi Zhang
Bingyan Liu
Ziyue Yang
Yunxin Liu
11 Jun 2021
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution
Fanchao Qi
Yuan Yao
Sophia Xu
Zhiyuan Liu
Maosong Sun
SILM
11 Jun 2021
Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun
Xiaoya Li
Yuxian Meng
Xiang Ao
Fei Wu
Jiwei Li
Tianwei Zhang
AAML
SILM
03 Jun 2021
Backdoor Attacks on Self-Supervised Learning
Aniruddha Saha
Ajinkya Tejankar
Soroush Abbasi Koohpayegani
Hamed Pirsiavash
SSL
AAML
21 May 2021
High-Robustness, Low-Transferability Fingerprinting of Neural Networks
Siyue Wang
Xiao Wang
Pin-Yu Chen
Pu Zhao
Xue Lin
AAML
14 May 2021
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen
Xuxin Zhang
Rui Zhang
Chen Wang
Ling Liu
AAML
08 May 2021
Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML
04 May 2021
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification
Wei Guo
B. Tondi
Mauro Barni
AAML
01 May 2021
Stealthy Backdoors as Compression Artifacts
Yulong Tian
Fnu Suya
Fengyuan Xu
David Evans
30 Apr 2021
SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
J. Hayase
Weihao Kong
Raghav Somani
Sewoong Oh
AAML
22 Apr 2021
Turning Federated Learning Systems Into Covert Channels
Gabriele Costa
Fabio Pinelli
S. Soderi
Gabriele Tolomei
FedML
21 Apr 2021
Manipulating SGD with Data Ordering Attacks
Ilia Shumailov
Zakhar Shumaylov
Dmitry Kazhdan
Yiren Zhao
Nicolas Papernot
Murat A. Erdogdu
Ross J. Anderson
AAML
19 Apr 2021
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
Yi Zeng
Won Park
Z. Morley Mao
R. Jia
AAML
07 Apr 2021
Backdoor Attack in the Physical World
Yiming Li
Tongqing Zhai
Yong Jiang
Zhifeng Li
Shutao Xia
06 Apr 2021
PointBA: Towards Backdoor Attacks in 3D Point Cloud
Xinke Li
Zhirui Chen
Yue Zhao
Zekun Tong
Yabang Zhao
A. Lim
Qiufeng Wang
3DPC
AAML
30 Mar 2021
Black-box Detection of Backdoor Attacks with Limited Information and Data
Yinpeng Dong
Xiao Yang
Zhijie Deng
Tianyu Pang
Zihao Xiao
Hang Su
Jun Zhu
AAML
24 Mar 2021
TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation
Todd P. Huster
E. Ekwedike
SILM
18 Mar 2021
T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification
A. Azizi
I. A. Tahmid
Asim Waheed
Neal Mangaokar
Jiameng Pu
M. Javed
Chandan K. Reddy
Bimal Viswanath
AAML
07 Mar 2021
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Eitan Borgnia
Jonas Geiping
Valeriia Cherepanova
Liam H. Fowl
Arjun Gupta
Amin Ghiasi
Furong Huang
Micah Goldblum
Tom Goldstein
02 Mar 2021
Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
Guangyu Shen
Yingqi Liu
Guanhong Tao
Shengwei An
Qiuling Xu
Shuyang Cheng
Shiqing Ma
Xinming Zhang
AAML
09 Feb 2021
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Yan Liang
Jiayi Hua
Haoyu Wang
Chunyang Chen
Yunxin Liu
FedML
SILM
18 Jan 2021
DeepiSign: Invisible Fragile Watermark to Protect the Integrity and Authenticity of CNN
A. Abuadbba
Hyoungshick Kim
Surya Nepal
12 Jan 2021
DeepPoison: Feature Transfer Based Stealthy Poisoning Attack
Jinyin Chen
Longyuan Zhang
Haibin Zheng
Xueke Wang
Zhaoyan Ming
AAML
06 Jan 2021
Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning
David Enthoven
Zaid Al-Ars
FedML
01 Jan 2021
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
Shuyang Cheng
Yingqi Liu
Shiqing Ma
Xinming Zhang
AAML
21 Dec 2020
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
D. Song
A. Madry
Bo-wen Li
Tom Goldstein
SILM
18 Dec 2020
TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang
Zheng-Wei Zhang
Xiangshan Gao
Zhaohan Xi
S. Ji
Peng Cheng
Xiapu Luo
Ting Wang
AAML
16 Dec 2020
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
Han Qiu
Yi Zeng
Shangwei Guo
Tianwei Zhang
Meikang Qiu
B. Thuraisingham
AAML
13 Dec 2020
Semantically Robust Unpaired Image Translation for Data with Unmatched Semantics Statistics
Zhiwei Jia
Bodi Yuan
Kangkang Wang
Hong Wu
David Clifford
Zhiqiang Yuan
Hao Su
VLM
09 Dec 2020
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia
Yupei Liu
Xiaoyu Cao
Neil Zhenqiang Gong
AAML
07 Dec 2020
Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu
Han Yu
Xingjun Ma
Chen Chen
Lichao Sun
Jun Zhao
Qiang Yang
Philip S. Yu
FedML
07 Dec 2020
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
Fanchao Qi
Yangyi Chen
Mukai Li
Yuan Yao
Zhiyuan Liu
Maosong Sun
AAML
20 Nov 2020
Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
Adnan Siraj Rakin
Yukui Luo
Xiaolin Xu
Deliang Fan
AAML
05 Nov 2020
Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira
David Berend
Ishai Rosenberg
Yang Liu
A. Shabtai
Yuval Elovici
AAML
30 Oct 2020