BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain

22 August 2017
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM

Papers citing "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"

50 / 381 papers shown

Mitigating Backdoor Attacks in Federated Learning
Chen Wu, Xian Yang, Sencun Zhu, P. Mitra
FedML, AAML · 28 Oct 2020

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
AAML · 26 Oct 2020

A Survey of Machine Learning Techniques in Adversarial Image Forensics
Ehsan Nowroozi, Ali Dehghantanha, R. Parizi, K. Choo
AAML · 19 Oct 2020

Input-Aware Dynamic Backdoor Attack
A. Nguyen, Anh Tran
AAML · 16 Oct 2020

Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder
Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang
SILM · 06 Oct 2020

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
XAI · 22 Sep 2020

Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems
Haoliang Li, Yufei Wang, Xiaofei Xie, Yang Liu, Shiqi Wang, Renjie Wan, Lap-Pui Chau
AAML · 15 Sep 2020

Machine Unlearning for Random Forests
Jonathan Brophy, Daniel Lowd
MU · 11 Sep 2020

Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding
Sahar Abdelnabi, Mario Fritz
WaLM · 07 Sep 2020

Incorrect by Construction: Fine Tuning Neural Networks for Guaranteed Performance on Finite Sets of Examples
I. Papusha, Rosa Wu, Joshua Brulé, Yanni Kouskoulas, D. Genin, Aurora C. Schmidt
03 Aug 2020

Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Meng Wang
AAML · 31 Jul 2020

Membership Leakage in Label-Only Exposures
Zheng Li, Yang Zhang
30 Jul 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
AAML · 21 Jul 2020

Odyssey: Creation, Analysis and Detection of Trojan Models
Marzieh Edraki, Nazmul Karim, Nazanin Rahnavard, Ajmal Mian, M. Shah
AAML · 16 Jul 2020

SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations
Giulio Lovisotto, H.C.M. Turner, Ivo Sluganovic, Martin Strohmeier, Ivan Martinovic
AAML · 08 Jul 2020

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
AAML · 05 Jul 2020

ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks
Miguel Villarreal-Vasquez, B. Bhargava
AAML · 01 Jul 2020

Subpopulation Data Poisoning Attacks
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
AAML, SILM · 24 Jun 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML, TDI · 22 Jun 2020

Backdoor Attacks to Graph Neural Networks
Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
GNN · 19 Jun 2020

An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
AAML · 15 Jun 2020

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
Chong Xiang, A. Bhagoji, Vikash Sehwag, Prateek Mittal
AAML · 17 May 2020

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov
AAML, FedML, SILM · 08 May 2020

When Machine Unlearning Jeopardizes Privacy
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
MIACV · 05 May 2020

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML · 01 May 2020

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu
AAML · 23 Apr 2020

Weight Poisoning Attacks on Pre-trained Models
Keita Kurita, Paul Michel, Graham Neubig
AAML, SILM · 14 Apr 2020

An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies
David Enthoven, Zaid Al-Ars
FedML · 01 Apr 2020

PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
Junfeng Guo, Zelun Kong, Cong Liu
AAML · 24 Mar 2020

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring, Konrad Rieck
AAML · 19 Mar 2020

Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-based Traffic Congestion Control Systems
Yue Wang, Esha Sarkar, Wenqing Li, Michail Maniatakos, Saif Eddin Jabari
AAML · 17 Mar 2020

The TrojAI Software Framework: An OpenSource tool for Embedding Trojans into Deep Learning Models
Kiran Karra, C. Ashcraft, Neil Fendley
13 Mar 2020

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
AAML · 07 Mar 2020

Threats to Federated Learning: A Survey
Lingjuan Lyu, Han Yu, Qiang Yang
FedML · 04 Mar 2020

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea
AAML, SILM · 02 Mar 2020

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
WaLM, AAML · 27 Feb 2020

Defending against Backdoor Attack on Deep Neural Networks
Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, X. Lin, Xue Lin
AAML · 26 Feb 2020

Adversarial Machine Learning -- Industry Perspectives
Ramnath Kumar, Magnus Nyström, J. Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia
AAML, SILM · 04 Feb 2020

Label-Consistent Backdoor Attacks
Alexander Turner, Dimitris Tsipras, A. Madry
AAML · 05 Dec 2019

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum
MLAU, FedML, AAML · 02 Dec 2019

NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations
Xijie Huang, M. Alzantot, Mani B. Srivastava
AAML · 18 Nov 2019

REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, R. Jia, Bo-wen Li, D. Song
AAML · 17 Nov 2019

The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq
AAML · 06 Nov 2019

Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao, Yukun Yang, H. Li
AAML · 10 Oct 2019

Detecting AI Trojans Using Meta Neural Analysis
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo-wen Li
08 Oct 2019

Automotive Cybersecurity: Foundations for Next-Generation Vehicles
Michele Scalas, Giorgio Giacinto
02 Oct 2019

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
30 Sep 2019

Walling up Backdoors in Intrusion Detection Systems
Maximilian Bachl, Alexander Hartl, J. Fabini, Tanja Zseby
AAML · 17 Sep 2019

Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis
AAML · 27 Aug 2019

Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe
AAML · 09 Aug 2019