BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
22 August 2017 · SILM · arXiv:1708.06733
Papers citing "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain"

Showing 31 of 381 citing papers.
• Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
  Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
  AAML · 26 Jun 2019

• Poisoning Attacks with Generative Adversarial Nets
  Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
  AAML · 18 Jun 2019

• Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking
  Ziqi Yang, Hung Dang, E. Chang
  AAML · 14 Jun 2019

• Lower Bounds for Adversarially Robust PAC Learning
  Dimitrios I. Diochnos, Saeed Mahloujifar, Mohammad Mahmoody
  AAML · 13 Jun 2019

• Bypassing Backdoor Detection Algorithms in Deep Learning
  T. Tan, Reza Shokri
  FedML · AAML · 31 May 2019

• A backdoor attack against LSTM-based text classification systems
  Jiazhu Dai, Chuanshuai Chen
  SILM · 29 May 2019

• Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating
  Giulio Lovisotto, Simon Eberz, Ivan Martinovic
  AAML · 22 May 2019

• Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
  Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou
  AAML · 22 May 2019

• Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges
  Rob Ashmore, R. Calinescu, Colin Paterson
  AI4TS · 10 May 2019

• Design of intentional backdoors in sequential models
  Zhaoyuan Yang, N. Iyer, Johan Reimann, Nurali Virani
  SILM · AAML · 26 Feb 2019

• Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment
  Ziqi Yang, E. Chang, Zhenkai Liang
  MLAU · 22 Feb 2019

• A new Backdoor Attack in CNNs by training set corruption without label poisoning
  Mauro Barni, Kassem Kallas, B. Tondi
  AAML · 12 Feb 2019

• RED-Attack: Resource Efficient Decision based Attack for Machine Learning
  Faiq Khalid, Hassan Ali, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Mohamed Bennai
  AAML · 29 Jan 2019

• Programmable Neural Network Trojan for Pre-Trained Feature Extractor
  Yu Ji, Zixin Liu, Xing Hu, Peiqi Wang, Youhui Zhang
  AAML · 23 Jan 2019

• Contamination Attacks and Mitigation in Multi-Party Machine Learning
  Jamie Hayes, O. Ohrimenko
  AAML · FedML · 08 Jan 2019

• Backdooring Convolutional Neural Networks via Targeted Weight Perturbations
  Jacob Dumford, Walter J. Scheirer
  AAML · 07 Dec 2018

• Model-Reuse Attacks on Deep Learning Systems
  Yujie Ji, Xinyang Zhang, S. Ji, Xiapu Luo, Ting Wang
  SILM · AAML · 02 Dec 2018

• SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
  Edward Chou, Florian Tramèr, Giancarlo Pellegrino
  AAML · 02 Dec 2018

• Analyzing Federated Learning through an Adversarial Lens
  A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
  FedML · 29 Nov 2018

• Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
  Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava
  AAML · 09 Nov 2018

• Stronger Data Poisoning Attacks Break Data Sanitization Defenses
  Pang Wei Koh, Jacob Steinhardt, Percy Liang
  02 Nov 2018

• Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
  Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
  SILM · AAML · 08 Sep 2018

• Mitigating Sybils in Federated Learning Poisoning
  Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh
  AAML · 14 Aug 2018

• Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
  Kang Liu, Brendan Dolan-Gavitt, S. Garg
  AAML · 30 May 2018

• Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
  Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
  AAML · 19 Mar 2018

• Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
  Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh
  SILM · AAML · 03 Mar 2018

• Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
  Yossi Adi, Carsten Baum, Moustapha Cissé, Benny Pinkas, Joseph Keshet
  13 Feb 2018

• Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
  Xinyun Chen, Chang-rui Liu, Bo-wen Li, Kimberly Lu, D. Song
  AAML · SILM · 15 Dec 2017

• Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
  Battista Biggio, Fabio Roli
  AAML · 08 Dec 2017

• Hardening Quantum Machine Learning Against Adversaries
  N. Wiebe, Ramnath Kumar
  AAML · 17 Nov 2017

• MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
  Sailik Sengupta, Tathagata Chakraborti, S. Kambhampati
  AAML · 19 May 2017