Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
15 December 2017
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song
AAML, SILM
arXiv:1712.05526 · PDF · HTML

Papers citing "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning"

Showing 37 of 387 citing papers.

Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-based Traffic Congestion Control Systems
Yue Wang, Esha Sarkar, Wenqing Li, Michail Maniatakos, Saif Eddin Jabari
AAML · 17 Mar 2020

Towards Probabilistic Verification of Machine Unlearning
David M. Sommer, Liwei Song, Sameer Wagh, Prateek Mittal
AAML · 09 Mar 2020

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea
AAML, SILM · 02 Mar 2020

NNoculation: Catching BadNets in the Wild
A. Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, S. Garg
AAML, OnRL · 19 Feb 2020

Radioactive data: tracing through training
Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Hervé Jégou
03 Feb 2020

Label-Consistent Backdoor Attacks
Alexander Turner, Dimitris Tsipras, A. Madry
AAML · 05 Dec 2019

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis
AAML · 18 Nov 2019

REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, R. Jia, Bo Li, Dawn Song
AAML · 17 Nov 2019

RIGA: Covert and Robust White-Box Watermarking of Deep Neural Networks
Tianhao Wang, Florian Kerschbaum
AAML · 31 Oct 2019

A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning
Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh
AAML · 30 Oct 2019

Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao, Yukun Yang, H. Li
AAML · 10 Oct 2019

Detecting AI Trojans Using Meta Neural Analysis
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo Li
08 Oct 2019

Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis
AAML · 27 Aug 2019

Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe
AAML · 09 Aug 2019

Helen: Maliciously Secure Coopetitive Learning for Linear Models
Wenting Zheng, Raluca A. Popa, Joseph E. Gonzalez, Ion Stoica
FedML · 16 Jul 2019

Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking
Ziqi Yang, Hung Dang, E. Chang
AAML · 14 Jun 2019

Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan, Reza Shokri
FedML, AAML · 31 May 2019

A backdoor attack against LSTM-based text classification systems
Jiazhu Dai, Chuanshuai Chen
SILM · 29 May 2019

Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou
AAML · 22 May 2019

Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges
Rob Ashmore, R. Calinescu, Colin Paterson
AI4TS · 10 May 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis
AAML · 12 Apr 2019

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei, Xin Liu
SILM, AAML · 08 Apr 2019

Design of intentional backdoors in sequential models
Zhaoyuan Yang, N. Iyer, Johan Reimann, Nurali Virani
SILM, AAML · 26 Feb 2019

A new Backdoor Attack in CNNs by training set corruption without label poisoning
Mauro Barni, Kassem Kallas, B. Tondi
AAML · 12 Feb 2019

Contamination Attacks and Mitigation in Multi-Party Machine Learning
Jamie Hayes, O. Ohrimenko
AAML, FedML · 08 Jan 2019

Backdooring Convolutional Neural Networks via Targeted Weight Perturbations
Jacob Dumford, Walter J. Scheirer
AAML · 07 Dec 2018

Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
FedML · 29 Nov 2018

AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning
Florian Tramèr, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, Dan Boneh
AAML · 08 Nov 2018

Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh, Jacob Steinhardt, Percy Liang
02 Nov 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
SILM, AAML · 08 Sep 2018

Mitigating Sybils in Federated Learning Poisoning
Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh
AAML · 14 Aug 2018

Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Kang Liu, Brendan Dolan-Gavitt, S. Garg
AAML · 30 May 2018

Hu-Fu: Hardware and Software Collaborative Attack Framework against Neural Networks
Wenshuo Li, Jincheng Yu, Xuefei Ning, Pengjun Wang, Qi Wei, Yu Wang, Huazhong Yang
AAML · 14 May 2018

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML · 19 Mar 2018

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli
AAML · 08 Dec 2017

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 08 Jul 2016

Learning Deep Face Representation
Haoqiang Fan, Zhimin Cao, Yuning Jiang, Qi Yin, Chinchilla Doudou
SSL, CVBM · 12 Mar 2014