Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

15 December 2017
Xinyun Chen
Chang Liu
Bo Li
Kimberly Lu
Dawn Song
    AAML
    SILM
Papers citing "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning"

50 / 361 papers shown
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
Stefanos Koffas
Jing Xu
Mauro Conti
S. Picek
AAML
22
66
0
30 Jul 2021
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
Xiangyu Qi
Jifeng Zhu
Chulin Xie
Yong-Liang Yang
AAML
66
35
0
15 Jul 2021
Trustworthy AI: A Computational Perspective
Haochen Liu
Yiqi Wang
Wenqi Fan
Xiaorui Liu
Yaxin Li
Shaili Jain
Yunhao Liu
Anil K. Jain
Jiliang Tang
FaML
104
196
0
12 Jul 2021
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
Akshay Mehra
B. Kailkhura
Pin-Yu Chen
Jihun Hamm
AAML
30
22
0
08 Jul 2021
The Threat of Offensive AI to Organizations
Yisroel Mirsky
Ambra Demontis
J. Kotak
Ram Shankar
Deng Gelei
Liu Yang
Xinming Zhang
Wenke Lee
Yuval Elovici
Battista Biggio
33
81
0
30 Jun 2021
Data Poisoning Won't Save You From Facial Recognition
Evani Radiya-Dixit
Sanghyun Hong
Nicholas Carlini
Florian Tramèr
AAML
PICV
15
57
0
28 Jun 2021
Adversarial Examples Make Strong Poisons
Liam H. Fowl
Micah Goldblum
Ping Yeh-Chiang
Jonas Geiping
Wojtek Czaja
Tom Goldstein
SILM
32
132
0
21 Jun 2021
Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks
Suyoung Lee
Wonho Song
Suman Jana
M. Cha
Sooel Son
AAML
11
13
0
18 Jun 2021
Poisoning and Backdooring Contrastive Learning
Nicholas Carlini
Andreas Terzis
46
156
0
17 Jun 2021
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri
Liam H. Fowl
Ramalingam Chellappa
Micah Goldblum
Tom Goldstein
SILM
31
124
0
16 Jun 2021
Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks
Byunggill Joe
Akshay Mehra
I. Shin
Jihun Hamm
11
9
0
15 Jun 2021
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
C. Ashcraft
Kiran Karra
23
22
0
14 Jun 2021
Topological Detection of Trojaned Neural Networks
Songzhu Zheng
Yikai Zhang
H. Wagner
Mayank Goswami
Chao Chen
AAML
29
40
0
11 Jun 2021
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution
Fanchao Qi
Yuan Yao
Sophia Xu
Zhiyuan Liu
Maosong Sun
SILM
24
126
0
11 Jun 2021
Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun
Xiaoya Li
Yuxian Meng
Xiang Ao
Fei Wu
Jiwei Li
Tianwei Zhang
AAML
SILM
31
47
0
03 Jun 2021
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Xi Li
David J. Miller
Zhen Xiang
G. Kesidis
AAML
16
0
0
28 May 2021
Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance
Jack W. Stokes
P. England
K. Kane
AAML
13
14
0
20 May 2021
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen
Xuxin Zhang
Rui Zhang
Chen Wang
Ling Liu
AAML
25
86
0
08 May 2021
Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML
152
68
0
04 May 2021
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification
Wei Guo
B. Tondi
Mauro Barni
AAML
25
19
0
01 May 2021
SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
J. Hayase
Weihao Kong
Raghav Somani
Sewoong Oh
AAML
24
149
0
22 Apr 2021
Turning Federated Learning Systems Into Covert Channels
Gabriele Costa
Fabio Pinelli
S. Soderi
Gabriele Tolomei
FedML
37
10
0
21 Apr 2021
Manipulating SGD with Data Ordering Attacks
Ilia Shumailov
Zakhar Shumaylov
Dmitry Kazhdan
Yiren Zhao
Nicolas Papernot
Murat A. Erdogdu
Ross J. Anderson
AAML
112
90
0
19 Apr 2021
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Mingfu Xue
Can He
Shichang Sun
Jian Wang
Weiqiang Liu
AAML
34
43
0
15 Apr 2021
Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune
Shanshi Huang
Hongwu Jiang
Shimeng Yu
AAML
26
3
0
13 Apr 2021
A Backdoor Attack against 3D Point Cloud Classifiers
Zhen Xiang
David J. Miller
Siheng Chen
Xi Li
G. Kesidis
3DPC
AAML
28
76
0
12 Apr 2021
Backdoor Attack in the Physical World
Yiming Li
Tongqing Zhai
Yong Jiang
Zhifeng Li
Shutao Xia
26
109
0
06 Apr 2021
PointBA: Towards Backdoor Attacks in 3D Point Cloud
Xinke Li
Zhirui Chen
Yue Zhao
Zekun Tong
Yabang Zhao
A. Lim
Qiufeng Wang
3DPC
AAML
60
51
0
30 Mar 2021
Black-box Detection of Backdoor Attacks with Limited Information and Data
Yinpeng Dong
Xiao Yang
Zhijie Deng
Tianyu Pang
Zihao Xiao
Hang Su
Jun Zhu
AAML
21
112
0
24 Mar 2021
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Antonio Emanuele Cinà
Sebastiano Vascon
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
AAML
24
9
0
23 Mar 2021
TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation
Todd P. Huster
E. Ekwedike
SILM
24
19
0
18 Mar 2021
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu
Guangyu Shen
Guanhong Tao
Zhenting Wang
Shiqing Ma
Xinming Zhang
AAML
30
8
0
16 Mar 2021
Hidden Backdoor Attack against Semantic Segmentation Models
Yiming Li
Yanjie Li
Yalei Lv
Yong Jiang
Shutao Xia
AAML
134
30
0
06 Mar 2021
Data Poisoning Attacks and Defenses to Crowdsourcing Systems
Minghong Fang
Minghao Sun
Qi Li
Neil Zhenqiang Gong
Jinhua Tian
Jia-Wei Liu
52
34
0
18 Feb 2021
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Yuanchun Li
Jiayi Hua
Haoyu Wang
Chunyang Chen
Yunxin Liu
FedML
SILM
86
75
0
18 Jan 2021
Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang
Xingjun Ma
S. Erfani
James Bailey
Yisen Wang
MIACV
156
190
0
13 Jan 2021
DeepiSign: Invisible Fragile Watermark to Protect the Integrity and Authenticity of CNN
A. Abuadbba
Hyoungshick Kim
Surya Nepal
14
16
0
12 Jan 2021
DeepPoison: Feature Transfer Based Stealthy Poisoning Attack
Jinyin Chen
Longyuan Zhang
Haibin Zheng
Xueke Wang
Zhaoyan Ming
AAML
27
19
0
06 Jan 2021
Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning
David Enthoven
Zaid Al-Ars
FedML
57
14
0
01 Jan 2021
Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead
Maurizio Capra
Beatrice Bussolino
Alberto Marchisio
Guido Masera
Maurizio Martina
Muhammad Shafique
BDL
59
140
0
21 Dec 2020
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
Shuyang Cheng
Yingqi Liu
Shiqing Ma
Xinming Zhang
AAML
31
154
0
21 Dec 2020
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
D. Song
A. Madry
Bo-wen Li
Tom Goldstein
SILM
27
270
0
18 Dec 2020
TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang
Zheng-Wei Zhang
Xiangshan Gao
Zhaohan Xi
S. Ji
Peng Cheng
Xiapu Luo
Ting Wang
AAML
29
31
0
16 Dec 2020
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios
Hassan Ali
Surya Nepal
S. Kanhere
S. Jha
AAML
9
12
0
14 Dec 2020
DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
Han Qiu
Yi Zeng
Shangwei Guo
Tianwei Zhang
Meikang Qiu
B. Thuraisingham
AAML
24
191
0
13 Dec 2020
Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu
Han Yu
Xingjun Ma
Chen Chen
Lichao Sun
Jun Zhao
Qiang Yang
Philip S. Yu
FedML
183
355
0
07 Dec 2020
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
Fanchao Qi
Yangyi Chen
Mukai Li
Yuan Yao
Zhiyuan Liu
Maosong Sun
AAML
28
264
0
20 Nov 2020
Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection
Hao Fu
A. Veldanda
Prashanth Krishnamurthy
S. Garg
Farshad Khorrami
AAML
33
14
0
04 Nov 2020
Pitfalls in Machine Learning Research: Reexamining the Development Cycle
Stella Biderman
Walter J. Scheirer
24
26
0
04 Nov 2020
Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira
David Berend
Ishai Rosenberg
Yang Liu
A. Shabtai
Yuval Elovici
AAML
24
4
0
30 Oct 2020