arXiv:1808.10307
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
C. Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David J. Miller
30 August 2018 · SILM
Papers citing "Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation"
31 of 81 papers shown
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu · AAML · 15 Apr 2021

A Backdoor Attack against 3D Point Cloud Classifiers
Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, G. Kesidis · 3DPC, AAML · 12 Apr 2021

Backdoor Attack in the Physical World
Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shutao Xia · 06 Apr 2021

Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
G. Cantareira, R. Mello, F. Paulovich · AAML · 18 Mar 2021

Meta Federated Learning
Omid Aramoon, Pin-Yu Chen, Gang Qu, Yuan Tian · AAML, FedML · 10 Feb 2021

Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
Shuyang Cheng, Yingqi Liu, Shiqing Ma, Xinming Zhang · AAML · 21 Dec 2020

HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios
Hassan Ali, Surya Nepal, S. Kanhere, S. Jha · AAML · 14 Dec 2020

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, B. Thuraisingham · AAML · 13 Dec 2020

ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, Maosong Sun · AAML · 20 Nov 2020

Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing
Zhen Xiang, David J. Miller, G. Kesidis · 15 Oct 2020

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik · XAI · 22 Sep 2020

Blackbox Trojanising of Deep Learning Models: Using non-intrusive network structure and binary alterations
Jonathan Pan · AAML · 02 Aug 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim · AAML · 21 Jul 2020

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia · AAML · 17 Jul 2020

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu · AAML · 05 Jul 2020

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao · AAML · 25 Jun 2020

An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu · AAML · 15 Jun 2020

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov · AAML, FedML, SILM · 08 May 2020

A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects
Zewen Li, Wenjie Yang, Shouheng Peng, Fan Liu · HAI, 3DV · 01 Apr 2020

Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement Learning-based Traffic Congestion Control Systems
Yue Wang, Esha Sarkar, Wenqing Li, Michail Maniatakos, Saif Eddin Jabari · AAML · 17 Mar 2020

Generating Semantic Adversarial Examples via Feature Manipulation
Shuo Wang, Surya Nepal, Carsten Rudolph, M. Grobler, Shangyu Chen, Tianle Chen · AAML · 06 Jan 2020

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis · AAML · 18 Nov 2019

Defending Neural Backdoors via Generative Distribution Modeling
Ximing Qiao, Yukun Yang, H. Li · AAML · 10 Oct 2019

Detecting AI Trojans Using Meta Neural Analysis
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo Li · 08 Oct 2019

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash · 30 Sep 2019

Detection of Backdoors in Trained Classifiers Without Access to the Training Set
Zhen Xiang, David J. Miller, G. Kesidis · AAML · 27 Aug 2019

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann · AAML · 26 Jun 2019

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller, Zhen Xiang, G. Kesidis · AAML · 12 Apr 2019

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei, Xin Liu · SILM, AAML · 08 Apr 2019

A new Backdoor Attack in CNNs by training set corruption without label poisoning
Mauro Barni, Kassem Kallas, B. Tondi · AAML · 12 Feb 2019

Programmable Neural Network Trojan for Pre-Trained Feature Extractor
Yu Ji, Zixin Liu, Xing Hu, Peiqi Wang, Youhui Zhang · AAML · 23 Jan 2019