Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
30 September 2019 · arXiv: 1910.00033
Papers citing "Hidden Trigger Backdoor Attacks" (50 of 131 papers shown)
Dormant Neural Trojans · Feisi Fu, Panagiota Kiourti, Wenchao Li · AAML · 02 Nov 2022
The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning · Virat Shejwalkar, Lingjuan Lyu, Amir Houmansadr · AAML · 01 Nov 2022
Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey · Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu · AAML · 19 Oct 2022
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning? · Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, Ming Jin, Lingjuan Lyu, R. Jia · AAML · 12 Oct 2022
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection · Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li · AAML · 27 Sep 2022
Universal Backdoor Attacks Detection via Adaptive Adversarial Probe · Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu · AAML · 12 Sep 2022
RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN · Huy Phan, Cong Shi, Yi Xie, Tian-Di Zhang, Zhuohang Li, Tianming Zhao, Jian-Dong Liu, Yan Wang, Ying-Cong Chen, Bo Yuan · AAML · 22 Aug 2022
Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models · Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang · MIACV · 22 Aug 2022
Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning · Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz · AAML, FedML · 21 Aug 2022
Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models · Yulong Wang, Minghui Zhao, Shenghong Li, Xinnan Yuan, W. Ni · 19 Aug 2022
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks · Tianwei Liu, Yu Yang, Baharan Mirzasoleiman · AAML · 14 Aug 2022
FRIB: Low-poisoning Rate Invisible Backdoor Attack based on Feature Repair · Hui Xia, Xiugui Yang, X. Qian, Rui Zhang · AAML · 26 Jul 2022
Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis · Ruinan Jin, Xiaoxiao Li · AAML, FedML, MedIm · 02 Jul 2022
Backdoor Attacks on Vision Transformers · Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash · ViT, AAML · 16 Jun 2022
Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers · Nan Luo, Yuan-zhang Li, Yajie Wang, Shan-Hung Wu, Yu-an Tan, Quan-xin Zhang · AAML · 10 Jun 2022
CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences · Shang Wang, Yansong Gao, Anmin Fu, Zhi-Li Zhang, Yuqing Zhang, W. Susilo, Dongxi Liu · AAML · 31 May 2022
Hide and Seek: on the Stealthiness of Attacks against Deep Learning Systems · Zeyan Liu, Fengjun Li, Jingqiang Lin, Zhu Li, Bo Luo · AAML · 31 May 2022
Integrity Authentication in Tree Models · Weijie Zhao, Yingjie Lao, Ping Li · 30 May 2022
BadDet: Backdoor Attacks on Object Detection · Shih-Han Chan, Yinpeng Dong, Junyi Zhu, Xiaolu Zhang, Jun Zhou · AAML · 28 May 2022
Indiscriminate Data Poisoning Attacks on Neural Networks · Yiwei Lu, Gautam Kamath, Yaoliang Yu · AAML · 19 Apr 2022
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information · Yi Zeng, Minzhou Pan, H. Just, Lingjuan Lyu, M. Qiu, R. Jia · AAML · 11 Apr 2022
A Survey on Privacy for B5G/6G: New Privacy Challenges, and Research Directions · Chamara Sandeepa, Bartlomiej Siniarski, N. Kourtellis, Shen Wang, Madhusanka Liyanage · 08 Mar 2022
Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks · Siddhartha Datta, N. Shadbolt · AAML · 07 Mar 2022
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey · M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 21 Feb 2022
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation · Wenxiao Wang, Alexander Levine, S. Feizi · AAML · 05 Feb 2022
Can Adversarial Training Be Manipulated By Non-Robust Features? · Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen · AAML · 31 Jan 2022
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire · Siddhartha Datta, N. Shadbolt · AAML · 28 Jan 2022
SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders · Tianshuo Cong, Xinlei He, Yang Zhang · 27 Jan 2022
Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios · Zhen Xiang, David J. Miller, G. Kesidis · AAML · 20 Jan 2022
Safe Distillation Box · Jingwen Ye, Yining Mao, Mingli Song, Xinchao Wang, Cheng Jin, Xiuming Zhang · AAML · 05 Dec 2021
Anomaly Localization in Model Gradients Under Backdoor Attacks Against Federated Learning · Z. Bilgin · FedML, AAML · 29 Nov 2021
A General Framework for Defending Against Backdoor Attacks via Influence Graph · Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan · AAML, TDI · 29 Nov 2021
Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks · Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu · AAML · 25 Nov 2021
Backdoor Attack through Frequency Domain · Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang · AAML · 22 Nov 2021
Get a Model! Model Hijacking Attack Against Machine Learning Models · A. Salem, Michael Backes, Yang Zhang · AAML · 08 Nov 2021
Availability Attacks Create Shortcuts · Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu · AAML · 01 Nov 2021
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis · Junfeng Guo, Ang Li, Cong Liu · AAML · 28 Oct 2021
Anti-Backdoor Learning: Training Clean Models on Poisoned Data · Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Xingjun Ma · OnRL · 22 Oct 2021
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks · Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao · AAML · 13 Oct 2021
Hard to Forget: Poisoning Attacks on Certified Machine Unlearning · Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld · MU, AAML · 17 Sep 2021
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning · Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu · 31 Aug 2021
Quantization Backdoors to Deep Learning Commercial Frameworks · Hua Ma, Huming Qiu, Yansong Gao, Zhi-Li Zhang, A. Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, S. Al-Sarawi, Derek Abbott · MQ · 20 Aug 2021
TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis · Esha Sarkar, Michail Maniatakos · 14 Aug 2021
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning · Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong · SILM, SSL · 01 Aug 2021
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting · Xiangyu Qi, Jifeng Zhu, Chulin Xie, Yong-Liang Yang · AAML · 15 Jul 2021
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch · Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein · SILM · 16 Jun 2021
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution · Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, Maosong Sun · SILM · 11 Jun 2021
Defending Against Backdoor Attacks in Natural Language Generation · Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang · AAML, SILM · 03 Jun 2021
Backdoor Attacks on Self-Supervised Learning · Aniruddha Saha, Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash · SSL, AAML · 21 May 2021
A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification · Wei Guo, B. Tondi, Mauro Barni · AAML · 01 May 2021