ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
Spectral Signatures in Backdoor Attacks

1 November 2018
Brandon Tran, Jerry Li, A. Madry
AAML

Papers citing "Spectral Signatures in Backdoor Attacks"

50 of 178 citing papers shown.
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari
MU | 21 Dec 2022

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang
AAML | 18 Dec 2022

Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
Rui Zhu, Di Tang, Siyuan Tang, Xiaofeng Wang, Haixu Tang
AAML, FedML | 09 Dec 2022

XRand: Differentially Private Defense against Explanation-Guided Attacks
Truc D. T. Nguyen, Phung Lai, Nhathai Phan, My T. Thai
AAML, SILM | 08 Dec 2022

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL | 06 Dec 2022

Backdoor Cleansing with Unlabeled Data
Lu Pang, Tao Sun, Haibin Ling, Chao Chen
AAML | 22 Nov 2022

Don't Watch Me: A Spatio-Temporal Trojan Attack on Deep-Reinforcement-Learning-Augment Autonomous Driving
Yinbo Yu, Jiajia Liu
22 Nov 2022

Provable Defense against Backdoor Policies in Reinforcement Learning
S. Bharti, Xuezhou Zhang, Adish Singla, Xiaojin Zhu
AAML | 18 Nov 2022

Dormant Neural Trojans
Feisi Fu, Panagiota Kiourti, Wenchao Li
AAML | 02 Nov 2022

Poison Attack and Defense on Deep Source Code Processing Models
Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia
AAML | 31 Oct 2022

Training set cleansing of backdoor poisoning by self-supervised representation learning
H. Wang, Soroush Karami, Ousmane Amadou Dia, H. Ritter, E. Emamjomeh-Zadeh, J. Chen, Zhen Xiang, D. J. Miller, G. Kesidis
SSL | 19 Oct 2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Khoa D. Doan, Yingjie Lao, Ping Li
17 Oct 2022

How to Sift Out a Clean Data Subset in the Presence of Data Poisoning?
Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, Ming Jin, Lingjuan Lyu, R. Jia
AAML | 12 Oct 2022

Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Di Tang, Rui Zhu, Xiaofeng Wang, Haixu Tang, Yi Chen
AAML | 12 Oct 2022

BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets
Chen Gong, Zhou Yang, Yunru Bai, Junda He, Jieke Shi, ..., Arunesh Sinha, Bowen Xu, Xinwen Hou, David Lo, Guoliang Fan
AAML, OffRL | 07 Oct 2022

Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Yiming Li, Yang Bai, Yong Jiang, Yong-Liang Yang, Shutao Xia, Bo Li
AAML | 27 Sep 2022

Federated Learning based on Defending Against Data Poisoning Attacks in IoT
Jiayin Li, Wenzhong Guo, Xingshuo Han, Jianping Cai, Ximeng Liu
AAML | 14 Sep 2022

Universal Backdoor Attacks Detection via Adaptive Adversarial Probe
Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu
AAML | 12 Sep 2022

Defending Against Backdoor Attack on Graph Nerual Network by Explainability
B. Jiang, Zhao Li
AAML, GNN | 07 Sep 2022

An Adaptive Black-box Defense against Trojan Attacks (TrojDef)
Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, Issa M. Khalil
AAML | 05 Sep 2022

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
29 Aug 2022

RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN
Huy Phan, Cong Shi, Yi Xie, Tian-Di Zhang, Zhuohang Li, Tianming Zhao, Jian-Dong Liu, Yan Wang, Ying-Cong Chen, Bo Yuan
AAML | 22 Aug 2022

An anomaly detection approach for backdoored neural networks: face recognition as a case study
A. Unnervik, Sébastien Marcel
AAML | 22 Aug 2022

Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models
Yulong Wang, Minghui Zhao, Shenghong Li, Xinnan Yuan, W. Ni
19 Aug 2022

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML | 14 Aug 2022

Data-free Backdoor Removal based on Channel Lipschitzness
Runkai Zheng, Rong Tang, Jianze Li, Li Liu
AAML | 05 Aug 2022

Backdoor Attacks on Crowd Counting
Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao
AAML | 12 Jul 2022

Natural Backdoor Datasets
Emily Wenger, Roma Bhattacharjee, A. Bhagoji, Josephine Passananti, Emilio Andere, Haitao Zheng, Ben Y. Zhao
AAML | 21 Jun 2022

DECK: Model Hardening for Defending Pervasive Backdoors
Guanhong Tao, Yingqi Liu, Shuyang Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang
AAML | 18 Jun 2022

DORA: Exploring Outlier Representations in Deep Neural Networks
Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne
09 Jun 2022

On the Permanence of Backdoors in Evolving Models
Huiying Li, A. Bhagoji, Yuxin Chen, Haitao Zheng, Ben Y. Zhao
AAML | 08 Jun 2022

CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences
Shang Wang, Yansong Gao, Anmin Fu, Zhi-Li Zhang, Yuqing Zhang, W. Susilo, Dongxi Liu
AAML | 31 May 2022

Contributor-Aware Defenses Against Adversarial Backdoor Attacks
Glenn Dawson, Muhammad Umer, R. Polikar
AAML | 28 May 2022

BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
Zhenting Wang, Juan Zhai, Shiqing Ma
AAML | 26 May 2022

On Collective Robustness of Bagging Against Data Poisoning
Ruoxin Chen, Zenan Li, Jie Li, Chentao Wu, Junchi Yan
26 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022

A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning
Yinbo Yu, Jiajia Liu, Shouqing Li, Ke Huang, Xudong Feng
AAML | 05 May 2022

Detecting Backdoor Poisoning Attacks on Deep Neural Networks by Heatmap Clustering
Lukas Schulth, Christian Berghoff, Matthias Neu
AAML | 27 Apr 2022

Streaming Algorithms for High-Dimensional Robust Statistics
Ilias Diakonikolas, D. Kane, Ankit Pensia, Thanasis Pittas
26 Apr 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
AAML | 19 Apr 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
MIACV | 31 Mar 2022

Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks
Siddhartha Datta, N. Shadbolt
AAML | 07 Mar 2022

Differentially Private Label Protection in Split Learning
Xin Yang, Jiankai Sun, Yuanshun Yao, Junyuan Xie, Chong-Jun Wang
FedML | 04 Mar 2022

Label Leakage and Protection from Forward Embedding in Vertical Federated Learning
Jiankai Sun, Xin Yang, Yuanshun Yao, Chong-Jun Wang
FedML | 02 Mar 2022

Holistic Adversarial Robustness of Deep Learning Models
Pin-Yu Chen, Sijia Liu
AAML | 15 Feb 2022

Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang
AAML | 11 Feb 2022

Backdoor Defense via Decoupling the Training Process
Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren
AAML, FedML | 05 Feb 2022

Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta, N. Shadbolt
AAML | 28 Jan 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
TDI | 25 Jan 2022

Hiding Behind Backdoors: Self-Obfuscation Against Generative Models
Siddhartha Datta, N. Shadbolt
SILM, AAML, AI4CE | 24 Jan 2022