© 2025 ResearchTrend.AI, All rights reserved.

Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection (arXiv:1908.00686)
2 August 2019
Di Tang, Xiaofeng Wang, Haixu Tang, Kehuan Zhang
AAML

Papers citing "Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection"

45 papers shown
Secure Transfer Learning: Training Clean Models Against Backdoor in (Both) Pre-trained Encoders and Downstream Datasets
Wenjie Qu, Yuxuan Zhou, Tianyu Li, Minghui Li, Shengshan Hu, Wei Luo, L. Zhang
16 Apr 2025 · AAML, SILM

UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning
Oubo Ma, L. Du, Yang Dai, Chunyi Zhou, Qingming Li, Yuwen Pu, Shouling Ji
28 Jan 2025

BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
Haiyang Yu, Tian Xie, Jiaping Gui, Pengyang Wang, P. Yi, Yue Wu
17 Nov 2024

PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection
Wei Li, Pin-Yu Chen, Sijia Liu, Ren Wang
09 Jun 2024 · AAML

IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency
Linshan Hou, Ruili Feng, Zhongyun Hua, Wei Luo, Leo Yu Zhang, Yiming Li
16 May 2024 · AAML

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
22 Feb 2024 · AAML

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang
14 Dec 2023 · AAML

BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting
Huming Qiu, Junjie Sun, Mi Zhang, Xudong Pan, Min Yang
08 Dec 2023 · AAML

Beating Backdoor Attack at Its Own Game
Min Liu, Alberto L. Sangiovanni-Vincentelli, Xiangyu Yue
28 Jul 2023 · AAML

Evil from Within: Machine Learning Backdoors through Hardware Trojans
Alexander Warnecke, Julian Speith, Janka Möller, Konrad Rieck, C. Paar
17 Apr 2023 · AAML

UNICORN: A Unified Backdoor Trigger Inversion Framework
Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma
05 Apr 2023 · LLMSV

Defending Against Backdoor Attacks by Layer-wise Feature Analysis
N. Jebreel, J. Domingo-Ferrer, Yiming Li
24 Feb 2023 · AAML

ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms
Minzhou Pan, Yi Zeng, Lingjuan Lyu, Xinyu Lin, R. Jia
22 Feb 2023 · AAML

Dataset Distillation: A Comprehensive Review
Ruonan Yu, Songhua Liu, Xinchao Wang
17 Jan 2023 · DD

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Shuyang Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, ..., Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang
16 Jan 2023 · AAML

Stealthy Backdoor Attack for Code Models
Zhou Yang, Bowen Xu, Jie M. Zhang, Hong Jin Kang, Jieke Shi, Junda He, David Lo
06 Jan 2023 · AAML

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
03 Jan 2023 · DD

Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition
Hasan Hammoud, Shuming Liu, Mohammad Alkhrashi, Fahad Albalawi, Guohao Li
03 Jan 2023 · AAML

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
29 Dec 2022 · AAML

Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
Rui Zhu, Di Tang, Siyuan Tang, Xiaofeng Wang, Haixu Tang
09 Dec 2022 · AAML, FedML

An Embarrassingly Simple Backdoor Attack on Self-supervised Learning
Changjiang Li, Ren Pang, Zhaohan Xi, Tianyu Du, S. Ji, Yuan Yao, Ting Wang
13 Oct 2022 · AAML

Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Di Tang, Rui Zhu, Xiaofeng Wang, Haixu Tang, Yi Chen
12 Oct 2022 · AAML

Data Isotopes for Data Provenance in DNNs
Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov
29 Aug 2022

DECK: Model Hardening for Defending Pervasive Backdoors
Guanhong Tao, Yingqi Liu, Shuyang Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang
18 Jun 2022 · AAML

CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences
Shang Wang, Yansong Gao, Anmin Fu, Zhi-Li Zhang, Yuqing Zhang, W. Susilo, Dongxi Liu
31 May 2022 · AAML

Machine Learning Security against Data Poisoning: Are We There Yet?
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
12 Apr 2022 · AAML

Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang
11 Feb 2022 · AAML

Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu
25 Nov 2021 · AAML

Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes
Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, Tudor Dumitras
26 Oct 2021 · MQ

Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma
22 Oct 2021 · OnRL

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
13 Oct 2021 · AAML

Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma, Huming Qiu, Yansong Gao, Zhi-Li Zhang, A. Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, S. Al-Sarawi, Derek Abbott
20 Aug 2021 · MQ

TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
Esha Sarkar, Michail Maniatakos
14 Aug 2021

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
01 Aug 2021 · SILM, SSL

Hidden Backdoors in Human-Centric Language Models
Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu
01 May 2021 · SILM

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang
16 Mar 2021 · AAML

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang
16 Dec 2020 · AAML

HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios
Hassan Ali, Surya Nepal, S. Kanhere, S. Jha
14 Dec 2020 · AAML

Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
07 Dec 2020 · FedML

Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing
Zhen Xiang, David J. Miller, G. Kesidis
15 Oct 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
21 Jul 2020 · AAML

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
17 Jul 2020 · AAML

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov
08 May 2020 · AAML, FedML, SILM

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
02 Dec 2018 · AAML
