Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering

9 November 2018
Bryant Chen
Wilka Carvalho
Wenjie Li
Heiko Ludwig
Benjamin Edwards
Chengyao Chen
Ziqiang Cao
Biplav Srivastava
    AAML
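
The paper's central technique is activation clustering: for each class label, cluster the last-hidden-layer activations of the training samples and flag classes whose activations split into a small, well-separated cluster, since label-flipped poisoned samples tend to form such a cluster. Below is a minimal sketch of that idea, assuming the activations and labels have already been extracted as NumPy arrays; the flag_suspicious_classes helper, the thresholds, and the ICA dimensionality are illustrative assumptions, not the authors' reference implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA
from sklearn.metrics import silhouette_score

def flag_suspicious_classes(activations, labels, n_components=10,
                            size_threshold=0.35, silhouette_threshold=0.10):
    """Cluster last-hidden-layer activations per class; flag classes whose
    activations form a small, cleanly separated sub-cluster (illustrative
    thresholds, not the paper's exact criteria)."""
    suspicious = []
    for cls in np.unique(labels):
        acts = activations[labels == cls]
        if len(acts) < 2 * n_components:
            continue  # too few samples to cluster reliably
        # Reduce dimensionality before clustering (the paper uses ICA).
        reduced = FastICA(n_components=n_components, max_iter=1000).fit_transform(acts)
        assignments = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
        minority_frac = min(np.mean(assignments == 0), np.mean(assignments == 1))
        separation = silhouette_score(reduced, assignments)
        # A small but well-separated cluster suggests poisoned samples.
        if minority_frac < size_threshold and separation > silhouette_threshold:
            suspicious.append(int(cls))
    return suspicious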

Papers citing "Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering"

50 / 175 papers shown
A Chaos Driven Metric for Backdoor Attack Detection
Hema Karnam Surendrababu
Nithin Nagaraj
AAML
41
0
0
06 May 2025
SFIBA: Spatial-based Full-target Invisible Backdoor Attacks
Yangxu Yin
H. Chen
Yudong Gao
Peng Sun
Zhiyu Li
Wen Liu
AAML
45
0
0
29 Apr 2025
Secure Transfer Learning: Training Clean Models Against Backdoor in (Both) Pre-trained Encoders and Downstream Datasets
Wenjie Qu
Yuxuan Zhou
Tianyu Li
Minghui Li
Shengshan Hu
Wei Luo
L. Zhang
AAML
SILM
43
0
0
16 Apr 2025
A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models
Vu Tuan Truong
Long Bao Le
DiffM
AAML
276
0
0
26 Feb 2025
PCAP-Backdoor: Backdoor Poisoning Generator for Network Traffic in CPS/IoT Environments
Ajesh Koyatan Chathoth
Stephen Lee
42
0
0
26 Jan 2025
A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification
Yuan Ma
Xu Ma
Jiankang Wei
Jinmeng Tang
Xiaoyu Zhang
Yilun Lyu
Kehao Chen
Jingtong Huang
88
0
0
22 Dec 2024
UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models
Yuning Han
Bingyin Zhao
Rui Chu
Feng Luo
Biplab Sikdar
Yingjie Lao
DiffM
AAML
98
1
0
16 Dec 2024
BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
Haiyang Yu
Tian Xie
Jiaping Gui
Pengyang Wang
P. Yi
Yue Wu
56
1
0
17 Nov 2024
Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging
Rui Luo
Jie Bao
Zhixin Zhou
Chuangyin Dang
MedIm
AAML
45
5
0
07 Nov 2024
Persistent Backdoor Attacks in Continual Learning
Zhen Guo
Abhinav Kumar
R. Tourani
AAML
35
3
0
20 Sep 2024
Attacks and Defenses for Generative Diffusion Models: A Comprehensive Survey
V. T. Truong
Luan Ba Dang
Long Bao Le
DiffM
MedIm
58
17
0
06 Aug 2024
Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks
Quang H. Nguyen
Nguyen Ngoc-Hieu
The-Anh Ta
Thanh Nguyen-Tang
Kok-Seng Wong
Hoang Thanh-Tung
Khoa D. Doan
AAML
33
2
0
15 Jul 2024
Model-agnostic clean-label backdoor mitigation in cybersecurity environments
Giorgio Severi
Simona Boboila
J. Holodnak
K. Kratkiewicz
Rauf Izmailov
Alina Oprea
AAML
35
1
0
11 Jul 2024
DLP: towards active defense against backdoor attacks with decoupled learning process
Zonghao Ying
Bin Wu
AAML
46
6
0
18 Jun 2024
An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection
Shenao Yan
Shen Wang
Yue Duan
Hanbin Hong
Kiho Lee
Doowon Kim
Yuan Hong
AAML
SILM
43
17
0
10 Jun 2024
PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection
Wei Li
Pin-Yu Chen
Sijia Liu
Ren Wang
AAML
49
3
0
09 Jun 2024
Stress-Testing Capability Elicitation With Password-Locked Models
Ryan Greenblatt
Fabien Roger
Dmitrii Krasheninnikov
David M. Krueger
43
14
0
29 May 2024
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi
Jeffrey Q. Jiang
Sunay Bhat
Gregory Pottie
AAML
31
0
0
28 May 2024
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks
Zhenxing Niu
Yuyao Sun
Qiguang Miao
Rong Jin
Gang Hua
AAML
44
6
0
28 May 2024
Partial train and isolate, mitigate backdoor attack
Yong Li
Han Gao
AAML
34
0
0
26 May 2024
IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency
Linshan Hou
Ruili Feng
Zhongyun Hua
Wei Luo
Leo Yu Zhang
Yiming Li
AAML
46
19
0
16 May 2024
LSP Framework: A Compensatory Model for Defeating Trigger Reverse Engineering via Label Smoothing Poisoning
Beichen Li
Yuanfang Guo
Heqi Peng
Yangxi Li
Yun-an Wang
26
0
0
19 Apr 2024
Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks
A. Unnervik
Hatef Otroshi-Shahreza
Anjith George
Sébastien Marcel
AAML
SILM
43
0
0
28 Feb 2024
Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm
Yanqi Qiao
Dazhuang Liu
Rui Wang
Kaitai Liang
AAML
28
1
0
23 Feb 2024
On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Changjiang Li
Ren Pang
Bochuan Cao
Zhaohan Xi
Jinghui Chen
Shouling Ji
Ting Wang
AAML
40
6
0
14 Dec 2023
Prototypical Self-Explainable Models Without Re-training
Srishti Gautam
Ahcène Boubekki
Marina M.-C. Höhne
Michael C. Kampffmeyer
34
2
0
13 Dec 2023
Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang
Q. Nguyen
Chee Seng Chan
Khoa D. Doan
AAML
DiffM
32
0
0
31 Aug 2023
Benchmarks for Detecting Measurement Tampering
Fabien Roger
Ryan Greenblatt
Max Nadeau
Buck Shlegeris
Nate Thomas
28
2
0
29 Aug 2023
Adaptive White-Box Watermarking with Self-Mutual Check Parameters in Deep Neural Networks
Zhenzhe Gao
Z. Yin
Hongjian Zhan
Heng Yin
Yue Lu
AAML
23
0
0
22 Aug 2023
XGBD: Explanation-Guided Graph Backdoor Detection
Zihan Guan
Mengnan Du
Ninghao Liu
AAML
32
9
0
08 Aug 2023
Beating Backdoor Attack at Its Own Game
Min Liu
Alberto L. Sangiovanni-Vincentelli
Xiangyu Yue
AAML
65
11
0
28 Jul 2023
A LLM Assisted Exploitation of AI-Guardian
Nicholas Carlini
ELM
SILM
24
15
0
20 Jul 2023
Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives
Daniele Lunghi
A. Simitsis
O. Caelen
Gianluca Bontempi
AAML
FaML
42
4
0
03 Jul 2023
DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation
Zhicong Yan
Shenghong Li
Ruijie Zhao
Yuan Tian
Yuanyuan Zhao
AAML
39
11
0
13 Jun 2023
Poisoning Network Flow Classifiers
Giorgio Severi
Simona Boboila
Alina Oprea
J. Holodnak
K. Kratkiewicz
J. Matterer
AAML
40
4
0
02 Jun 2023
Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation
Xuanli He
Qiongkai Xu
Jun Wang
Benjamin I. P. Rubinstein
Trevor Cohn
AAML
37
18
0
19 May 2023
A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
Xiaowei Huang
Wenjie Ruan
Wei Huang
Gao Jin
Yizhen Dong
...
Sihao Wu
Peipei Xu
Dengyu Wu
André Freitas
Mustafa A. Mustafa
ALM
49
83
0
19 May 2023
Evil from Within: Machine Learning Backdoors through Hardware Trojans
Alexander Warnecke
Julian Speith
Janka Möller
Konrad Rieck
C. Paar
AAML
24
3
0
17 Apr 2023
UNICORN: A Unified Backdoor Trigger Inversion Framework
Zhenting Wang
Kai Mei
Juan Zhai
Shiqing Ma
LLMSV
39
44
0
05 Apr 2023
Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
Ajinkya Tejankar
Maziar Sanjabi
Qifan Wang
Sinong Wang
Hamed Firooz
Hamed Pirsiavash
L Tan
AAML
30
19
0
04 Apr 2023
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Hritik Bansal
Nishad Singhi
Yu Yang
Fan Yin
Aditya Grover
Kai-Wei Chang
AAML
34
42
0
06 Mar 2023
Defending Against Backdoor Attacks by Layer-wise Feature Analysis
N. Jebreel
J. Domingo-Ferrer
Yiming Li
AAML
33
10
0
24 Feb 2023
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms
Minzhou Pan
Yi Zeng
Lingjuan Lyu
X. Lin
R. Jia
AAML
29
35
0
22 Feb 2023
Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini
Matthew Jagielski
Christopher A. Choquette-Choo
Daniel Paleka
Will Pearce
Hyrum S. Anderson
Andreas Terzis
Kurt Thomas
Florian Tramèr
SILM
31
182
0
20 Feb 2023
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM
AAML
33
20
0
14 Feb 2023
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
Gorka Abad
Oguzhan Ersoy
S. Picek
A. Urbieta
AAML
23
17
0
13 Feb 2023
Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
Xiaoyun Xu
Oguzhan Ersoy
S. Picek
AAML
32
2
0
01 Feb 2023
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering
Rui Zhu
Di Tang
Siyuan Tang
Guanhong Tao
Shiqing Ma
Xiaofeng Wang
Haixu Tang
DD
23
3
0
29 Jan 2023
Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Yusuke Kawamoto
Kazumasa Miyake
K. Konishi
Y. Oiwa
29
4
0
18 Jan 2023
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Shuyang Cheng
Guanhong Tao
Yingqi Liu
Shengwei An
Xiangzhe Xu
...
Guangyu Shen
Kaiyuan Zhang
Qiuling Xu
Shiqing Ma
Xiangyu Zhang
AAML
24
15
0
16 Jan 2023