ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Adversarial Neuron Pruning Purifies Backdoored Deep Models
arXiv:2110.14430, 27 October 2021
Dongxian Wu, Yisen Wang (AAML)

Papers citing "Adversarial Neuron Pruning Purifies Backdoored Deep Models"

50 / 184 papers shown
  • Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
    Nay Myat Min, Long H. Pham, Yige Li, Jun Sun (AAML, 15 Apr 2025)
  • Defending Deep Neural Networks against Backdoor Attacks via Module Switching
    Weijun Li, Ansh Arora, Xuanli He, Mark Dras, Qiongkai Xu (AAML, MoMe, 08 Apr 2025)
  • DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data
    Dorde Popovic, Amin Sadeghi, Ting Yu, Sanjay Chawla, Issa M. Khalil (AAML, 27 Mar 2025)
  • Prototype Guided Backdoor Defense
    Venkat Adithya Amula, Sunayana Samavedam, Saurabh Saini, Avani Gupta, Narayanan P J (AAML, 26 Mar 2025)
  • Seal Your Backdoor with Variational Defense
    Ivan Sabolić, Matej Grcić, Sinisa Segvic (AAML, 11 Mar 2025)
  • NaviDet: Efficient Input-level Backdoor Detection on Text-to-Image Synthesis via Neuron Activation Variation
    Shengfang Zhai, Jiajun Li, Yue Liu, Huanran Chen, Zhihua Tian, Wenjie Qu, Qingni Shen, Ruoxi Jia, Yinpeng Dong, Jiaheng Zhang (AAML, 09 Mar 2025)
  • SecureGaze: Defending Gaze Estimation Against Backdoor Attacks
    Lingyu Du, Yupei Liu, Jinyuan Jia, Guohao Lan (AAML, 27 Feb 2025)
  • Neural Antidote: Class-Wise Prompt Tuning for Purifying Backdoors in Pre-trained Vision-Language Models
    Jiawei Kong, Hao Fang, Sihang Guo, Chenxi Qing, Bin Chen, Bin Wang, Shu-Tao Xia (AAML, VLM, 26 Feb 2025)
  • Class-Conditional Neural Polarizer: A Lightweight and Effective Backdoor Defense by Purifying Poisoned Features
    Mingli Zhu, Shaokui Wei, Hongyuan Zha, Baoyuan Wu (AAML, 23 Feb 2025)
  • BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model
    Weilin Lin, Nanjun Zhou, Y. Wang, Jianze Li, Hui Xiong, Li Liu (AAML, DiffM, 17 Feb 2025)
  • MADE: Graph Backdoor Defense with Masked Unlearning
    Xiao Lin, Mingjie Li, Yisen Wang (AAML, 03 Jan 2025)
  • Cut the Deadwood Out: Post-Training Model Purification with Selective Module Substitution
    Yao Tong, Weijun Li, Xuanli He, Haolan Zhan, Qiongkai Xu (AAML, 31 Dec 2024)
  • Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning
    Zhifang Zhang, Shuo He, Bingquan Shen, Lei Feng (AAML, 29 Dec 2024)
  • A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification
    Yuan Ma, Xu Ma, Jiankang Wei, Jinmeng Tang, Xiaoyu Zhang, Yilun Lyu, Kehao Chen, Jingtong Huang (22 Dec 2024)
  • UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models
    Yuning Han, Bingyin Zhao, Rui Chu, Feng Luo, Biplab Sikdar, Yingjie Lao (DiffM, AAML, 16 Dec 2024)
  • Data Free Backdoor Attacks
    Bochuan Cao, Jinyuan Jia, Chuxuan Hu, Wenbo Guo, Zhen Xiang, Jinghui Chen, Bo-wen Li, Dawn Song (AAML, 09 Dec 2024)
  • Robust and Transferable Backdoor Attacks Against Deep Image Compression With Selective Frequency Prior
    Yi Yu, Yufei Wang, Wenhan Yang, Lanqing Guo, Shijian Lu, Ling-yu Duan, Yap-Peng Tan, Alex C. Kot (AAML, 02 Dec 2024)
  • FLARE: Towards Universal Dataset Purification against Backdoor Attacks
    Linshan Hou, Wei Luo, Zhongyun Hua, Songhua Chen, L. Zhang, Yiming Li (AAML, 29 Nov 2024)
  • Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-grained Knowledge Alignment
    Alvi Md Ishmam, Christopher Thomas (AAML, 23 Nov 2024)
  • Reliable Poisoned Sample Detection against Backdoor Attacks Enhanced by Sharpness Aware Minimization
    Mingda Zhang, Mingli Zhu, Zihao Zhu, Baoyuan Wu (AAML, 18 Nov 2024)
  • CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization
    Nay Myat Min, Long H. Pham, Yige Li, Tianlong Chen (AAML, 18 Nov 2024)
  • BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
    Haiyang Yu, Tian Xie, Jiaping Gui, Pengyang Wang, P. Yi, Yue Wu (17 Nov 2024)
  • Defending Deep Regression Models against Backdoor Attacks
    Lingyu Du, Yupei Liu, Jinyuan Jia, Guohao Lan (AAML, 07 Nov 2024)
  • Identify Backdoored Model in Federated Learning via Individual Unlearning
    Jiahao Xu, Zikai Zhang, Rui Hu (FedML, AAML, 01 Nov 2024)
  • Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models
    Yige Li, Hanxun Huang, Jiaming Zhang, Xingjun Ma, Yu-Gang Jiang (AAML, 25 Oct 2024)
  • Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace
    Jinluan Yang, Anke Tang, Didi Zhu, Zhengyu Chen, Li Shen, Fei Wu (MoMe, AAML, 17 Oct 2024)
  • Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning
    Hassan Ali, Surya Nepal, S. Kanhere, S. Jha (AAML, FedML, 15 Oct 2024)
  • Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
    Rui Min, Zeyu Qin, Nevin L. Zhang, Li Shen, Minhao Cheng (AAML, 13 Oct 2024)
  • Using Interleaved Ensemble Unlearning to Keep Backdoors at Bay for Finetuning Vision Transformers
    Zeyu Michael Li (AAML, 01 Oct 2024)
  • BadHMP: Backdoor Attack against Human Motion Prediction
    Chaohui Xu, Si Wang, Chip-Hong Chang (AAML, 29 Sep 2024)
  • Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery
    Ching-Chun Chang, Kai Gao, Shuying Xu, Anastasia Kordoni, Christopher Leckie, Isao Echizen (29 Sep 2024)
  • Adversarial Backdoor Defense in CLIP
    Junhao Kuang, Siyuan Liang, Jiawei Liang, Kuanrong Liu, Xiaochun Cao (AAML, 24 Sep 2024)
  • Obliviate: Neutralizing Task-agnostic Backdoors within the Parameter-efficient Fine-tuning Paradigm
    Jaehan Kim, Minkyoo Song, S. Na, Seungwon Shin (AAML, 21 Sep 2024)
  • Data Poisoning and Leakage Analysis in Federated Learning
    Wenqi Wei, Tiansheng Huang, Zachary Yahn, Anoop Singhal, Margaret Loper, Ling Liu (FedML, SILM, 19 Sep 2024)
  • On the Weaknesses of Backdoor-based Model Watermarking: An Information-theoretic Perspective
    Aoting Hu, Yanzhi Chen, Renjie Xie, Adrian Weller (10 Sep 2024)
  • TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors
    Yichuan Mo, Hui Huang, Mingjie Li, Ang Li, Yisen Wang (AAML, DiffM, 09 Sep 2024)
  • Fisher Information guided Purification against Backdoor Attacks
    Nazmul Karim, Abdullah Al Arafat, Adnan Siraj Rakin, Zhishan Guo, Nazanin Rahnavard (AAML, 01 Sep 2024)
  • Fusing Pruned and Backdoored Models: Optimal Transport-based Data-free Backdoor Mitigation
    Weilin Lin, Li Liu, Jianze Li, Hui Xiong (AAML, 28 Aug 2024)
  • VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification
    Yungi Cho, Woorim Han, Miseon Yu, Younghan Lee, Ho Bae, Y. Paek (AAML, FedML, 28 Aug 2024)
  • Protecting against simultaneous data poisoning attacks
    Neel Alex, Shoaib Ahmed Siddiqui, Amartya Sanyal, David M. Krueger (AAML, 23 Aug 2024)
  • Mitigating Backdoor Attacks in Federated Learning via Flipping Weight Updates of Low-Activation Input Neurons
    Binbin Ding, Penghui Yang, Zeqing Ge, Shengjun Huang (AAML, FedML, 16 Aug 2024)
  • BadMerging: Backdoor Attacks Against Model Merging
    Jinghuai Zhang, Jianfeng Chi, Zheng Li, Kunlin Cai, Yang Zhang, Yuan Tian (MoMe, FedML, AAML, 14 Aug 2024)
  • Revocable Backdoor for Deep Model Trading
    Yiran Xu, Nan Zhong, Zhenxing Qian, Xinpeng Zhang (AAML, 01 Aug 2024)
  • Flatness-aware Sequential Learning Generates Resilient Backdoors
    Hoang Pham, The-Anh Ta, Anh Tran, Khoa D. Doan (FedML, AAML, 20 Jul 2024)
  • UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening
    Shuyang Cheng, Guangyu Shen, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Hanxi Guo, Shiqing Ma, Xiangyu Zhang (AAML, 16 Jul 2024)
  • Augmented Neural Fine-Tuning for Efficient Backdoor Purification
    Nazmul Karim, Abdullah Al Arafat, Umar Khalid, Zhishan Guo, Nazanin Rahnavard (AAML, 14 Jul 2024)
  • CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack
    Hanfeng Xia, Haibo Hong, Ruili Wang (AAML, 23 Jun 2024)
  • Composite Concept Extraction through Backdooring
    Banibrata Ghosh, Haripriya Harikumar, Khoa D. Doan, Svetha Venkatesh, Santu Rana (19 Jun 2024)
  • DLP: towards active defense against backdoor attacks with decoupled learning process
    Zonghao Ying, Bin Wu (AAML, 18 Jun 2024)
  • CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models
    Yuetai Li, Zhangchen Xu, Fengqing Jiang, Luyao Niu, D. Sahabandu, Bhaskar Ramasubramanian, Radha Poovendran (SILM, AAML, 18 Jun 2024)