Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
27 October 2021 (arXiv: 2110.14430, AAML)
Papers citing "Adversarial Neuron Pruning Purifies Backdoored Deep Models" (50 of 184 papers shown)

1. "BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection" by Tinghao Xie, Xiangyu Qi, Ping He, Yiming Li, Jiachen T. Wang, Prateek Mittal (AAML), 23 Aug 2023.
2. "Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation" by Yanxin Yang, Ming Hu, Yue Cao, Jun Xia, Yihao Huang, Yang Liu, Mingsong Chen (FedML), 22 Aug 2023.
3. "Temporal-Distributed Backdoor Attack Against Video Based Action Recognition" by Xi Li, Songhe Wang, Rui Huang, Mahanth K. Gowda, G. Kesidis (AAML), 21 Aug 2023.
4. "Backdoor Mitigation by Correcting the Distribution of Neural Activations" by Xi Li, Zhen Xiang, David J. Miller, G. Kesidis (AAML), 18 Aug 2023.
5. "Test-Time Backdoor Defense via Detecting and Repairing" by Jiyang Guan, Jian Liang, Ran He (AAML), 11 Aug 2023.
6. "XGBD: Explanation-Guided Graph Backdoor Detection" by Zihan Guan, Mengnan Du, Ninghao Liu (AAML), 08 Aug 2023.
7. "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" by Haomin Zhuang, Mingxian Yu, Hao Wang, Yang Hua, Jian Li, Xu Yuan (FedML), 08 Aug 2023.
8. "Beating Backdoor Attack at Its Own Game" by Min Liu, Alberto L. Sangiovanni-Vincentelli, Xiangyu Yue (AAML), 28 Jul 2023.
9. "Samplable Anonymous Aggregation for Private Federated Data Analysis" by Kunal Talwar, Shan Wang, Audra McMillan, Vojta Jina, Vitaly Feldman, ..., Congzheng Song, Karl Tarbe, Sebastian Vogt, L. Winstrom, Shundong Zhou (FedML), 27 Jul 2023.
10. "Backdoor Attacks against Voice Recognition Systems: A Survey" by Baochen Yan, Jiahe Lan, Zheng Yan (AAML), 23 Jul 2023.
11. "Adversarial Feature Map Pruning for Backdoor" by Dong Huang, Qingwen Bu (AAML), 21 Jul 2023.
12. "Neuron Sensitivity Guided Test Case Selection for Deep Learning Testing" by Dong Huang, Qi Bu, Yichao Fu, Yuhao Qing, Junjie Chen, Heming Cui (AAML), 20 Jul 2023.
13. "Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples" by Shaokui Wei, Mingda Zhang, H. Zha, Baoyuan Wu (TPM), 20 Jul 2023.
14. "Efficient Backdoor Removal Through Natural Gradient Fine-tuning" by Nazmul Karim, Abdullah Al Arafat, Umar Khalid, Zhishan Guo, Naznin Rahnavard (AAML), 30 Jun 2023.
15. "Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features" by Mingli Zhu, Shaokui Wei, H. Zha, Baoyuan Wu (AAML), 29 Jun 2023.
16. "A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks" by Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li (AAML), 14 Jun 2023.
17. "Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers" by Ruotong Wang, Hongrui Chen, Zihao Zhu, Li Liu, Baoyuan Wu (DiffM), 01 Jun 2023.
18. "UMD: Unsupervised Model Detection for X2X Backdoor Attacks" by Zhen Xiang, Zidi Xiong, Bo-wen Li (AAML), 29 May 2023.
19. "Differentially-Private Decision Trees and Provable Robustness to Data Poisoning" by D. Vos, Jelle Vos, Tianyu Li, Z. Erkin, S. Verwer (FedML), 24 May 2023.
20. "Reconstructive Neuron Pruning for Backdoor Defense" by Yige Li, X. Lyu, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Yugang Jiang (AAML), 24 May 2023.
21. "Backdoor Attack with Sparse and Invisible Trigger" by Yinghua Gao, Yiming Li, Xueluan Gong, Zhifeng Li, Shutao Xia, Qianqian Wang (AAML), 11 May 2023.
22. "Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks" by Nils Lukas, Florian Kerschbaum, 07 May 2023.
23. "Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization" by Mingli Zhu, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu (AAML), 24 Apr 2023.
24. "UNICORN: A Unified Backdoor Trigger Inversion Framework" by Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma (LLMSV), 05 Apr 2023.
25. "Detecting Backdoors in Pre-trained Encoders" by Shiwei Feng, Guanhong Tao, Shuyang Cheng, Guangyu Shen, Xiangzhe Xu, Yingqi Liu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang, 23 Mar 2023.
26. "Backdoor Defense via Adaptively Splitting Poisoned Dataset" by Kuofeng Gao, Yang Bai, Jindong Gu, Yong-Liang Yang, Shutao Xia (AAML), 23 Mar 2023.
27. "Black-box Backdoor Defense via Zero-shot Image Purification" by Yucheng Shi, Mengnan Du, Xuansheng Wu, Zihan Guan, Jin Sun, Ninghao Liu, 21 Mar 2023.
28. "Influencer Backdoor Attack on Semantic Segmentation" by Haoheng Lan, Jindong Gu, Philip Torr, Hengshuang Zhao (AAML), 21 Mar 2023.
29. "AdaptGuard: Defending Against Universal Attacks for Model Adaptation" by Lijun Sheng, Jian Liang, Ran He, Zilei Wang, Tien-Ping Tan (AAML), 19 Mar 2023.
30. "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" by Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang (AAML), 06 Mar 2023.
31. "FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases" by Chong Fu, Xuhong Zhang, S. Ji, Ting Wang, Peng Lin, Yanghe Feng, Jianwei Yin (AAML), 28 Feb 2023.
32. "SATBA: An Invisible Backdoor Attack Based On Spatial Attention" by Huasong Zhou, Xiaowei Xu, Zhenyu Wang, Leon Bevan Bullock (AAML), 25 Feb 2023.
33. "Analyzing And Editing Inner Mechanisms Of Backdoored Language Models" by Max Lamparth, Anka Reuel (KELM), 24 Feb 2023.
34. "Poisoning Web-Scale Training Datasets is Practical" by Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr (SILM), 20 Feb 2023.
35. "SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency" by Junfeng Guo, Yiming Li, Xun Chen, Hanqing Guo, Lichao Sun, Cong Liu (AAML, MLAU), 07 Feb 2023.
36. "Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks" by Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, Minhao Cheng (FedML), 03 Feb 2023.
37. "Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks" by Xiaoyun Xu, Oguzhan Ersoy, S. Picek (AAML), 01 Feb 2023.
38. "BackdoorBox: A Python Toolbox for Backdoor Learning" by Yiming Li, Mengxi Ya, Yang Bai, Yong Jiang, Shutao Xia (AAML), 01 Feb 2023.
39. "Distilling Cognitive Backdoor Patterns within an Image" by Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey (AAML), 26 Jan 2023.
40. "BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense" by Shuyang Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, ..., Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang (AAML), 16 Jan 2023.
41. "Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition" by Hasan Hammoud, Shuming Liu, Mohammad Alkhrashi, Fahad Albalawi, Guohao Li (AAML), 03 Jan 2023.
42. "How to Backdoor Diffusion Models?" by Sheng-Yen Chou, Pin-Yu Chen, Tsung-Yi Ho (DiffM, SILM), 11 Dec 2022.
43. "Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning" by Marissa Connor, Vincent Emanuele (SILM, AAML), 05 Dec 2022.
44. "Backdoor Vulnerabilities in Normally Trained Deep Learning Models" by Guanhong Tao, Zhenting Wang, Shuyang Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang (SILM), 29 Nov 2022.
45. "Backdoor Cleansing with Unlabeled Data" by Lu Pang, Tao Sun, Haibin Ling, Chao Chen (AAML), 22 Nov 2022.
46. "Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification" by Wenli Sun, Xinyang Jiang, Shuguang Dou, Dongsheng Li, Duoqian Miao, Cheng Deng, Cairong Zhao (AAML), 20 Nov 2022.
47. "Backdoor Attacks on Time Series: A Generative Approach" by Yujing Jiang, Xingjun Ma, S. Erfani, James Bailey (AAML, AI4TS), 15 Nov 2022.
48. "M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models" by Linshan Hou, Zhongyun Hua, Yuhong Li, Yifeng Zheng, Leo Yu Zhang (AAML), 03 Nov 2022.
49. "Untargeted Backdoor Attack against Object Detection" by C. Luo, Yiming Li, Yong Jiang, Shutao Xia (AAML), 02 Nov 2022.
50. "Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class" by Khoa D. Doan, Yingjie Lao, Ping Li, 17 Oct 2022.