ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples (arXiv:2307.10562)

20 July 2023
Shaokui Wei, Ruotong Wang, H. Zha, Baoyuan Wu
TPM

Papers citing "Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples"

50 / 53 papers shown
Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace
Jinluan Yang, Anke Tang, Didi Zhu, Zhengyu Chen, Li Shen, Leilei Gan
MoMe, AAML · 4 citations · 17 Oct 2024
Threats, Attacks, and Defenses in Machine Unlearning: A Survey
Ziyao Liu, Huanyi Ye, Chen Chen, Yongsen Zheng, K. Lam
AAML, MU · 30 citations · 20 Mar 2024
MultiDelete for Multimodal Machine Unlearning
Jiali Cheng, Hadi Amiri
MU · 7 citations · 18 Nov 2023
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy
Zihao Zhu, Ruotong Wang, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu
AAML, SILM · 10 citations · 14 Jul 2023
Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features
Mingli Zhu, Shaokui Wei, H. Zha, Baoyuan Wu
AAML · 37 citations · 29 Jun 2023
Reconstructive Neuron Pruning for Backdoor Defense
Yige Li, X. Lyu, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Yue Liu, Yugang Jiang
AAML · 44 citations · 24 May 2023
Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization
Mingli Zhu, Shaokui Wei, Li Shen, Yanbo Fan, Baoyuan Wu
AAML · 53 citations · 24 Apr 2023
SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency
Junfeng Guo, Yiming Li, Xun Chen, Hanqing Guo, Lichao Sun, Cong Liu
AAML, MLAU · 101 citations · 07 Feb 2023
Imperceptible and Robust Backdoor Attack in 3D Point Cloud
Kuofeng Gao, Jiawang Bai, Baoyuan Wu, Mengxi Ya, Shutao Xia
AAML, 3DPC · 32 citations · 17 Aug 2022
Data-free Backdoor Removal based on Channel Lipschitzness
Runkai Zheng, Rong Tang, Jianze Li, Li Liu
AAML · 105 citations · 05 Aug 2022
Prior-Guided Adversarial Initialization for Fast Adversarial Training
Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
AAML · 38 citations · 18 Jul 2022
One-shot Neural Backdoor Erasing via Adversarial Weight Masking
Shuwen Chai, Jinghui Chen
AAML · 34 citations · 10 Jul 2022
Removing Batch Normalization Boosts Adversarial Training
Haotao Wang, Aston Zhang, Shuai Zheng, Xingjian Shi, Mu Li, Zhangyang Wang
42 citations · 04 Jul 2022
BackdoorBench: A Comprehensive Benchmark of Backdoor Learning
Baoyuan Wu, Hongrui Chen, Ruotong Wang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chaoxiao Shen
ELM, AAML · 141 citations · 25 Jun 2022
BagFlip: A Certified Defense against Data Poisoning
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
AAML · 24 citations · 26 May 2022
Towards A Proactive ML Approach for Detecting Backdoor Poison Samples
Xiangyu Qi, Tinghao Xie, Jiachen T. Wang, Tong Wu, Saeed Mahloujifar, Prateek Mittal
AAML · 49 citations · 26 May 2022
LAS-AT: Adversarial Training with Learnable Attack Strategy
Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
AAML · 134 citations · 13 Mar 2022
On the Effectiveness of Adversarial Training against Backdoor Attacks
Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama
AAML · 22 citations · 22 Feb 2022
Progressive Backdoor Erasing via connecting Backdoor and Adversarial Attacks
Bingxu Mu, Zhenxing Niu, Le Wang, Xue Wang, Rong Jin, G. Hua
AAML · 15 citations · 13 Feb 2022
Backdoor Defense via Decoupling the Training Process
Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren
AAML, FedML · 191 citations · 05 Feb 2022
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
Junfeng Guo, Ang Li, Cong Liu
AAML · 75 citations · 28 Oct 2021
Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
AAML · 278 citations · 27 Oct 2021
Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Yue Liu, Xingjun Ma
OnRL · 330 citations · 22 Oct 2021
Boosting Fast Adversarial Training with Learnable Adversarial Initialization
Xiaojun Jia, Yong Zhang, Baoyuan Wu, Jue Wang, Xiaochun Cao
AAML · 54 citations · 11 Oct 2021
Adversarial Unlearning of Backdoors via Implicit Hypergradient
Yi Zeng, Si-An Chen, Won Park, Z. Morley Mao, Ming Jin, R. Jia
AAML · 174 citations · 07 Oct 2021
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
SILM · 127 citations · 16 Jun 2021
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
Yi Zeng, Won Park, Z. Morley Mao, R. Jia
AAML · 210 citations · 07 Apr 2021
Black-box Detection of Backdoor Attacks with Limited Information and Data
Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
AAML · 113 citations · 24 Mar 2021
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
Yige Li, Lingjuan Lyu, Nodens Koren, X. Lyu, Yue Liu, Xingjun Ma
AAML, FedML · 433 citations · 15 Jan 2021
Invisible Backdoor Attack with Sample-Specific Triggers
Yuezun Li, Yiming Li, Baoyuan Wu, Longkang Li, Ran He, Siwei Lyu
AAML, DiffM · 474 citations · 07 Dec 2020
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong
AAML · 74 citations · 07 Dec 2020
Input-Aware Dynamic Backdoor Attack
A. Nguyen, Anh Tran
AAML · 427 citations · 16 Oct 2020
Computing Systems for Autonomous Driving: State-of-the-Art and Challenges
Liangkai Liu, Sidi Lu, Ren Zhong, Baofu Wu, Yongtao Yao, Qingyan Zhang, Weisong Shi
269 citations · 30 Sep 2020
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
Cassidy Laidlaw, Sahil Singla, Soheil Feizi
AAML, OOD · 185 citations · 22 Jun 2020
RAB: Provable Robustness Against Backdoor Attacks
Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, Yue Liu
AAML · 161 citations · 19 Mar 2020
Fast is better than free: Revisiting adversarial training
Eric Wong, Leslie Rice, J. Zico Kolter
AAML, OOD · 1,167 citations · 12 Jan 2020
Efficient Adversarial Training with Transferable Adversarial Examples
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, A. Prakash
AAML · 108 citations · 27 Dec 2019
Adversarial Training and Robustness for Multiple Perturbations
Florian Tramèr, Dan Boneh
AAML, SILM · 376 citations · 30 Apr 2019
A new Backdoor Attack in CNNs by training set corruption without label poisoning
Mauro Barni, Kassem Kallas, B. Tondi
AAML · 353 citations · 12 Feb 2019
Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Michael I. Jordan
2,525 citations · 24 Jan 2019
Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava
AAML · 786 citations · 09 Nov 2018
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Kang Liu, Brendan Dolan-Gavitt, S. Garg
AAML · 1,028 citations · 30 May 2018
Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
AAML · 1,772 citations · 30 May 2018
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi, Wenjie Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML · 1,080 citations · 03 Apr 2018
IoT Security Techniques Based on Machine Learning
Liang Xiao, Xiaoyue Wan, Xiaozhen Lu, Yanyong Zhang, Di Wu
571 citations · 19 Jan 2018
Spatially Transformed Adversarial Examples
Chaowei Xiao, Jun-Yan Zhu, Yue Liu, Warren He, M. Liu, D. Song
AAML · 520 citations · 08 Jan 2018
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song
AAML, SILM · 1,822 citations · 15 Dec 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 11,962 citations · 19 Jun 2017
Identity Mappings in Deep Residual Networks
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
10,149 citations · 16 Mar 2016
Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
MedIm · 192,638 citations · 10 Dec 2015