ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2110.03735 · Cited By
Adversarial Unlearning of Backdoors via Implicit Hypergradient

7 October 2021
Yi Zeng, Si-An Chen, Won Park, Z. Morley Mao, Ming Jin, R. Jia
AAML

Papers citing "Adversarial Unlearning of Backdoors via Implicit Hypergradient"

35 / 35 papers shown

1. Secure Transfer Learning: Training Clean Models Against Backdoor in (Both) Pre-trained Encoders and Downstream Datasets
   Yuhang Zhang, Yuxuan Zhou, Tianyu Li, Minghui Li, Shengshan Hu, Wei Luo, L. Zhang (AAML, SILM) · 16 Apr 2025
2. AMUN: Adversarial Machine UNlearning
   A. Boroojeny, Hari Sundaram, Varun Chandrasekaran (MU, AAML) · 02 Mar 2025
3. Bad-PFL: Exploring Backdoor Attacks against Personalized Federated Learning
   Mingyuan Fan, Zhanyi Hu, Fuyi Wang, Cen Chen (SILM) · 22 Jan 2025
4. BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
   Haiyang Yu, Tian Xie, Jiaping Gui, Pengyang Wang, P. Yi, Yue Wu · 17 Nov 2024
5. AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment
   Pankayaraj Pathmanathan, Udari Madhushani Sehwag, Michael-Andrei Panaitescu-Liess, Furong Huang (SILM, AAML) · 15 Oct 2024
6. Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
   Rui Min, Zeyu Qin, Nevin L. Zhang, Li Shen, Minhao Cheng (AAML) · 13 Oct 2024
7. Infighting in the Dark: Multi-Label Backdoor Attack in Federated Learning
   Ye Li, Yanchao Zhao, Chengcheng Zhu, Jiale Zhang (AAML) · 29 Sep 2024
8. Persistent Backdoor Attacks in Continual Learning
   Zhen Guo, Abhinav Kumar, R. Tourani (AAML) · 20 Sep 2024
9. NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise
   Abdullah Arafat Miah, Kaan Icer, Resit Sendag, Yu Bi (AAML, DiffM) · 03 Sep 2024
10. Towards Unified Robustness Against Both Backdoor and Adversarial Attacks
    Zhenxing Niu, Yuyao Sun, Qiguang Miao, Rong Jin, Gang Hua (AAML) · 28 May 2024
11. Threats, Attacks, and Defenses in Machine Unlearning: A Survey
    Ziyao Liu, Huanyi Ye, Chen Chen, Yongsen Zheng, K. Lam (AAML, MU) · 20 Mar 2024
12. XGBD: Explanation-Guided Graph Backdoor Detection
    Zihan Guan, Mengnan Du, Ninghao Liu (AAML) · 08 Aug 2023
13. Backdoor Learning on Sequence to Sequence Models
    Lichang Chen, Minhao Cheng, Heng-Chiao Huang (SILM) · 03 May 2023
14. UNICORN: A Unified Backdoor Trigger Inversion Framework
    Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma (LLMSV) · 05 Apr 2023
15. Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder
    Tao Sun, Lu Pang, Chao Chen, Haibin Ling (AAML) · 27 Mar 2023
16. Black-box Backdoor Defense via Zero-shot Image Purification
    Yucheng Shi, Mengnan Du, Xuansheng Wu, Zihan Guan, Jin Sun, Ninghao Liu · 21 Mar 2023
17. CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
    Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang (AAML) · 06 Mar 2023
18. Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
    Jialai Wang, Ziyuan Zhang, Meiqi Wang, Han Qiu, Tianwei Zhang, Qi Li, Zongpeng Li, Tao Wei, Chao Zhang (AAML) · 27 Feb 2023
19. Defending Against Backdoor Attacks by Layer-wise Feature Analysis
    N. Jebreel, J. Domingo-Ferrer, Yiming Li (AAML) · 24 Feb 2023
20. ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms
    Minzhou Pan, Yi Zeng, Lingjuan Lyu, X. Lin, R. Jia (AAML) · 22 Feb 2023
21. Towards Understanding How Self-training Tolerates Data Backdoor Poisoning
    Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu · 20 Jan 2023
22. Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition
    Hasan Hammoud, Shuming Liu, Mohammad Alkhrashi, Fahad Albalawi, Guohao Li (AAML) · 03 Jan 2023
23. Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network in Edge Computing
    Tian Dong, Ziyuan Zhang, Han Qiu, Tianwei Zhang, Hewu Li, T. Wang (AAML) · 22 Dec 2022
24. Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
    Rui Zhu, Di Tang, Siyuan Tang, Xiaofeng Wang, Haixu Tang (AAML, FedML) · 09 Dec 2022
25. Backdoor Cleansing with Unlabeled Data
    Lu Pang, Tao Sun, Haibin Ling, Chao Chen (AAML) · 22 Nov 2022
26. How to Sift Out a Clean Data Subset in the Presence of Data Poisoning?
    Yi Zeng, Minzhou Pan, Himanshu Jahagirdar, Ming Jin, Lingjuan Lyu, R. Jia (AAML) · 12 Oct 2022
27. BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets
    Chen Gong, Zhou Yang, Yunru Bai, Junda He, Jieke Shi, ..., Arunesh Sinha, Bowen Xu, Xinwen Hou, David Lo, Guoliang Fan (AAML, OffRL) · 07 Oct 2022
28. Augmentation Backdoors
    J. Rance, Yiren Zhao, Ilia Shumailov, Robert D. Mullins (AAML, SILM) · 29 Sep 2022
29. Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons
    Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, Wenzhong Guo (AAML) · 13 Aug 2022
30. Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information
    Yi Zeng, Minzhou Pan, H. Just, Lingjuan Lyu, M. Qiu, R. Jia (AAML) · 11 Apr 2022
31. On the Effectiveness of Adversarial Training against Backdoor Attacks
    Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama (AAML) · 22 Feb 2022
32. Backdoor Defense via Decoupling the Training Process
    Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, Kui Ren (AAML, FedML) · 05 Feb 2022
33. Few-Shot Backdoor Attacks on Visual Object Tracking
    Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shutao Xia (AAML) · 31 Jan 2022
34. Clean-Label Backdoor Attacks on Video Recognition Models
    Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang (AAML) · 06 Mar 2020
35. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
    Edward Chou, Florian Tramèr, Giancarlo Pellegrino (AAML) · 02 Dec 2018