Dynamic Backdoor Attacks Against Machine Learning Models (arXiv:2003.03675)
7 March 2020
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
AAML

Papers citing "Dynamic Backdoor Attacks Against Machine Learning Models"

49 / 49 papers shown

A Backdoor Attack Scheme with Invisible Triggers Based on Model Architecture Modification
Yuan Ma, Xu Ma, Jiankang Wei, Jinmeng Tang, Xiaoyu Zhang, Yilun Lyu, Kehao Chen, Jingtong Huang
22 Dec 2024

Persistent Backdoor Attacks in Continual Learning
Zhen Guo, Abhinav Kumar, R. Tourani
AAML · 20 Sep 2024

Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks
Quang H. Nguyen, Nguyen Ngoc-Hieu, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, Khoa D. Doan
AAML · 15 Jul 2024

DLP: towards active defense against backdoor attacks with decoupled learning process
Zonghao Ying, Bin Wu
AAML · 18 Jun 2024

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
AAML · 22 Feb 2024

Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound
Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, Yiming Li
AAML · 17 Jul 2023

UNICORN: A Unified Backdoor Trigger Inversion Framework
Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma
LLMSV · 05 Apr 2023

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
SILM · 20 Feb 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu
AAML · 19 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML · 14 Feb 2023

Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
Gorka Abad, Oguzhan Ersoy, S. Picek, A. Urbieta
AAML · 13 Feb 2023

SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification
Gorka Abad, Jing Xu, Stefanos Koffas, Behrad Tajalli, S. Picek, Mauro Conti
AAML · 03 Feb 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
DD · 03 Jan 2023

Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition
Hasan Hammoud, Shuming Liu, Mohammad Alkhrashi, Fahad Albalawi, Guohao Li
AAML · 03 Jan 2023

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang
AAML · 18 Dec 2022

Backdoor Cleansing with Unlabeled Data
Lu Pang, Tao Sun, Haibin Ling, Chao Chen
AAML · 22 Nov 2022

Adversarial Cheap Talk
Chris Xiaoxuan Lu, Timon Willi, Alistair Letcher, Jakob N. Foerster
AAML · 20 Nov 2022

Going In Style: Audio Backdoors Through Stylistic Transformations
Stefanos Koffas, Luca Pajola, S. Picek, Mauro Conti
06 Nov 2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Khoa D. Doan, Yingjie Lao, Ping Li
17 Oct 2022

Understanding Impacts of Task Similarity on Backdoor Attack and Detection
Di Tang, Rui Zhu, Xiaofeng Wang, Haixu Tang, Yi Chen
AAML · 12 Oct 2022

Transferable Graph Backdoor Attack
Shuiqiao Yang, Bao Gia Doan, Paul Montague, O. Vel, Tamas Abraham, S. Çamtepe, D. Ranasinghe, S. Kanhere
AAML · 21 Jun 2022

DECK: Model Hardening for Defending Pervasive Backdoors
Guanhong Tao, Yingqi Liu, Shuyang Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang
AAML · 18 Jun 2022

Architectural Backdoors in Neural Networks
Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot
AAML · 15 Jun 2022

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger
AAML · 20 Apr 2022

Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
SILM · 14 Mar 2022

Dynamic Backdoors with Global Average Pooling
Stefanos Koffas, S. Picek, Mauro Conti
AAML · 04 Mar 2022

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense
Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xinming Zhang
AAML · 11 Feb 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
TDI · 25 Jan 2022

Backdoor Attack through Frequency Domain
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang
AAML · 22 Nov 2021

Property Inference Attacks Against GANs
Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang
AAML, MIACV · 15 Nov 2021

Backdoor Pre-trained Models Can Transfer to All
Lujia Shen, S. Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang
AAML, SILM · 30 Oct 2021

Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud, Guohao Li
AAML · 12 Sep 2021

TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
Esha Sarkar, Michail Maniatakos
14 Aug 2021

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
SILM, SSL · 01 Aug 2021

Hidden Backdoors in Human-Centric Language Models
Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu
SILM · 01 May 2021

Manipulating SGD with Data Ordering Attacks
Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson
AAML · 19 Apr 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang
AAML · 16 Mar 2021

Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Shuyang Cheng, Shiqing Ma, Xinming Zhang
AAML · 09 Feb 2021

DeepPoison: Feature Transfer Based Stealthy Poisoning Attack
Jinyin Chen, Longyuan Zhang, Haibin Zheng, Xueke Wang, Zhaoyan Ming
AAML · 06 Jan 2021

Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
Shuyang Cheng, Yingqi Liu, Shiqing Ma, Xinming Zhang
AAML · 21 Dec 2020

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang
AAML · 16 Dec 2020

Input-Aware Dynamic Backdoor Attack
A. Nguyen, Anh Tran
AAML · 16 Oct 2020

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao
AAML · 25 Jun 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML, TDI · 22 Jun 2020

Backdoor Attacks to Graph Neural Networks
Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
GNN · 19 Jun 2020

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov
AAML, FedML, SILM · 08 May 2020

Stealing Links from Graph Neural Networks
Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang
AAML · 05 May 2020

Towards Probabilistic Verification of Machine Unlearning
David M. Sommer, Liwei Song, Sameer Wagh, Prateek Mittal
AAML · 09 Mar 2020

Clean-Label Backdoor Attacks on Video Recognition Models
Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang
AAML · 06 Mar 2020