
Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation
30 August 2018
C. Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David J. Miller
SILM
arXiv: 1808.10307

Papers citing "Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation"
50 of 81 citing papers shown

Backdoor Attacks Against Patch-based Mixture of Experts
Cedric Chan, Jona te Lintelo, S. Picek
AAML, MoE
03 May 2025

Revisiting Backdoor Attacks on Time Series Classification in the Frequency Domain
Yuanmin Huang, Mi Zhang, Zhaoxiang Wang, Wenxuan Li, Min Yang
AAML, AI4TS
12 Mar 2025

MADE: Graph Backdoor Defense with Masked Unlearning
Xiao Lin, Mingjie Li, Yisen Wang
AAML
03 Jan 2025

Mind Your Questions! Towards Backdoor Attacks on Text-to-Visualization Models
Shuaimin Li, Yuanfeng Song, Xuanang Chen, Anni Peng, Zhuoyue Wan, Chen Jason Zhang, Raymond Chi-Wing Wong
SILM
09 Oct 2024

Persistent Backdoor Attacks in Continual Learning
Zhen Guo, Abhinav Kumar, R. Tourani
AAML
20 Sep 2024

2DSig-Detect: a semi-supervised framework for anomaly detection on image data using 2D-signatures
Xinheng Xie, Kureha Yamaguchi, Margaux Leblanc, Simon Malzard, Varun Chhabra, Victoria Nockles, Yue-bo Wu
AAML
08 Sep 2024

Two Heads are Better than One: Nested PoE for Robust Defense Against Multi-Backdoors
Victoria Graf, Qin Liu, Muhao Chen
AAML
02 Apr 2024

Efficient Trigger Word Insertion
Yueqi Zeng, Ziqiang Li, Pengfei Xia, Lei Liu, Bin Li
AAML
23 Nov 2023

A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks
Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li
AAML
14 Jun 2023

Evil from Within: Machine Learning Backdoors through Hardware Trojans
Alexander Warnecke, Julian Speith, Janka Möller, Konrad Rieck, C. Paar
AAML
17 Apr 2023

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
SILM
20 Feb 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu
AAML
19 Feb 2023

SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification
Gorka Abad, Jing Xu, Stefanos Koffas, Behrad Tajalli, S. Picek, Mauro Conti
AAML
03 Feb 2023

Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
Xiaoyun Xu, Oguzhan Ersoy, S. Picek
AAML
01 Feb 2023

Stealthy Backdoor Attack for Code Models
Zhou Yang, Bowen Xu, Jie M. Zhang, Hong Jin Kang, Jieke Shi, Junda He, David Lo
AAML
06 Jan 2023

Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition
Hasan Hammoud, Shuming Liu, Mohammad Alkhrashi, Fahad Albalawi, Guohao Li
AAML
03 Jan 2023

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML, FedML
28 Dec 2022

The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning
Virat Shejwalkar, Lingjuan Lyu, Amir Houmansadr
AAML
01 Nov 2022

Training set cleansing of backdoor poisoning by self-supervised representation learning
H. Wang, Soroush Karami, Ousmane Amadou Dia, H. Ritter, E. Emamjomeh-Zadeh, J. Chen, Zhen Xiang, D. J. Miller, G. Kesidis
SSL
19 Oct 2022

Solving the Capsulation Attack against Backdoor-based Deep Neural Network Watermarks by Reversing Triggers
Fangqi Li, Shilin Wang, Yun Zhu
AAML
30 Aug 2022

Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models
Yulong Wang, Minghui Zhao, Shenghong Li, Xinnan Yuan, W. Ni
19 Aug 2022

Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons
Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, Wenzhong Guo
AAML
13 Aug 2022

FRIB: Low-poisoning Rate Invisible Backdoor Attack based on Feature Repair
Hui Xia, Xiugui Yang, X. Qian, Rui Zhang
AAML
26 Jul 2022

Backdoor Attacks on Crowd Counting
Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao
AAML
12 Jul 2022

Transferable Graph Backdoor Attack
Shuiqiao Yang, Bao Gia Doan, Paul Montague, O. Vel, Tamas Abraham, S. Çamtepe, Damith C. Ranasinghe, S. Kanhere
AAML
21 Jun 2022

Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers
Nan Luo, Yuan-zhang Li, Yajie Wang, Shan-Hung Wu, Yu-an Tan, Quan-xin Zhang
AAML
10 Jun 2022

A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection
Wei Guo, B. Tondi, Mauro Barni
AAML
02 Jun 2022

Hide and Seek: on the Stealthiness of Attacks against Deep Learning Systems
Zeyan Liu, Fengjun Li, Jingqiang Lin, Zhu Li, Bo Luo
AAML
31 May 2022

WeDef: Weakly Supervised Backdoor Defense for Text Classification
Lesheng Jin, Zihan Wang, Jingbo Shang
AAML
24 May 2022

Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
SILM
14 Mar 2022

Adversarial Attacks and Defense Methods for Power Quality Recognition
Jiwei Tian, Buhong Wang, Jing Li, Zhen Wang, Mete Ozay
AAML
11 Feb 2022

Imperceptible and Multi-channel Backdoor Attack against Deep Neural Networks
Mingfu Xue, S. Ni, Ying-Chang Wu, Yushu Zhang, Jian Wang, Weiqiang Liu
AAML
31 Jan 2022

Backdoor Defense with Machine Unlearning
Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma
AAML
24 Jan 2022

Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios
Zhen Xiang, David J. Miller, G. Kesidis
AAML
20 Jan 2022

Safe Distillation Box
Jingwen Ye, Yining Mao, Mingli Song, Xinchao Wang, Cheng Jin, Xiuming Zhang
AAML
05 Dec 2021

A General Framework for Defending Against Backdoor Attacks via Influence Graph
Xiaofei Sun, Jiwei Li, Xiaoya Li, Ziyao Wang, Tianwei Zhang, Han Qiu, Fei Wu, Chun Fan
AAML, TDI
29 Nov 2021

Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu
AAML
25 Nov 2021

Backdoor Pre-trained Models Can Transfer to All
Lujia Shen, S. Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang
AAML, SILM
30 Oct 2021

Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
AAML
27 Oct 2021

Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma
OnRL
22 Oct 2021

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML
13 Oct 2021

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
J. Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano
SILM, AAML
23 Sep 2021

Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain
Hasan Hammoud, Guohao Li
AAML
12 Sep 2021

TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
Esha Sarkar, Michail Maniatakos
14 Aug 2021

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
SILM, SSL
01 Aug 2021

Topological Detection of Trojaned Neural Networks
Songzhu Zheng, Yikai Zhang, H. Wagner, Mayank Goswami, Chao Chen
AAML
11 Jun 2021

Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution
Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, Maosong Sun
SILM
11 Jun 2021

A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification
Wei Guo, B. Tondi, Mauro Barni
AAML
01 May 2021

SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
J. Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
AAML
22 Apr 2021

Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
AAML, MQ
16 Apr 2021