Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
arXiv:1908.03369 (v7, latest). 9 August 2019.
Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe [AAML]
Papers citing "Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems" (41 papers)
  • DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation
    ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2023
    Zhicong Yan, Shenghong Li, Ruijie Zhao, Yuan Tian, Yuanyuan Zhao (13 Jun 2023) [AAML]
  • UMD: Unsupervised Model Detection for X2X Backdoor Attacks
    International Conference on Machine Learning (ICML), 2023
    Zhen Xiang, Zidi Xiong, Yue Liu (29 May 2023) [AAML]
  • Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks
    Nils Lukas, Florian Kerschbaum (07 May 2023)
  • Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
    Computer Vision and Pattern Recognition (CVPR), 2023
    Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, L. Tan (04 Apr 2023) [AAML]
  • Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder
    Tao Sun, Lu Pang, Chao Chen, Haibin Ling (27 Mar 2023) [AAML]
  • Influencer Backdoor Attack on Semantic Segmentation
    International Conference on Learning Representations (ICLR), 2023
    Haoheng Lan, Jindong Gu, Juil Sock, Hengshuang Zhao (21 Mar 2023) [AAML]
  • Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
    Marwan Omar (14 Feb 2023) [SILM, AAML]
  • Salient Conditional Diffusion for Defending Against Backdoor Attacks
    Brandon B. May, N. Joseph Tatro, Dylan Walker, Piyush Kumar, N. Shnidman (31 Jan 2023) [DiffM]
  • Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
    Yusuke Kawamoto, Kazumasa Miyake, K. Konishi, Y. Oiwa (18 Jan 2023)
  • XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
    Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei (28 Dec 2022) [AAML, FedML]
  • Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
    Xingxing Wei, Yingjie Guo, Jie Yu, Bo Zhang (26 Dec 2022) [AAML]
  • Rethinking the Reverse-engineering of Trojan Triggers
    Neural Information Processing Systems (NeurIPS), 2022
    Zhenting Wang, Kai Mei, Hailun Ding, Juan Zhai, Shiqing Ma (27 Oct 2022)
  • Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis
    Ruinan Jin, Xiaoxiao Li (19 Oct 2022) [FedML, AAML, MedIm]
  • Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models
    IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2022
    Yulong Wang, Minghui Zhao, Shenghong Li, Xinnan Yuan, W. Ni (19 Aug 2022)
  • Data-free Backdoor Removal based on Channel Lipschitzness
    European Conference on Computer Vision (ECCV), 2022
    Runkai Zheng, Rong Tang, Jianze Li, Li Liu (05 Aug 2022) [AAML]
  • Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain
    Chang Yue, Peizhuo Lv, Ruigang Liang, Kai Chen (09 Jul 2022) [AAML]
  • Contributor-Aware Defenses Against Adversarial Backdoor Attacks
    Glenn Dawson, Muhammad Umer, R. Polikar (28 May 2022) [AAML]
  • BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
    Computer Vision and Pattern Recognition (CVPR), 2022
    Zhenting Wang, Juan Zhai, Shiqing Ma (26 May 2022) [AAML]
  • Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
    Computer Vision and Pattern Recognition (CVPR), 2022
    Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zinan Lin (24 May 2022) [AAML]
  • Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
    ACM Computing Surveys (ACM CSUR), 2022
    Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli (04 May 2022) [AAML]
  • Backdooring Explainable Machine Learning
    Maximilian Noppel, Lukas Peter, Christian Wressnegger (20 Apr 2022) [AAML]
  • Towards Effective and Robust Neural Trojan Defenses via Input Filtering
    European Conference on Computer Vision (ECCV), 2022
    Kien Do, Haripriya Harikumar, Hung Le, D. Nguyen, T. Tran, Santu Rana, Dang Nguyen, Willy Susilo, Svetha Venkatesh (24 Feb 2022) [AAML]
  • Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
    Conference on Computer and Communications Security (CCS), 2022
    Zayd Hammoudeh, Daniel Lowd (25 Jan 2022) [TDI]
  • Watermarking Graph Neural Networks based on Backdoor Attacks
    European Symposium on Security and Privacy (EuroS&P), 2021
    Jing Xu, Stefanos Koffas, Oguzhan Ersoy, S. Picek (21 Oct 2021) [AAML]
  • Trojan Signatures in DNN Weights
    Gregg Fields, Mohammad Samragh, Mojan Javaheripi, F. Koushanfar, T. Javidi (07 Sep 2021) [AAML]
  • Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks
    ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2021
    Zitao Chen, Pritam Dash, Karthik Pattabiraman (11 Aug 2021) [AAML]
  • Poison Ink: Robust and Invisible Backdoor Attack
    IEEE Transactions on Image Processing (TIP), 2021
    Jie Zhang, Dongdong Chen, Qidong Huang, Jing Liao, Weiming Zhang, Huamin Feng, G. Hua, Nenghai Yu (05 Aug 2021) [AAML]
  • FeSHI: Feature Map Based Stealthy Hardware Intrinsic Attack
    IEEE Access, 2021
    Tolulope A. Odetola, Faiq Khalid, Travis Sandefur, Hawzhin Mohammed, S. R. Hasan (13 Jun 2021)
  • SGBA: A Stealthy Scapegoat Backdoor Attack against Deep Neural Networks
    Computers & Security (CS), 2021
    Yingzhe He, Zhili Shen, Chang Xia, Jingyu Hua, Wei Tong, Sheng Zhong (02 Apr 2021) [AAML]
  • MISA: Online Defense of Trojaned Models using Misattributions
    Asia-Pacific Computer Systems Architecture Conference (ACSA), 2021
    Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, Susmit Jha (29 Mar 2021)
  • HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios
    Hassan Ali, Surya Nepal, S. Kanhere, S. Jha (14 Dec 2020) [AAML]
  • Input-Aware Dynamic Backdoor Attack
    A. Nguyen, Anh Tran (16 Oct 2020) [AAML]
  • What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
    Knowledge Discovery and Data Mining (KDD), 2020
    Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik (22 Sep 2020) [XAI]
  • CLEANN: Accelerated Trojan Shield for Embedded Neural Networks
    Mojan Javaheripi, Mohammad Samragh, Gregory Fields, T. Javidi, F. Koushanfar (04 Sep 2020) [AAML, FedML]
  • Trojaning Language Models for Fun and Profit
    European Symposium on Security and Privacy (EuroS&P), 2020
    Xinyang Zhang, Zheng Zhang, Shouling Ji, Ting Wang (01 Aug 2020) [SILM, AAML]
  • Backdoor Learning: A Survey
    IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020
    Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia (17 Jul 2020) [AAML]
  • Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
    Yunfei Liu, Jiabo He, James Bailey, Feng Lu (05 Jul 2020) [AAML]
  • You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
    R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov (05 Jul 2020) [SILM, AAML]
  • Blind Backdoors in Deep Learning Models
    Eugene Bagdasaryan, Vitaly Shmatikov (08 May 2020) [AAML, FedML, SILM]
  • Dynamic Backdoor Attacks Against Machine Learning Models
    European Symposium on Security and Privacy (EuroS&P), 2020
    A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang (07 Mar 2020) [AAML]
  • Design and Evaluation of a Multi-Domain Trojan Detection Method on Deep Neural Networks
    IEEE Transactions on Dependable and Secure Computing (TDSC), 2019
    Yansong Gao, Yeonjae Kim, Bao Gia Doan, Zhi-Li Zhang, Gongxuan Zhang, Surya Nepal, Damith C. Ranasinghe, Hyoungshick Kim (23 Nov 2019) [AAML]