ResearchTrend.AI
Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
arXiv:1908.03369
9 August 2019
Bao Gia Doan, Ehsan Abbasnejad, D. Ranasinghe
AAML
Papers citing "Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems"

14 / 14 papers shown
DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation
Zhicong Yan, Shenghong Li, Ruijie Zhao, Yuan Tian, Yuanyuan Zhao
AAML
13 Jun 2023

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, L Tan
AAML
04 Apr 2023

Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder
Tao Sun, Lu Pang, Chao Chen, Haibin Ling
AAML
27 Mar 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML
14 Feb 2023

Salient Conditional Diffusion for Defending Against Backdoor Attacks
Brandon B. May, N. Joseph Tatro, Dylan Walker, Piyush Kumar, N. Shnidman
DiffM
31 Jan 2023

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML, FedML
28 Dec 2022

Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models
Yulong Wang, Minghui Zhao, Shenghong Li, Xinnan Yuan, W. Ni
19 Aug 2022

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger
AAML
20 Apr 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
TDI
25 Jan 2022

Input-Aware Dynamic Backdoor Attack
A. Nguyen, Anh Tran
AAML
16 Oct 2020

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
XAI
22 Sep 2020

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
AAML
05 Jul 2020

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov
AAML, FedML, SILM
08 May 2020

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
AAML
07 Mar 2020