ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models (arXiv:2208.09336)
19 August 2022
Yulong Wang
Minghui Zhao
Shenghong Li
Xinnan Yuan
W. Ni

Papers citing "Dispersed Pixel Perturbation-based Imperceptible Backdoor Trigger for Image Classifier Models"

28 citing papers:
  1. Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger. Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin. 03 Dec 2023.
  2. An Overview of Voice Conversion and its Challenges: From Statistical Modeling to Deep Learning. Berrak Sisman, Junichi Yamagishi, Simon King, Haizhou Li. 09 Aug 2020.
  3. t-viSNE: Interactive Assessment and Interpretation of t-SNE Projections. Angelos Chatzimparmpas, Rafael M. Martins, Andreas Kerren. 17 Feb 2020.
  4. DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. Lu Lu, Pengzhan Jin, George Karniadakis. 08 Oct 2019.
  5. Hidden Trigger Backdoor Attacks. Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash. 30 Sep 2019.
  6. Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors. Gilad Cohen, Guillermo Sapiro, Raja Giryes. 15 Sep 2019.
  7. Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe. 09 Aug 2019.
  8. Why ResNet Works? Residuals Generalize. Fengxiang He, Tongliang Liu, Dacheng Tao. 02 Apr 2019.
  9. Deep Multi-modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges. Di Feng, Christian Haase-Schuetz, Lars Rosenbaum, Heinz Hertlein, Claudius Gläser, Fabian Duffhauss, W. Wiesbeck, Klaus C. J. Dietmayer. 21 Feb 2019.
  10. STRIP: A Defence Against Trojan Attacks on Deep Neural Networks. Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal. 18 Feb 2019.
  11. Universal Adversarial Training. A. Mendrik, Mahyar Najibi, Zheng Xu, John P. Dickerson, L. Davis, Tom Goldstein. 27 Nov 2018.
  12. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering. Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava. 09 Nov 2018.
  13. Spectral Signatures in Backdoor Attacks. Brandon Tran, Jerry Li, Aleksander Madry. 01 Nov 2018.
  14. Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation. C. Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David J. Miller. 30 Aug 2018.
  15. Object Detection with Deep Learning: A Review. Zhong-Qiu Zhao, Peng Zheng, Shou-tao Xu, Xindong Wu. 15 Jul 2018.
  16. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. Kang Liu, Brendan Dolan-Gavitt, S. Garg. 30 May 2018.
  17. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. Richard Y. Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, Oliver Wang. 11 Jan 2018.
  18. Generating Adversarial Examples with Adversarial Networks. Chaowei Xiao, Yue Liu, Jun-Yan Zhu, Warren He, M. Liu, Basel Alomair. 08 Jan 2018.
  19. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, Basel Alomair. 15 Dec 2017.
  20. One pixel attack for fooling deep neural networks. Jiawei Su, Danilo Vasconcellos Vargas, Kouichi Sakurai. 24 Oct 2017.
  21. Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation. Yulei Niu, Zhiwu Lu, Ji-Rong Wen, Tao Xiang, Shih-Fu Chang. 05 Sep 2017.
  22. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. Tianyu Gu, Brendan Dolan-Gavitt, S. Garg. 22 Aug 2017.
  23. Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. 19 Jun 2017.
  24. Ensemble Adversarial Training: Attacks and Defenses. Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. 19 May 2017.
  25. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. Weilin Xu, David Evans, Yanjun Qi. 04 Apr 2017.
  26. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. 07 Oct 2016.
  27. Towards Evaluating the Robustness of Neural Networks. Nicholas Carlini, D. Wagner. 16 Aug 2016.
  28. DeepFool: a simple and accurate method to fool deep neural networks. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard. 14 Nov 2015.