Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
7 April 2021
Yi Zeng, Won Park, Z. Morley Mao, R. Jia
AAML

Papers citing "Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective"

27 / 27 papers shown

UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models
Yuning Han, Bingyin Zhao, Rui Chu, Feng Luo, Biplab Sikdar, Yingjie Lao
DiffM, AAML · 16 Dec 2024

PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning
Sizai Hou, Songze Li, Tayyebeh Jahani-Nezhad, Giuseppe Caire
FedML · 12 Jul 2024

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger
Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin
AAML · 03 Dec 2023

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, B. Thuraisingham
AAML · 13 Dec 2020

Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong
AAML · 07 Dec 2020

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
Eitan Borgnia, Valeriia Cherepanova, Liam H. Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
AAML · 18 Nov 2020

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
SILM · 11 Aug 2020

Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
Alexander Levine, Soheil Feizi
AAML · 26 Jun 2020

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao
AAML · 25 Jun 2020

FaceHack: Triggering Backdoored Facial Recognition Systems Using Facial Characteristics
Esha Sarkar, Hadjer Benkraouda, Michail Maniatakos
AAML · 20 Jun 2020

Leveraging Frequency Analysis for Deep Fake Image Recognition
Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, D. Kolossa, Thorsten Holz
19 Mar 2020

Robust Anomaly Detection and Backdoor Attack Detection via Differential Privacy
Min Du, R. Jia, D. Song
AAML · 16 Nov 2019

SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations
Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, J. Dawson, Nasser M. Nasrabadi
AAML · 08 Oct 2019

Detecting AI Trojans Using Meta Neural Analysis
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Yue Liu
08 Oct 2019

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
30 Sep 2019

Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization
Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Dali Kaafar
06 Sep 2019

TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
Wenbo Guo, Lun Wang, Masashi Sugiyama, Min Du, D. Song
02 Aug 2019

STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal
AAML · 18 Feb 2019

SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
AAML · 02 Dec 2018

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava
AAML · 09 Nov 2018

Spectral Signatures in Backdoor Attacks
Brandon Tran, Jerry Li, Aleksander Madry
AAML · 01 Nov 2018

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
Andrea Paudice, Luis Muñoz-González, András György, Emil C. Lupu
AAML · 08 Feb 2018

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song
AAML, SILM · 15 Dec 2017

Neural Trojans
Yuntao Liu, Yang Xie, Ankur Srivastava
AAML · 03 Oct 2017

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
SILM · 22 Aug 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI · 14 Mar 2017

Universal Adversarial Perturbations
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard
AAML · 26 Oct 2016