Low Frequency Adversarial Perturbation (arXiv:1809.08758)

24 September 2018
Chuan Guo
Jared S. Frank
Kilian Q. Weinberger
    AAML

Papers citing "Low Frequency Adversarial Perturbation"

42 / 92 papers shown
Calibrated Adversarial Training
Tianjin Huang
Vlado Menkovski
Yulong Pei
Mykola Pechenizkiy
AAML
117
3
0
01 Oct 2021
AdvDrop: Adversarial Attack to DNNs by Dropping Information
Ranjie Duan
YueFeng Chen
Dantong Niu
Yun Yang
A. K. Qin
Yuan He
AAML
82
92
0
20 Aug 2021
Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain
Guangyao Chen
Peixi Peng
Li Ma
Jia Li
Lin Du
Yonghong Tian
AAML OOD
65
97
0
19 Aug 2021
Simple black-box universal adversarial attacks on medical image classification based on deep neural networks
K. Koga
Kazuhiro Takemoto
AAML
60
12
0
11 Aug 2021
Adversarial Attacks with Time-Scale Representations
Alberto Santamaria-Pang
Jia-dong Qiu
Aritra Chowdhury
James R. Kubricht
Peter Tu
Naresh Iyer
Nurali Virani
AAML MLAU
50
0
0
26 Jul 2021
High-Robustness, Low-Transferability Fingerprinting of Neural Networks
Siyue Wang
Xiao Wang
Pin-Yu Chen
Pu Zhao
Xue Lin
AAML
69
2
0
14 May 2021
Deep Image Destruction: Vulnerability of Deep Image-to-Image Models against Adversarial Attacks
Jun-Ho Choi
Huan Zhang
Jun-Hyuk Kim
Cho-Jui Hsieh
Jong-Seok Lee
VLM
62
8
0
30 Apr 2021
Bridging the Gap Between Adversarial Robustness and Optimization Bias
Fartash Faghri
Sven Gowal
C. N. Vasconcelos
David J. Fleet
Fabian Pedregosa
Nicolas Le Roux
AAML
234
7
0
17 Feb 2021
Generating Structured Adversarial Attacks Using Frank-Wolfe Method
Ehsan Kazemi
Thomas Kerdreux
Liqiang Wang
AAML DiffM
48
1
0
15 Feb 2021
Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective
Chaoning Zhang
Philipp Benz
Adil Karjauv
In So Kweon
AAML
94
42
0
12 Feb 2021
Meta Adversarial Training against Universal Patches
J. H. Metzen
Nicole Finnie
Robin Hutmacher
OOD AAML
112
21
0
27 Jan 2021
Generating Black-Box Adversarial Examples in Sparse Domain
Hadi Zanddizari
Behnam Zeinali
Jerome Chang
AAML
55
7
0
22 Jan 2021
On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks
Junyoung Byun
Hyojun Go
Changick Kim
AAML
193
21
0
13 Jan 2021
Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks
Jayendra Kantipudi
S. Dubey
Soumendu Chakraborty
AAML
91
22
0
20 Dec 2020
An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks
Giuseppe Ughi
V. Abrol
Jared Tanner
AAML
65
13
0
03 Dec 2020
Towards Imperceptible Universal Attacks on Texture Recognition
Yingpeng Deng
Lina Karam
AAML
41
1
0
24 Nov 2020
Adversarial Eigen Attack on Black-Box Models
Linjun Zhou
Peng Cui
Yinan Jiang
Shiqiang Yang
AAML
40
14
0
27 Aug 2020
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows
H. M. Dolatabadi
S. Erfani
C. Leckie
AAML
125
66
0
15 Jul 2020
Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes
Satya Narayan Shukla
Anit Kumar Sahu
Devin Willmott
J. Zico Kolter
AAML
59
34
0
13 Jul 2020
Trace-Norm Adversarial Examples
Ehsan Kazemi
Thomas Kerdreux
Liqiang Wang
59
2
0
02 Jul 2020
QEBA: Query-Efficient Boundary-Based Blackbox Attack
Huichen Li
Xiaojun Xu
Xiaolu Zhang
Shuang Yang
Yue Liu
AAML
135
183
0
28 May 2020
Projection & Probability-Driven Black-Box Attack
Jie Li
Rongrong Ji
Hong Liu
Jianzhuang Liu
Bineng Zhong
Cheng Deng
Q. Tian
AAML
72
49
0
08 May 2020
Towards Frequency-Based Explanation for Robust CNN
Zifan Wang
Yilin Yang
Ankit Shrivastava
Varun Rawal
Zihao Ding
AAML FAtt
57
49
0
06 May 2020
Adversarial Attacks on Monocular Depth Estimation
Ziqi Zhang
Xinge Zhu
Yingwei Li
Xiangqun Chen
Yao Guo
AAML MDE
83
26
0
23 Mar 2020
Frequency-Tuned Universal Adversarial Attacks
Yingpeng Deng
Lina Karam
AAML
51
7
0
11 Mar 2020
A Model-Based Derivative-Free Approach to Black-Box Adversarial Examples: BOBYQA
Giuseppe Ughi
V. Abrol
Jared Tanner
AAML
41
3
0
24 Feb 2020
Mitigating large adversarial perturbations on X-MAS (X minus Moving Averaged Samples)
Woohyung Chun
Sung-Min Hong
Junho Huh
Inyup Kang
AAML
26
0
0
19 Dec 2019
A Survey of Black-Box Adversarial Attacks on Computer Vision Models
Siddhant Bhambri
Sumanyu Muku
Avinash Tulasi
Arun Balaji Buduru
AAML VLM
69
79
0
03 Dec 2019
Square Attack: a query-efficient black-box adversarial attack via random search
Maksym Andriushchenko
Francesco Croce
Nicolas Flammarion
Matthias Hein
AAML
156
997
0
29 Nov 2019
Towards Security Threats of Deep Learning Systems: A Survey
Yingzhe He
Guozhu Meng
Kai Chen
Xingbo Hu
Jinwen He
AAML ELM
56
14
0
28 Nov 2019
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye
Rana Abou-Khamis
Mohamed el Shehaby
Ashraf Matrawy
M. O. Shafiq
AAML
95
70
0
06 Nov 2019
Natural Language Adversarial Defense through Synonym Encoding
Xiaosen Wang
Hao Jin
Yichen Yang
Kun He
AAML
91
64
0
15 Sep 2019
Improving Black-box Adversarial Attacks with a Transfer-based Prior
Shuyu Cheng
Yinpeng Dong
Tianyu Pang
Hang Su
Jun Zhu
AAML
94
274
0
17 Jun 2019
Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks
T. Brunner
Frederik Diehl
Alois Knoll
AAML
44
8
0
14 Jun 2019
High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks
Haohan Wang
Xindi Wu
Pengcheng Yin
Eric Xing
83
526
0
28 May 2019
Simple Black-box Adversarial Attacks
Chuan Guo
Jacob R. Gardner
Yurong You
A. Wilson
Kilian Q. Weinberger
AAML
78
581
0
17 May 2019
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses
Yingwei Li
S. Bai
Cihang Xie
Zhenyu A. Liao
Xiaohui Shen
Alan Yuille
AAML
150
51
0
01 Apr 2019
On the Effectiveness of Low Frequency Perturbations
Yash Sharma
G. Ding
Marcus A. Brubaker
AAML
92
126
0
28 Feb 2019
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks
T. Brunner
Frederik Diehl
Michael Truong-Le
Alois Knoll
MLAU AAML
77
117
0
24 Dec 2018
Learning Transferable Adversarial Examples via Ghost Networks
Yingwei Li
S. Bai
Yuyin Zhou
Cihang Xie
Zhishuai Zhang
Alan Yuille
AAML
132
137
0
09 Dec 2018
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
G. Ding
Yash Sharma
Kry Yik-Chau Lui
Ruitong Huang
AAML
112
274
0
06 Dec 2018
Adversarial Vision Challenge
Wieland Brendel
Jonas Rauber
Alexey Kurakin
Nicolas Papernot
Behar Veliqi
M. Salathé
Sharada Mohanty
Matthias Bethge
AAML
79
58
0
06 Aug 2018