Countering Adversarial Images using Input Transformations

31 October 2017
Chuan Guo, Mayank Rana, Moustapha Cissé, Laurens van der Maaten
AAML

Papers citing "Countering Adversarial Images using Input Transformations"

50 / 316 papers shown

  • Sparta: Spatially Attentive and Adversarially Robust Activation
    Qing Guo, Felix Juefei Xu, Changqing Zhou, Wei Feng, Yang Liu, Song Wang. AAML. 18 May 2021.
  • Staircase Sign Method for Boosting Adversarial Attacks
    Qilong Zhang, Xiaosu Zhu, Jingkuan Song, Lianli Gao, Heng Tao Shen. AAML. 20 Apr 2021.
  • Removing Adversarial Noise in Class Activation Feature Space
    Dawei Zhou, N. Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu. AAML. 19 Apr 2021.
  • Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems
    Yue Gao, Ilia Shumailov, Kassem Fawaz. AAML. 18 Apr 2021.
  • Fashion-Guided Adversarial Attack on Person Segmentation
    Marc Treu, Trung-Nghia Le, H. Nguyen, Junichi Yamagishi, Isao Echizen. AAML. 17 Apr 2021.
  • Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune
    Shanshi Huang, Hongwu Jiang, Shimeng Yu. AAML. 13 Apr 2021.
  • Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification
    S. Silva, Arun Das, I. Scarff, Peyman Najafirad. AAML. 05 Apr 2021.
  • Can audio-visual integration strengthen robustness under multimodal attacks?
    Yapeng Tian, Chenliang Xu. AAML. 05 Apr 2021.
  • Enhancing the Transferability of Adversarial Attacks through Variance Tuning
    Xiaosen Wang, Kun He. AAML. 29 Mar 2021.
  • Adversarial Attacks are Reversible with Natural Supervision
    Chengzhi Mao, Mia Chiquer, Hao Wang, Junfeng Yang, Carl Vondrick. BDL, AAML. 26 Mar 2021.
  • Learning Defense Transformers for Counterattacking Adversarial Examples
    Jincheng Li, Jingyun Liang, Yifan Zhang, Jian Chen, Mingkui Tan. AAML. 13 Mar 2021.
  • Consistency Regularization for Adversarial Robustness
    Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin. AAML. 08 Mar 2021.
  • Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack
    Mengting Xu, Tao Zhang, Zhongnian Li, Mingxia Liu, Daoqiang Zhang. AAML, OOD, MedIm. 05 Mar 2021.
  • WaveGuard: Understanding and Mitigating Audio Adversarial Examples
    Shehzeen Samarah Hussain, Paarth Neekhara, Shlomo Dubnov, Julian McAuley, F. Koushanfar. AAML. 04 Mar 2021.
  • QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval
    Xiaodan Li, Jinfeng Li, YueFeng Chen, Shaokai Ye, Yuan He, Shuhui Wang, Hang Su, Hui Xue. 04 Mar 2021.
  • Automated Discovery of Adaptive Attacks on Adversarial Defenses
    Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev. AAML. 23 Feb 2021.
  • Low Curvature Activations Reduce Overfitting in Adversarial Training
    Vasu Singla, Sahil Singla, David Jacobs, S. Feizi. AAML. 15 Feb 2021.
  • CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification
    Mingu Kang, T. Tran, Seungju Cho, Daeyoung Kim. AAML. 15 Feb 2021.
  • Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
    Felix O. Olowononi, D. Rawat, Chunmei Liu. 14 Feb 2021.
  • "What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models
    Sahar Abdelnabi, Mario Fritz. AAML. 09 Feb 2021.
  • Recent Advances in Adversarial Training for Adversarial Robustness
    Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang. AAML. 02 Feb 2021.
  • Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting
    F. Nesti, Alessandro Biondi, Giorgio Buttazzo. AAML. 27 Jan 2021.
  • Generalizing Adversarial Examples by AdaBelief Optimizer
    Yixiang Wang, Jiqiang Liu, Xiaolin Chang. AAML. 25 Jan 2021.
  • Error Diffusion Halftoning Against Adversarial Examples
    Shao-Yuan Lo, Vishal M. Patel. DiffM. 23 Jan 2021.
  • The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
    Andreas Bär, Jonas Löhdefink, Nikhil Kapoor, Serin Varghese, Fabian Hüger, Peter Schlicht, Tim Fingscheidt. AAML. 11 Jan 2021.
  • Robust Text CAPTCHAs Using Adversarial Examples
    Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh. AAML. 07 Jan 2021.
  • Local Competition and Stochasticity for Adversarial Robustness in Deep Learning
    Konstantinos P. Panousis, S. Chatzis, Antonios Alexos, Sergios Theodoridis. BDL, AAML, OOD. 04 Jan 2021.
  • Achieving Adversarial Robustness Requires An Active Teacher
    Chao Ma, Lexing Ying. 14 Dec 2020.
  • Boosting Adversarial Attacks on Neural Networks with Better Optimizer
    Heng Yin, Hengwei Zhang, Jin-dong Wang, Ruiyu Dou. AAML. 01 Dec 2020.
  • Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
    Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, R. Venkatesh Babu. AAML. 30 Nov 2020.
  • Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
    Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong. 15 Nov 2020.
  • Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks
    Tao Bai, Jinqi Luo, Jun Zhao. AAML. 03 Nov 2020.
  • The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms
    Rui Zhao. AAML. 02 Nov 2020.
  • Towards Robust Neural Networks via Orthogonal Diversity
    Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang. AAML. 23 Oct 2020.
  • Learning Black-Box Attackers with Transferable Priors and Query Feedback
    Jiancheng Yang, Yangzhou Jiang, Xiaoyang Huang, Bingbing Ni, Chenglong Zhao. AAML. 21 Oct 2020.
  • RobustBench: a standardized adversarial robustness benchmark
    Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein. VLM. 19 Oct 2020.
  • Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples
    Sven Gowal, Chongli Qin, J. Uesato, Timothy A. Mann, Pushmeet Kohli. AAML. 07 Oct 2020.
  • An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders
    Vito Walter Anelli, Tommaso Di Noia, Daniele Malitesta, Felice Antonio Merra. AAML. 02 Oct 2020.
  • Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
    Maungmaung Aprilpyone, Hitoshi Kiya. 02 Oct 2020.
  • Certifying Confidence via Randomized Smoothing
    Aounon Kumar, Alexander Levine, S. Feizi, Tom Goldstein. UQCV. 17 Sep 2020.
  • Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent
    Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen. 10 Sep 2020.
  • SoK: Certified Robustness for Deep Neural Networks
    Linyi Li, Tao Xie, Bo-wen Li. AAML. 09 Sep 2020.
  • Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
    G. R. Machado, Eugênio Silva, R. Goldschmidt. AAML. 08 Sep 2020.
  • On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks
    Deboleena Roy, I. Chakraborty, Timur Ibrayev, Kaushik Roy. AAML. 27 Aug 2020.
  • Adversarial Examples on Object Recognition: A Comprehensive Survey
    A. Serban, E. Poll, Joost Visser. AAML. 07 Aug 2020.
  • RANDOM MASK: Towards Robust Convolutional Neural Networks
    Tiange Luo, Tianle Cai, Mengxiao Zhang, Siyu Chen, Liwei Wang. AAML, OOD. 27 Jul 2020.
  • Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
    Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim. AAML. 21 Jul 2020.
  • AdvFoolGen: Creating Persistent Troubles for Deep Classifiers
    Yuzhen Ding, Nupur Thakur, Baoxin Li. AAML. 20 Jul 2020.
  • A Survey on Security Attacks and Defense Techniques for Connected and Autonomous Vehicles
    M. Pham, Kaiqi Xiong. 16 Jul 2020.
  • Patch-wise Attack for Fooling Deep Neural Network
    Lianli Gao, Qilong Zhang, Jingkuan Song, Xianglong Liu, Heng Tao Shen. AAML. 14 Jul 2020.