Advances in adversarial attacks and defenses in computer vision: A survey
1 August 2021
Naveed Akhtar, Ajmal Mian, Navid Kardan, M. Shah
AAML

Papers citing "Advances in adversarial attacks and defenses in computer vision: A survey"

50 / 335 papers shown

Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency
Shasha Li, Shitong Zhu, Sudipta Paul, Amit K. Roy-Chowdhury, Chengyu Song, S. Krishnamurthy, A. Swami, Kevin S. Chan
AAML
19 Jul 2020

Do Adversarially Robust ImageNet Models Transfer Better?
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry
16 Jul 2020

AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows
H. M. Dolatabadi, S. Erfani, C. Leckie
AAML
15 Jul 2020

Adversarial robustness via robust low rank representations
Pranjal Awasthi, Himanshu Jain, A. S. Rawat, Aravindan Vijayaraghavan
AAML
13 Jul 2020

Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations
Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In-So Kweon
SSL, AAML
13 Jul 2020

Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
S. Goldwasser, Adam Tauman Kalai, Y. Kalai, Omar Montasser
AAML
10 Jul 2020

Improving Adversarial Robustness by Enforcing Local and Global Compactness
Anh-Vu Bui, Trung Le, He Zhao, Paul Montague, O. deVel, Tamas Abraham, Dinh Q. Phung
AAML
10 Jul 2020

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
Hongyi Wang, Kartik K. Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos
FedML
09 Jul 2020

Understanding and Improving Fast Adversarial Training
Maksym Andriushchenko, Nicolas Flammarion
AAML
06 Jul 2020

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
AAML
05 Jul 2020

Adversarial Example Games
A. Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton
AAML, GAN
01 Jul 2020

Biologically Inspired Mechanisms for Adversarial Robustness
M. V. Reddy, Andrzej Banburski, Nishka Pant, T. Poggio
AAML
29 Jun 2020

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
Xiang Zhang, Marinka Zitnik
AAML
15 Jun 2020

Targeted Adversarial Perturbations for Monocular Depth Prediction
A. Wong, Safa Cicek, Stefano Soatto
AAML, MDE
12 Jun 2020

On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples
Richard Y. Zhang
AAML
11 Jun 2020

Large-Scale Adversarial Training for Vision-and-Language Representation Learning
Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, Jingjing Liu
ObjD, VLM
11 Jun 2020

A Self-supervised Approach for Adversarial Robustness
Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli
AAML
08 Jun 2020

QEBA: Query-Efficient Boundary-Based Blackbox Attack
Huichen Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Yue Liu
AAML
28 May 2020

Projection & Probability-Driven Black-Box Attack
Jie Li, Rongrong Ji, Hong Liu, Jianzhuang Liu, Bineng Zhong, Cheng Deng, Q. Tian
AAML
08 May 2020

Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder
Guanlin Li, Shuya Ding, Jun Luo, Chang-rui Liu
AAML
06 May 2020

Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin
AAML
30 Apr 2020

Transferable Perturbations of Deep Feature Distributions
Nathan Inkawhich, Kevin J. Liang, Lawrence Carin, Yiran Chen
AAML
27 Apr 2020

Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks
Jianhe Yuan, Zhihai He
AAML
23 Apr 2020

EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks
Sanchari Sen, Balaraman Ravindran, A. Raghunathan
FedML, AAML
21 Apr 2020

Single-step Adversarial training with Dropout Scheduling
Vivek B.S., R. Venkatesh Babu
OOD, AAML
18 Apr 2020

Targeted Attack for Deep Hashing based Retrieval
Jiawang Bai, Bin Chen, Yiming Li, Dongxian Wu, Weiwei Guo, Shutao Xia, En-Hui Yang
AAML
15 Apr 2020

Towards Transferable Adversarial Attack against Deep Face Recognition
Yaoyao Zhong, Weihong Deng
AAML
13 Apr 2020

PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning
Chenglin Yang, Adam Kortylewski, Cihang Xie, Yinzhi Cao, Alan Yuille
AAML
12 Apr 2020

Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking
Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, Liang Lin
AAML
08 Apr 2020

Physically Realizable Adversarial Examples for LiDAR Object Detection
James Tu, Mengye Ren, S. Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, R. Urtasun
3DPC
01 Apr 2020

Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes
Sravanti Addepalli, Vivek B.S., Arya Baburaj, Gaurang Sriramanan, R. Venkatesh Babu
AAML
01 Apr 2020

Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning
Amin Rakhsha, Goran Radanović, R. Devidze, Xiaojin Zhu, Adish Singla
AAML, OffRL
28 Mar 2020

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang
AAML
28 Mar 2020

DaST: Data-free Substitute Training for Adversarial Attacks
Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu
28 Mar 2020

Adaptive Reward-Poisoning Attacks against Reinforcement Learning
Xuezhou Zhang, Yuzhe Ma, Adish Singla, Xiaojin Zhu
AAML
27 Mar 2020

Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study
Luan Nguyen, Sunpreet S. Arora, Yuhang Wu, Hao Yang
AAML
24 Mar 2020

Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations
Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy
AAML
23 Mar 2020

Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises
B. Yan, Dong Wang, Huchuan Lu, Xiaoyun Yang
AAML
21 Mar 2020

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin, Alexander Meinke, Matthias Hein
OOD
20 Mar 2020

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations
Huan Zhang, Hongge Chen, Chaowei Xiao, Yue Liu, Mingyan D. Liu, Duane S. Boning, Cho-Jui Hsieh
AAML
19 Mar 2020

Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates
Amin Ghiasi, Ali Shafahi, Tom Goldstein
19 Mar 2020

Certified Defenses for Adversarial Patches
Ping Yeh-Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein
AAML
14 Mar 2020

GeoDA: a geometric framework for black-box adversarial attacks
A. Rahmati, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard, H. Dai
MLAU, AAML
13 Mar 2020

Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang
AAML
08 Mar 2020

Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
Saehyung Lee, Hyungyu Lee, Sungroh Yoon
AAML
05 Mar 2020

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong
OOD, AAML
02 Mar 2020

Certified Defense to Image Transformations via Randomized Smoothing
Marc Fischer, Maximilian Baader, Martin Vechev
AAML
27 Feb 2020

On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks
Yue Zhao, Yuwei Wu, Caihua Chen, A. Lim
3DPC
27 Feb 2020

Adversarial Ranking Attack and Defense
Mo Zhou, Zhenxing Niu, Le Wang, Qilin Zhang, G. Hua
26 Feb 2020

(De)Randomized Smoothing for Certifiable Defense against Patch Attacks
Alexander Levine, Soheil Feizi
AAML
25 Feb 2020