Universal adversarial perturbations (arXiv 1610.08401)
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard
26 October 2016. [AAML]
Papers citing "Universal adversarial perturbations" (50 / 1,267 papers shown)

Universal Adversarial Audio Perturbations. Sajjad Abdoli, L. G. Hafemann, Jérôme Rony, Ismail Ben Ayed, P. Cardinal, Alessandro Lameiras Koerich. 08 Aug 2019. [AAML]
Random Directional Attack for Fooling Deep Neural Networks. Wenjian Luo, Chenwang Wu, Nan Zhou, Li Ni. 06 Aug 2019. [AAML]
Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity. Feihu Huang, Shangqian Gao, J. Pei, Heng-Chiao Huang. 30 Jul 2019.
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. Di Jin, Zhijing Jin, Qiufeng Wang, Peter Szolovits. 27 Jul 2019. [SILM, AAML]
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training. Haichao Zhang, Jianyu Wang. 24 Jul 2019. [AAML]
Enhancing Adversarial Example Transferability with an Intermediate Level Attack. Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge J. Belongie, Ser-Nam Lim. 23 Jul 2019. [SILM, AAML]
Characterizing Attacks on Deep Reinforcement Learning. Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, ..., Jinfeng Yi, Zijiang Yang, Mingyan D. Liu, Bo Li, D. Song. 21 Jul 2019. [AAML]
Constrained Concealment Attacks against Reconstruction-based Anomaly Detectors in Industrial Control Systems. Alessandro Erba, Riccardo Taormina, S. Galelli, Marcello Pogliani, Michele Carminati, S. Zanero, Nils Ole Tippenhauer. 17 Jul 2019. [AAML]
Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning. Bao Wang, Stanley J. Osher. 16 Jul 2019. [AAML, AI4CE]
Unsupervised Adversarial Attacks on Deep Feature-based Retrieval with GAN. Guoping Zhao, Mingyu Zhang, Jiajun Liu, Ji-Rong Wen. 12 Jul 2019. [AAML, GAN]
Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn. Ziv Katzir, Yuval Elovici. 11 Jul 2019. [AAML]
Camera Exposure Control for Robust Robot Vision with Noise-Aware Image Quality Assessment. Ukcheol Shin, Jinsun Park, Gyumin Shim, François Rameau, In So Kweon. 11 Jul 2019.
Fooling a Real Car with Adversarial Traffic Signs. N. Morgulis, Alexander Kreines, Shachar Mendelowitz, Yuval Weisglass. 30 Jun 2019. [AAML]
Robustness Guarantees for Deep Neural Networks on Videos. Min Wu, Marta Z. Kwiatkowska. 28 Jun 2019. [AAML]
Evolving Robust Neural Architectures to Defend from Adversarial Attacks. Shashank Kotyan, Danilo Vasconcellos Vargas. 27 Jun 2019. [OOD, AAML]
Verifying Robustness of Gradient Boosted Models. Gil Einziger, M. Goldstein, Yaniv Saár, Itai Segall. 26 Jun 2019.
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs. Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann. 26 Jun 2019. [AAML]
Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection. Kang Liu, Haoyu Yang, Yuzhe Ma, Benjamin Tan, Bei Yu, Evangeline F. Y. Young, Ramesh Karri, S. Garg. 25 Jun 2019. [AAML]
Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations. Yuezun Li, Xin Yang, Baoyuan Wu, Siwei Lyu. 21 Jun 2019. [AAML, PICV, CVBM]
Evolution Attack On Neural Networks. Yigui Luo, Ruijia Yang, Wei Sha, Weiyi Ding, YouTeng Sun, Yisi Wang. 21 Jun 2019. [AAML]
Adversarial attacks on Copyright Detection Systems. Parsa Saadatpanah, Ali Shafahi, Tom Goldstein. 17 Jun 2019. [AAML]
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks. F. Assion, Peter Schlicht, Florens Greßner, W. Günther, Fabian Hüger, Nico M. Schmidt, Umair Rasheed. 17 Jun 2019. [AAML]
Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences. Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki. 15 Jun 2019.
Adversarial Robustness Assessment: Why both L_0 and L_∞ Attacks Are Necessary. Shashank Kotyan, Danilo Vasconcellos Vargas. 14 Jun 2019. [AAML]
Mimic and Fool: A Task Agnostic Adversarial Attack. Akshay Chaturvedi, Utpal Garain. 11 Jun 2019. [AAML]
Evolutionary Trigger Set Generation for DNN Black-Box Watermarking. Jiabao Guo, M. Potkonjak. 11 Jun 2019. [AAML, WIGM]
There is no Artificial General Intelligence. Jobst Landgrebe, B. Smith. 09 Jun 2019. [AI4CE]
On the Vulnerability of Capsule Networks to Adversarial Attacks. Félix D. P. Michels, Tobias Uelwer, Eric Upschulte, Stefan Harmeling. 09 Jun 2019. [AAML]
Adversarial Attack Generation Empowered by Min-Max Optimization. Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, Yangqiu Song. 09 Jun 2019. [AAML]
Sensitivity of Deep Convolutional Networks to Gabor Noise. Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu. 08 Jun 2019. [AAML]
Defending Against Universal Attacks Through Selective Feature Regeneration. Tejas S. Borkar, Felix Heide, Lina Karam. 08 Jun 2019. [AAML]
Robust Attacks against Multiple Classifiers. Juan C. Perdomo, Yaron Singer. 06 Jun 2019. [AAML]
Should Adversarial Attacks Use Pixel p-Norm? Ayon Sen, Xiaojin Zhu, Liam Marshall, Robert D. Nowak. 06 Jun 2019.
Adversarial Training is a Form of Data-dependent Operator Norm Regularization. Kevin Roth, Yannic Kilcher, Thomas Hofmann. 04 Jun 2019.
What do AI algorithms actually learn? - On false structures in deep learning. L. Thesing, Vegard Antun, A. Hansen. 04 Jun 2019.
Fast and Stable Interval Bounds Propagation for Training Verifiably Robust Models. P. Morawiecki, Przemysław Spurek, Marek Śmieja, Jacek Tabor. 03 Jun 2019. [AAML, OOD]
Real-Time Adversarial Attacks. Yuan Gong, Boyang Li, C. Poellabauer, Yiyu Shi. 31 May 2019. [AAML]
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward. A. Qayyum, Muhammad Usama, Junaid Qadir, Ala I. Al-Fuqaha. 29 May 2019. [AAML]
Provably scale-covariant continuous hierarchical networks based on scale-normalized differential expressions coupled in cascade. T. Lindeberg. 29 May 2019.
A backdoor attack against LSTM-based text classification systems. Jiazhu Dai, Chuanshuai Chen. 29 May 2019. [SILM]
Cross-Domain Transferability of Adversarial Perturbations. Muzammal Naseer, Salman H. Khan, M. H. Khan, Fahad Shahbaz Khan, Fatih Porikli. 28 May 2019. [AAML]
Label Universal Targeted Attack. Naveed Akhtar, M. Jalwana, Bennamoun, Ajmal Mian. 27 May 2019. [AAML]
Body Shape Privacy in Images: Understanding Privacy and Preventing Automatic Shape Extraction. Hosnieh Sattar, Katharina Krombholz, Gerard Pons-Moll, Mario Fritz. 27 May 2019. [3DH]
Combating Label Noise in Deep Learning Using Abstention. S. Thulasidasan, Tanmoy Bhattacharya, J. Bilmes, Gopinath Chennupati, J. Mohd-Yusof. 27 May 2019. [NoLa]
Rearchitecting Classification Frameworks For Increased Robustness. Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, S. Jha, Xi Wu. 26 May 2019. [AAML, OOD]
Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks. Yuanshun Yao, Huiying Li, Haitao Zheng, Ben Y. Zhao. 24 May 2019. [AAML]
Thwarting finite difference adversarial attacks with output randomization. Haidar Khan, Daniel Park, Azer Khan, B. Yener. 23 May 2019. [SILM, AAML]
A Direct Approach to Robust Deep Learning Using Adversarial Networks. Huaxia Wang, Chun-Nam Yu. 23 May 2019. [GAN, AAML, OOD]
Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain. Lei Bu, Yuchao Duan, Fu Song, Zhe Zhao. 19 May 2019. [AAML]
What Do Adversarially Robust Models Look At? Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima. 19 May 2019.