Adversarial Training is a Form of Data-dependent Operator Norm Regularization
Kevin Roth, Yannic Kilcher, Thomas Hofmann
arXiv:1906.01527, 4 June 2019
Papers citing "Adversarial Training is a Form of Data-dependent Operator Norm Regularization" (45 papers)
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples - Kevin Roth, Yannic Kilcher, Thomas Hofmann - 13 Feb 2019 [AAML]
Certified Adversarial Robustness via Randomized Smoothing - Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter - 08 Feb 2019 [AAML]
Generalizable Adversarial Training via Spectral Normalization - Farzan Farnia, Jesse M. Zhang, David Tse - 19 Nov 2018 [OOD, AAML]
A Kernel Perspective for Regularizing Deep Neural Networks - A. Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal - 30 Sep 2018
Adversarial examples from computational constraints - Sébastien Bubeck, Eric Price, Ilya P. Razenshteyn - 25 May 2018 [AAML]
Adversarially Robust Generalization Requires More Data - Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry - 30 Apr 2018 [OOD, AAML]
Sensitivity and Generalization in Neural Networks: an Empirical Study - Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein - 23 Feb 2018 [AAML]
Adversarial vulnerability for any classifier - Alhussein Fawzi, Hamza Fawzi, Omar Fawzi - 23 Feb 2018 [AAML]
Spectral Normalization for Generative Adversarial Networks - Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida - 16 Feb 2018 [ODL]
Stronger generalization bounds for deep nets via a compression approach - Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang - 14 Feb 2018 [MLT, AI4CE]
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks - Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama - 12 Feb 2018 [AAML]
First-order Adversarial Vulnerability of Neural Networks and Input Dimension - Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz - 05 Feb 2018 [AAML]
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples - Anish Athalye, Nicholas Carlini, D. Wagner - 01 Feb 2018 [AAML]
Certified Defenses against Adversarial Examples - Aditi Raghunathan, Jacob Steinhardt, Percy Liang - 29 Jan 2018 [AAML]
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients - A. Ross, Finale Doshi-Velez - 26 Nov 2017 [AAML]
Provable defenses against adversarial examples via the convex outer adversarial polytope - Eric Wong, J. Zico Kolter - 02 Nov 2017 [AAML]
Evasion Attacks against Machine Learning at Test Time - Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli - 21 Aug 2017 [AAML]
Spectrally-normalized margin bounds for neural networks - Peter L. Bartlett, Dylan J. Foster, Matus Telgarsky - 26 Jun 2017 [ODL]
Towards Deep Learning Models Resistant to Adversarial Attacks - Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu - 19 Jun 2017 [SILM, OOD]
Spectral Norm Regularization for Improving the Generalizability of Deep Learning - Yuichi Yoshida, Takeru Miyato - 31 May 2017
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation - Matthias Hein, Maksym Andriushchenko - 23 May 2017 [AAML]
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods - Nicholas Carlini, D. Wagner - 20 May 2017 [AAML]
Parseval Networks: Improving Robustness to Adversarial Examples - Moustapha Cissé, Piotr Bojanowski, Edouard Grave, Yann N. Dauphin, Nicolas Usunier - 28 Apr 2017 [AAML]
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning - Takeru Miyato, S. Maeda, Masanori Koyama, S. Ishii - 13 Apr 2017 [GAN]
Detecting Adversarial Samples from Artifacts - Reuben Feinman, Ryan R. Curtin, S. Shintre, Andrew B. Gardner - 01 Mar 2017 [AAML]
On the (Statistical) Detection of Adversarial Examples - Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick McDaniel - 21 Feb 2017 [AAML]
On Detecting Adversarial Perturbations - J. H. Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff - 14 Feb 2017 [AAML]
Universal adversarial perturbations - Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard - 26 Oct 2016 [AAML]
Variance-based regularization with convex objectives - John C. Duchi, Hongseok Namkoong - 08 Oct 2016
Robustness of classifiers: from adversarial to random noise - Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard - 31 Aug 2016 [AAML]
A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples - T. Tanay, Lewis D. Griffin - 27 Aug 2016 [AAML]
Adversarial examples in the physical world - Alexey Kurakin, Ian Goodfellow, Samy Bengio - 08 Jul 2016 [SILM, AAML]
On the Expressive Power of Deep Neural Networks - M. Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Narain Sohl-Dickstein - 16 Jun 2016
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples - Nicolas Papernot, Patrick McDaniel, Ian Goodfellow - 24 May 2016 [SILM, AAML]
A Unified Gradient Regularization Family for Adversarial Examples - Chunchuan Lyu, Kaizhu Huang, Hai-Ning Liang - 19 Nov 2015 [AAML]
Adversarial Manipulation of Deep Representations - S. Sabour, Yanshuai Cao, Fartash Faghri, David J. Fleet - 16 Nov 2015 [GAN, AAML]
DeepFool: a simple and accurate method to fool deep neural networks - Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard - 14 Nov 2015 [AAML]
Distributional Smoothing with Virtual Adversarial Training - Takeru Miyato, S. Maeda, Masanori Koyama, Ken Nakae, S. Ishii - 02 Jul 2015
Norm-Based Capacity Control in Neural Networks - Behnam Neyshabur, Ryota Tomioka, Nathan Srebro - 27 Feb 2015
Analysis of classifiers' robustness to adversarial perturbations - Alhussein Fawzi, Omar Fawzi, P. Frossard - 09 Feb 2015 [AAML]
Explaining and Harnessing Adversarial Examples - Ian Goodfellow, Jonathon Shlens, Christian Szegedy - 20 Dec 2014 [AAML, GAN]
Towards Deep Neural Network Architectures Robust to Adversarial Examples - S. Gu, Luca Rigazio - 11 Dec 2014 [AAML]
Very Deep Convolutional Networks for Large-Scale Image Recognition - Karen Simonyan, Andrew Zisserman - 04 Sep 2014 [FAtt, MDE]
Intriguing properties of neural networks - Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus - 21 Dec 2013 [AAML]
Robustness and Regularization of Support Vector Machines - Huan Xu, Constantine Caramanis, Shie Mannor - 25 Mar 2008