Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
arXiv:2209.07263 · 15 September 2022
Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, Volkan Cevher
Papers citing "Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)" (48 of 48 papers shown):
Approach to Finding a Robust Deep Learning Model. Alexey Boldyrev, Fedor Ratnikov, Andrey Shevelev (22 May 2025).
On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes. Elvis Dohmatob, A. Bietti (22 Mar 2022).
Probabilistically Robust Learning: Balancing Average- and Worst-case Performance. Alexander Robey, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani (02 Feb 2022).
The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression. Hamed Hassani, Adel Javanmard (13 Jan 2022).
Subquadratic Overparameterization for Shallow Neural Networks. Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, Volkan Cevher (02 Nov 2021).
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks. Hanxun Huang, Yisen Wang, S. Erfani, Quanquan Gu, James Bailey, Xingjun Ma (07 Oct 2021).
A Universal Law of Robustness via Isoperimetry. Sébastien Bubeck, Mark Sellke (26 May 2021).
A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. Mo Zhou, Rong Ge, Chi Jin (04 Feb 2021).
Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks. Quynh N. Nguyen, Marco Mondelli, Guido Montúfar (21 Dec 2020).
Do Wider Neural Networks Really Help Adversarial Robustness? Boxi Wu, Jinghui Chen, Deng Cai, Xiaofei He, Quanquan Gu (03 Oct 2020).
A law of robustness for two-layers neural networks. Sébastien Bubeck, Yuanzhi Li, Dheeraj M. Nagaraj (30 Sep 2020).
Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS. Lin Chen, Sheng Xu (22 Sep 2020).
Phase diagram for two-layer ReLU neural networks at infinite-width limit. Yaoyu Zhang, Zhi-Qin John Xu, Zheng Ma (15 Jul 2020).
On the Similarity between the Laplace and Neural Tangent Kernels. Amnon Geifman, A. Yadav, Yoni Kasten, Meirav Galun, David Jacobs, Ronen Basri (03 Jul 2020).
Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization. Chong You, Zhihui Zhu, Qing Qu, Yi-An Ma (16 Jun 2020).
The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks. Itay Safran, Gilad Yehudai, Ohad Shamir (01 Jun 2020).
Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei (28 May 2020).
Feature Purification: How Adversarial Training Performs Robust Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li (20 May 2020).
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. Francesco Croce, Matthias Hein (03 Mar 2020).
Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models. Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans (01 Mar 2020).
Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality. Yi Zhang, Orestis Plevrakis, S. Du, Xingguo Li, Zhao Song, Sanjeev Arora (16 Feb 2020).
Proving the Lottery Ticket Hypothesis: Pruning is All You Need. Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir (03 Feb 2020).
An Analysis of the Expressiveness of Deep Neural Network Architectures Based on Their Lipschitz Constants. Siqi Zhou, Angela P. Schoellig (24 Dec 2019).
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks? Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu (27 Nov 2019).
Convergence of Adversarial Training in Overparametrized Neural Networks. Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee (19 Jun 2019).
Kernel and Rich Regimes in Overparametrized Models. Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, Jason D. Lee, Daniel Soudry, Nathan Srebro (13 Jun 2019).
Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks. Yuan Cao, Quanquan Gu (30 May 2019).
On the Inductive Bias of Neural Tangent Kernels. A. Bietti, Julien Mairal (29 May 2019).
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. Song Mei, Theodor Misiakiewicz, Andrea Montanari (16 Feb 2019).
Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. Samet Oymak, Mahdi Soltanolkotabi (12 Feb 2019).
On Lazy Training in Differentiable Programming. Lénaïc Chizat, Edouard Oyallon, Francis R. Bach (19 Dec 2018).
A Convergence Theory for Deep Learning via Over-Parameterization. Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song (09 Nov 2018).
Gradient Descent Finds Global Minima of Deep Neural Networks. S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka (09 Nov 2018).
Gradient Descent Provably Optimizes Over-parameterized Neural Networks. S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh (04 Oct 2018).
Neural Tangent Kernel: Convergence and Generalization in Neural Networks. Arthur Jacot, Franck Gabriel, Clément Hongler (20 Jun 2018).
Algorithmic Regularization in Learning Deep Homogeneous Models: Layers are Automatically Balanced. S. Du, Wei Hu, Jason D. Lee (04 Jun 2018).
On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport. Lénaïc Chizat, Francis R. Bach (24 May 2018).
Adversarially Robust Generalization Requires More Data. Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry (30 Apr 2018).
A Mean Field View of the Landscape of Two-Layers Neural Networks. Song Mei, Andrea Montanari, Phan-Minh Nguyen (18 Apr 2018).
Gradient Descent Quantizes ReLU Network Features. Hartmut Maennel, Olivier Bousquet, Sylvain Gelly (22 Mar 2018).
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. Anish Athalye, Nicholas Carlini, D. Wagner (01 Feb 2018).
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach. Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, D. Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel (31 Jan 2018).
Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu (19 Jun 2017).
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation. Matthias Hein, Maksym Andriushchenko (23 May 2017).
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (10 Dec 2015).
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (06 Feb 2015).
Explaining and Harnessing Adversarial Examples. Ian Goodfellow, Jonathon Shlens, Christian Szegedy (20 Dec 2014).
Intriguing properties of neural networks. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus (21 Dec 2013).