arXiv:1906.05392
Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian
12 June 2019
Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi [MLT]
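The paper's central object is the Jacobian of the network outputs with respect to the weights: its spectrum splits into a few large directions and a long tail of small ones, and the generalization bounds harness that approximate low-rankness. Below is a minimal NumPy sketch of the phenomenon, assuming a one-hidden-layer ReLU network f(x) = v^T ReLU(Wx) with fixed output weights and clusterable inputs; all sizes and variable names are illustrative choices, not the paper's experiments.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the paper: n samples, input
# dimension d, hidden width k, and C input clusters.
n, d, k, C = 200, 20, 100, 5

# Clusterable inputs: a few centers plus small noise. Cluster structure
# like this is one way an approximately low-rank Jacobian arises.
centers = rng.standard_normal((C, d))
X = centers[rng.integers(0, C, size=n)] + 0.05 * rng.standard_normal((n, d))

# One-hidden-layer ReLU net f(x) = v^T ReLU(W x); only W is treated as
# trainable, the output layer v is held fixed.
W = rng.standard_normal((k, d))
v = rng.standard_normal(k) / np.sqrt(k)

# Jacobian of the n outputs w.r.t. the k*d entries of W. For this model
# the gradient of f(x_i) w.r.t. row w_j of W is v_j * 1{w_j^T x_i > 0} * x_i.
act = (X @ W.T > 0).astype(float)            # n x k activation patterns
J = (act * v)[:, :, None] * X[:, None, :]    # n x k x d per-sample gradients
J = J.reshape(n, k * d)

# Spectrum of J: samples from the same cluster have nearly identical
# Jacobian rows, so most spectral energy sits in the top few directions.
s = np.linalg.svd(J, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print("top 6 singular values:", np.round(s[:6], 2))
print("directions for 95% of spectral energy:",
      int(np.searchsorted(energy, 0.95)) + 1)

Tightening the clusters (smaller noise scale) concentrates the energy in even fewer directions, while replacing X with isotropic Gaussian inputs makes the decay much milder.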
Papers citing "Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian" (50 of 50 papers shown):
- LoR-VP: Low-Rank Visual Prompting for Efficient Vision Model Adaptation. Can Jin, Ying Li, Mingyu Zhao, Shiyu Zhao, Zhenting Wang, Xiaoxiao He, Ligong Han, Tong Che, Dimitris N. Metaxas. 02 Feb 2025. [VPVLM, VLM]
- Parameter-Efficient Fine-Tuning for Foundation Models. Dan Zhang, Tao Feng, Lilong Xue, Yuandong Wang, Yuxiao Dong, J. Tang. 23 Jan 2025.
- Length independent generalization bounds for deep SSM architectures via Rademacher contraction and stability constraints. Dániel Rácz, Mihaly Petreczky, Bálint Daróczy. 30 May 2024.
- Understanding Generalization through Visualizations. Wenjie Huang, Z. Emam, Micah Goldblum, Liam H. Fowl, J. K. Terry, Furong Huang, Tom Goldstein. 07 Jun 2019. [AI4CE]
- Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience. Vaishnavh Nagarajan, J. Zico Kolter. 30 May 2019.
- Generalization bounds for deep convolutional neural networks. Philip M. Long, Hanie Sedghi. 29 May 2019. [MLT]
- Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems. Atsushi Nitanda, Geoffrey Chinot, Taiji Suzuki. 23 May 2019. [MLT]
- Linearized two-layers neural networks in high dimension. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari. 27 Apr 2019. [MLT]
- A Comparative Analysis of the Optimization and Generalization Property of Two-layer Neural Network and Random Feature Models Under Gradient Descent Dynamics. E. Weinan, Chao Ma, Lei Wu. 08 Apr 2019. [MLT]
- On the Power and Limitations of Random Features for Understanding Neural Networks. Gilad Yehudai, Ohad Shamir. 01 Apr 2019. [MLT]
- Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks. Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak. 27 Mar 2019. [NoLa]
- Two models of double descent for weak features. M. Belkin, Daniel J. Hsu, Ji Xu. 18 Mar 2019.
- Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. Samet Oymak, Mahdi Soltanolkotabi. 12 Feb 2019.
- Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks. Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang. 24 Jan 2019. [MLT]
- Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits. Xialiang Dou, Tengyuan Liang. 21 Jan 2019. [MLT]
- Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path? Samet Oymak, Mahdi Soltanolkotabi. 25 Dec 2018. [ODL]
- On Lazy Training in Differentiable Programming. Lénaïc Chizat, Edouard Oyallon, Francis R. Bach. 19 Dec 2018.
- Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks. Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu. 21 Nov 2018. [ODL]
- Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. 12 Nov 2018. [MLT]
- A Convergence Theory for Deep Learning via Over-Parameterization. Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. 09 Nov 2018. [AI4CE, ODL]
- Gradient Descent Finds Global Minima of Deep Neural Networks. S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka. 09 Nov 2018. [ODL]
- Rademacher Complexity for Adversarially Robust Generalization. Dong Yin, Kannan Ramchandran, Peter L. Bartlett. 29 Oct 2018. [AAML]
- Gradient Descent Provably Optimizes Over-parameterized Neural Networks. S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh. 04 Oct 2018. [MLT, ODL]
- Gradient descent aligns the layers of deep linear networks. Ziwei Ji, Matus Telgarsky. 04 Oct 2018.
- Mean Field Analysis of Neural Networks: A Central Limit Theorem. Justin A. Sirignano, K. Spiliopoulos. 28 Aug 2018. [MLT]
- Just Interpolate: Kernel "Ridgeless" Regression Can Generalize. Tengyuan Liang, Alexander Rakhlin. 01 Aug 2018.
- Does data interpolation contradict statistical optimality? M. Belkin, Alexander Rakhlin, Alexandre B. Tsybakov. 25 Jun 2018.
- Neural Tangent Kernel: Convergence and Generalization in Neural Networks. Arthur Jacot, Franck Gabriel, Clément Hongler. 20 Jun 2018.
- Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate. M. Belkin, Daniel J. Hsu, P. Mitra. 13 Jun 2018. [AI4CE]
- Implicit Bias of Gradient Descent on Linear Convolutional Networks. Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro. 01 Jun 2018. [MDE]
- On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport. Lénaïc Chizat, Francis R. Bach. 24 May 2018. [OT]
- A Mean Field View of the Landscape of Two-Layers Neural Networks. Song Mei, Andrea Montanari, Phan-Minh Nguyen. 18 Apr 2018. [MLT]
- Risk and parameter convergence of logistic regression. Ziwei Ji, Matus Telgarsky. 20 Mar 2018.
- Stronger generalization bounds for deep nets via a compression approach. Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang. 14 Feb 2018. [MLT, AI4CE]
- Learning One Convolutional Layer with Overlapping Patches. Surbhi Goel, Adam R. Klivans, Raghu Meka. 07 Feb 2018. [MLT]
- Visualizing the Loss Landscape of Neural Nets. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein. 28 Dec 2017.
- Size-Independent Sample Complexity of Neural Networks. Noah Golowich, Alexander Rakhlin, Ohad Shamir. 18 Dec 2017.
- The Implicit Bias of Gradient Descent on Separable Data. Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, Nathan Srebro. 27 Oct 2017.
- SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data. Alon Brutzkus, Amir Globerson, Eran Malach, Shai Shalev-Shwartz. 27 Oct 2017. [MLT]
- A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks. Behnam Neyshabur, Srinadh Bhojanapalli, Nathan Srebro. 29 Jul 2017.
- Exploring Generalization in Deep Learning. Behnam Neyshabur, Srinadh Bhojanapalli, David A. McAllester, Nathan Srebro. 27 Jun 2017. [FAtt]
- Empirical Analysis of the Hessian of Over-Parametrized Neural Networks. Levent Sagun, Utku Evci, V. U. Güney, Yann N. Dauphin, Léon Bottou. 14 Jun 2017.
- Train longer, generalize better: closing the generalization gap in large batch training of neural networks. Elad Hoffer, Itay Hubara, Daniel Soudry. 24 May 2017. [ODL]
- Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. Gintare Karolina Dziugaite, Daniel M. Roy. 31 Mar 2017.
- Understanding deep learning requires rethinking generalization. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals. 10 Nov 2016. [HAI]
- Entropy-SGD: Biasing Gradient Descent Into Wide Valleys. Pratik Chaudhari, A. Choromańska, Stefano Soatto, Yann LeCun, Carlo Baldassi, C. Borgs, J. Chayes, Levent Sagun, R. Zecchina. 06 Nov 2016. [ODL]
- On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016. [ODL]
- A vector-contraction inequality for Rademacher complexities. Andreas Maurer. 01 May 2016.
- Train faster, generalize better: Stability of stochastic gradient descent. Moritz Hardt, Benjamin Recht, Y. Singer. 03 Sep 2015.
- A useful variant of the Davis–Kahan theorem for statisticians. Yi Yu, Tengyao Wang, R. Samworth. 04 May 2014.