Two models of double descent for weak features (arXiv:1903.07571)
18 March 2019
M. Belkin, Daniel J. Hsu, Ji Xu
Papers citing "Two models of double descent for weak features" (50 of 262 shown)
- Random matrix analysis of deep neural network weight matrices. M. Thamm, Max Staats, B. Rosenow. 28 Mar 2022.
- More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize. Alexander Wei, Wei Hu, Jacob Steinhardt. 11 Mar 2022.
- Bias-variance decomposition of overparameterized regression with random linear features. J. Rocks, Pankaj Mehta. 10 Mar 2022.
- Estimation under Model Misspecification with Fake Features. Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén. 07 Mar 2022.
- Model Comparison and Calibration Assessment: User Guide for Consistent Scoring Functions in Machine Learning and Actuarial Practice. Tobias Fissler, Christian Lorentzen, Michael Mayer. 25 Feb 2022.
- Benefit of Interpolation in Nearest Neighbor Algorithms. Yue Xing, Qifan Song, Guang Cheng. 23 Feb 2022.
- Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang. 21 Feb 2022. [OODD]
- Memorize to Generalize: on the Necessity of Interpolation in High Dimensional Linear Regression. Chen Cheng, John C. Duchi, Rohith Kuditipudi. 20 Feb 2022.
- On Optimal Early Stopping: Over-informative versus Under-informative Parametrization. Ruoqi Shen, Liyao (Mars) Gao, Yi Ma. 20 Feb 2022.
- Benign Overfitting in Two-layer Convolutional Neural Networks. Yuan Cao, Zixiang Chen, M. Belkin, Quanquan Gu. 14 Feb 2022. [MLT]
- Support Vectors and Gradient Dynamics of Single-Neuron ReLU Networks. Sangmin Lee, Byeongsu Sim, Jong Chul Ye. 11 Feb 2022. [MLT]
- Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference. Jasper Tan, Blake Mason, Hamid Javadi, Richard G. Baraniuk. 02 Feb 2022. [FedML]
- A Generalized Weighted Optimization Method for Computational Learning and Inversion. Bjorn Engquist, Kui Ren, Yunan Yang. 23 Jan 2022.
- Benign Overfitting in Adversarially Robust Linear Classification. Jinghui Chen, Yuan Cao, Quanquan Gu. 31 Dec 2021. [AAML, SILM]
- Optimistic Rates: A Unifying Theory for Interpolation Learning and Regularization in Linear Regression. Lijia Zhou, Frederic Koehler, Danica J. Sutherland, Nathan Srebro. 08 Dec 2021.
- SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning. Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward. 07 Dec 2021.
- A generalization gap estimation for overparameterized models via the Langevin functional variance. Akifumi Okuno, Keisuke Yano. 07 Dec 2021.
- Multi-scale Feature Learning Dynamics: Insights for Double Descent. Mohammad Pezeshki, Amartya Mitra, Yoshua Bengio, Guillaume Lajoie. 06 Dec 2021.
- Approximate Spectral Decomposition of Fisher Information Matrix for Simple ReLU Networks. Yoshinari Takeishi, Masazumi Iida, J. Takeuchi. 30 Nov 2021.
- The Three Stages of Learning Dynamics in High-Dimensional Kernel Methods. Nikhil Ghosh, Song Mei, Bin Yu. 13 Nov 2021.
- Harmless interpolation in regression and classification with structured features. Andrew D. McRae, Santhosh Karnik, Mark A. Davenport, Vidya Muthukumar. 09 Nov 2021.
- PAC-Bayesian Learning of Aggregated Binary Activated Neural Networks with Probabilities over Representations. Louis Fortier-Dubois, Gaël Letarte, Benjamin Leblanc, François Laviolette, Pascal Germain. 28 Oct 2021. [UQCV]
- Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model. A. Bodin, N. Macris. 22 Oct 2021.
- Conditioning of Random Feature Matrices: Double Descent and Generalization Error. Zhijun Chen, Hayden Schaeffer. 21 Oct 2021.
- Data splitting improves statistical performance in overparametrized regimes. Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, Markus Klein. 21 Oct 2021.
- On the Double Descent of Random Features Models Trained with SGD. Fanghui Liu, Johan A. K. Suykens, V. Cevher. 13 Oct 2021. [MLT]
- Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, Sham Kakade. 12 Oct 2021.
- Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective. Adhyyan Narang, Vidya Muthukumar, A. Sahai. 27 Sep 2021. [SILM, AAML]
- A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning. Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk. 06 Sep 2021.
- Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks. Adityanarayanan Radhakrishnan, George Stefanakis, M. Belkin, Caroline Uhler. 31 Jul 2021.
- On the Role of Optimization in Double Descent: A Least Squares Study. Ilja Kuzborskij, Csaba Szepesvári, Omar Rivasplata, Amal Rannen-Triki, Razvan Pascanu. 27 Jul 2021.
- Taxonomizing local versus global structure in neural network loss landscapes. Yaoqing Yang, Liam Hodgkinson, Ryan Theisen, Joe Zou, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney. 23 Jul 2021.
- Mitigating deep double descent by concatenating inputs. John Chen, Qihan Wang, Anastasios Kyrillidis. 02 Jul 2021. [BDL]
- Predictive Model Degrees of Freedom in Linear Regression. Bo Luan, Yoonkyung Lee, Yunzhang Zhu. 29 Jun 2021.
- A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning. Nikunj Saunshi, Arushi Gupta, Wei Hu. 29 Jun 2021. [SSL]
- A Mechanism for Producing Aligned Latent Spaces with Autoencoders. Saachi Jain, Adityanarayanan Radhakrishnan, Caroline Uhler. 29 Jun 2021.
- Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation. Ke Wang, Vidya Muthukumar, Christos Thrampoulidis. 21 Jun 2021.
- Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds, and Benign Overfitting. Frederic Koehler, Lijia Zhou, Danica J. Sutherland, Nathan Srebro. 17 Jun 2021.
- An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks. Shashank Rajput, Kartik K. Sreenivasan, Dimitris Papailiopoulos, Amin Karbasi. 14 Jun 2021.
- Early-stopped neural networks are consistent. Ziwei Ji, Justin D. Li, Matus Telgarsky. 10 Jun 2021.
- Nonasymptotic theory for two-layer neural networks: Beyond the bias-variance trade-off. Huiyuan Wang, Wei Lin. 09 Jun 2021. [MLT]
- Double Descent and Other Interpolation Phenomena in GANs. Lorenzo Luzi, Yehuda Dar, Richard Baraniuk. 07 Jun 2021.
- Towards an Understanding of Benign Overfitting in Neural Networks. Zhu Li, Zhi-Hua Zhou, Arthur Gretton. 06 Jun 2021. [MLT]
- Double Descent Optimization Pattern and Aliasing: Caveats of Noisy Labels. Florian Dubost, Erin Hong, Max Pike, Siddharth Sharma, Siyi Tang, Nandita Bhaskhar, Christopher Lee-Messer, D. Rubin. 03 Jun 2021. [NoLa]
- Optimization Variance: Exploring Generalization Properties of DNNs. Xiao Zhang, Dongrui Wu, Haoyi Xiong, Bo Dai. 03 Jun 2021.
- Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime. Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová. 31 May 2021.
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation. M. Belkin. 29 May 2021.
- Support vector machines and linear regression coincide with very high-dimensional features. Navid Ardeshir, Clayton Sanford, Daniel J. Hsu. 28 May 2021.
- Model Mismatch Trade-offs in LMMSE Estimation. Martin Hellkvist, Ayça Özçelikkale. 25 May 2021.
- A Precise Performance Analysis of Support Vector Regression. Houssem Sifaou, A. Kammoun, Mohamed-Slim Alouini. 21 May 2021.