ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Two models of double descent for weak features

18 March 2019
M. Belkin, Daniel J. Hsu, Ji Xu
arXiv: 1903.07571

Papers citing "Two models of double descent for weak features"

12 / 262 papers shown
MaxiMin Active Learning in Overparameterized Model Classes
Mina Karzand, Robert D. Nowak
29 May 2019

Implicit Rugosity Regularization via Data Augmentation
Daniel LeJeune, Randall Balestriero, Hamid Javadi, Richard G. Baraniuk
28 May 2019

Empirical Risk Minimization in the Interpolating Regime with Application to Neural Network Learning
Nicole Mücke, Ingo Steinwart
25 May 2019

Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
27 Apr 2019

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, R. Tibshirani
19 Mar 2019

Consistent Risk Estimation in Moderately High-Dimensional Linear Regression
Ji Xu, A. Maleki, Kamiar Rahnama Rad, Daniel J. Hsu
05 Feb 2019

Scaling description of generalization with number of parameters in deep learning
Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, M. Wyart
06 Jan 2019

Regularized Zero-Variance Control Variates
Leah F. South, Chris J. Oates, Antonietta Mira, Christopher C. Drovandi
13 Nov 2018

A jamming transition from under- to over-parametrization affects loss landscape and generalization
S. Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, M. Wyart
22 Oct 2018

A Modern Take on the Bias-Variance Tradeoff in Neural Networks
Brady Neal, Sarthak Mittal, A. Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, Ioannis Mitliagkas
19 Oct 2018

Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization
D. Kobak, Jonathan Lomond, Benoit Sanchez
28 May 2018

High-dimensional dynamics of generalization error in neural networks
Madhu S. Advani, Andrew M. Saxe
10 Oct 2017