ResearchTrend.AI

On the Universality of the Double Descent Peak in Ridgeless Regression
David Holzmüller
arXiv:2010.01851, 5 October 2020

Papers citing "On the Universality of the Double Descent Peak in Ridgeless Regression"

15 papers:

  1. The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
     Ben Adlam, Jeffrey Pennington (15 Aug 2020)
  2. Multiple Descent: Design Your Own Generalization Curve
     Lin Chen, Yifei Min, M. Belkin, Amin Karbasi (03 Aug 2020)
  3. More Data Can Hurt for Linear Regression: Sample-wise Double Descent
     Preetum Nakkiran (16 Dec 2019)
  4. Mish: A Self Regularized Non-Monotonic Activation Function
     Diganta Misra (23 Aug 2019)
  5. Surprises in High-Dimensional Ridgeless Least Squares Interpolation
     Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani (19 Mar 2019)
  6. Reconciling modern machine learning practice and the bias-variance trade-off
     M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal (28 Dec 2018)
  7. A Modern Take on the Bias-Variance Tradeoff in Neural Networks
     Brady Neal, Sarthak Mittal, A. Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, Ioannis Mitliagkas (19 Oct 2018)
  8. A VEST of the Pseudoinverse Learning Algorithm
     Ping Guo (20 May 2018)
  9. To understand deep learning we need to understand kernel learning
     M. Belkin, Siyuan Ma, Soumik Mandal (05 Feb 2018)
 10. Self-Normalizing Neural Networks
     Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter (08 Jun 2017)
 11. Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
     Stefan Elfwing, E. Uchibe, Kenji Doya (10 Feb 2017)
 12. Understanding deep learning requires rethinking generalization
     Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals (10 Nov 2016)
 13. Gaussian Error Linear Units (GELUs)
     Dan Hendrycks, Kevin Gimpel (27 Jun 2016)
 14. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
     Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter (23 Nov 2015)
 15. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
     Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (06 Feb 2015)