More Data Can Hurt for Linear Regression: Sample-wise Double Descent

16 December 2019
Preetum Nakkiran
arXiv: 1912.07242
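The paper's central claim, that test error for minimum-norm least squares can rise and then fall again as the sample count n grows past the dimension d, is easy to reproduce numerically. Below is a minimal sketch, not taken from the paper: the dimension, noise level, and sample sizes are illustrative assumptions, and NumPy's `lstsq` stands in for the minimum-norm interpolating solution the paper analyzes.

```python
# Minimal sketch of sample-wise double descent for minimum-norm least
# squares. All constants (d, sigma, sample sizes) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 50                                   # ambient dimension
beta = rng.normal(size=d) / np.sqrt(d)   # ground-truth coefficients, ||beta|| ~ 1
sigma = 0.5                              # label-noise standard deviation
n_test = 2000

X_test = rng.normal(size=(n_test, d))
y_test = X_test @ beta                   # noiseless test labels

def test_mse(n, trials=200):
    """Average test MSE of min-norm least squares fit on n noisy samples."""
    errs = []
    for _ in range(trials):
        X = rng.normal(size=(n, d))
        y = X @ beta + sigma * rng.normal(size=n)
        # np.linalg.lstsq returns the minimum-norm solution when n < d
        beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        errs.append(np.mean((X_test @ beta_hat - y_test) ** 2))
    return np.mean(errs)

for n in [10, 25, 40, 48, 50, 52, 60, 100, 200]:
    print(f"n={n:4d}  test MSE={test_mse(n):.3f}")
# Expect the error to spike near n = d = 50 and then fall again:
# in this regime, adding samples first hurts before it helps.
```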

Papers citing "More Data Can Hurt for Linear Regression: Sample-wise Double Descent"

18 of 18 papers shown.

 1. The Double Descent Behavior in Two Layer Neural Network for Binary Classification
    Chathurika S Abeykoon, A. Beknazaryan, Hailin Sang (27 Apr 2025)
 2. How more data can hurt: Instability and regularization in next-generation reservoir computing
    Yuanzhao Zhang, Edmilson Roque dos Santos, Sean P. Cornelius (28 Jan 2025)
 3. Gibbs-Based Information Criteria and the Over-Parameterized Regime
    Haobo Chen, Yuheng Bu, Greg Wornell (08 Jun 2023)
 4. Collaborative Development of NLP models
    Fereshte Khani, Marco Tulio Ribeiro (20 May 2023)
 5. High Dimensional Binary Classification under Label Shift: Phase Transition and Regularization
    Jiahui Cheng, Minshuo Chen, Hao Liu, Tuo Zhao, Wenjing Liao (01 Dec 2022)
 6. A Survey of Learning Curves with Bad Behavior: or How More Data Need Not Lead to Better Performance
    Marco Loog, T. Viering (25 Nov 2022)
 7. Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models
    Ethan Pickering, T. Sapsis (27 Aug 2022)
 8. What Can Transformers Learn In-Context? A Case Study of Simple Function Classes
    Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant (01 Aug 2022)
 9. Regularization-wise double descent: Why it occurs and how to eliminate it
    Fatih Yilmaz, Reinhard Heckel (03 Jun 2022)
10. Contrasting random and learned features in deep Bayesian linear regression
    Jacob A. Zavatone-Veth, William L. Tong, Cengiz Pehlevan (01 Mar 2022) [Topics: BDL, MLT]
11. Differentially Private Regression with Unbounded Covariates
    Jason Milionis, Alkis Kalavasis, Dimitris Fotakis, Stratis Ioannidis (19 Feb 2022)
12. Classification and Adversarial examples in an Overparameterized Linear Model: A Signal Processing Perspective
    Adhyyan Narang, Vidya Muthukumar, A. Sahai (27 Sep 2021) [Topics: SILM, AAML]
13. The Shape of Learning Curves: a Review
    T. Viering, Marco Loog (19 Mar 2021)
14. Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models
    J. Rocks, Pankaj Mehta (26 Oct 2020)
15. Classification vs regression in overparameterized regimes: Does the loss function matter?
    Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai (16 May 2020)
16. Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
    Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala (02 Mar 2020)
17. A Model of Double Descent for High-dimensional Binary Linear Classification
    Zeyu Deng, A. Kammoun, Christos Thrampoulidis (13 Nov 2019)
18. Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization
    D. Kobak, Jonathan Lomond, Benoit Sanchez (28 May 2018)