Analysis of Overparameterization in Continual Learning under a Linear Model

11 February 2025
Daniel Goldfarb, Paul Hand
CLL
arXiv: 2502.10442

Papers citing "Analysis of Overparameterization in Continual Learning under a Linear Model"

15 papers shown

1. Continual Learning in Linear Classification on Separable Data
   Itay Evron, E. Moroshko, G. Buzaglo, M. Khriesh, B. Marjieh, Nathan Srebro, Daniel Soudry
   CLL · 06 Jun 2023

2. Theory on Forgetting and Generalization of Continual Learning
   Sen Lin, Peizhong Ju, Yitao Liang, Ness B. Shroff
   CLL · 12 Feb 2023

3. Wide Neural Networks Forget Less Catastrophically
   Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Huiyi Hu, Razvan Pascanu, Dilan Görür, Mehrdad Farajtabar
   CLL · 21 Oct 2021

4. A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
   Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk
   06 Sep 2021

5. Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
   Ben Adlam, Jeffrey Pennington
   UD · 04 Nov 2020

6. Deep Double Descent: Where Bigger Models and More Data Hurt
   Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever
   04 Dec 2019

7. Orthogonal Gradient Descent for Continual Learning
   Mehrdad Farajtabar, Navid Azizan, Alex Mott, Ang Li
   CLL · 15 Oct 2019

8. Benign Overfitting in Linear Regression
   Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler
   MLT · 26 Jun 2019

9. Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process
   Guy Blanc, Neha Gupta, Gregory Valiant, Paul Valiant
   19 Apr 2019

10. Harmless interpolation of noisy data in regression
    Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, A. Sahai
    21 Mar 2019

11. Surprises in High-Dimensional Ridgeless Least Squares Interpolation
    Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani
    19 Mar 2019

12. Two models of double descent for weak features
    M. Belkin, Daniel J. Hsu, Ji Xu
    18 Mar 2019

13. Continual Learning with Deep Generative Replay
    Hanul Shin, Jung Kwon Lee, Jaehong Kim, Jiwon Kim
    KELM, CLL · 24 May 2017

14. Overcoming catastrophic forgetting in neural networks
    J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, J. Veness, Guillaume Desjardins, ..., A. Grabska-Barwinska, Demis Hassabis, Claudia Clopath, D. Kumaran, R. Hadsell
    CLL · 02 Dec 2016

15. Learning without Forgetting
    Zhizhong Li, Derek Hoiem
    CLL, OOD, SSL · 29 Jun 2016