Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime

7 March 2022
Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham Kakade
ArXiv (abs) · PDF · HTML

Papers citing "Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime"

13 / 13 papers shown

SGD with shuffling: optimal rates without component convexity and large epoch requirements
Kwangjun Ahn, Chulhee Yun, S. Sra
12 Jun 2020

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression
Denny Wu, Ji Xu
10 Jun 2020

Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
Sharan Vaswani, Aaron Mishkin, I. Laradji, Mark Schmidt, Gauthier Gidel, Simon Lacoste-Julien
24 May 2019

The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares
Rong Ge, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli
29 Apr 2019

Surprises in High-Dimensional Ridgeless Least Squares Interpolation
Trevor Hastie, Andrea Montanari, Saharon Rosset, Robert Tibshirani
19 Mar 2019

Beating SGD Saturation with Tail-Averaging and Minibatching
Nicole Mücke, Gergely Neu, Lorenzo Rosasco
22 Feb 2019

Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron
Sharan Vaswani, Francis R. Bach, Mark Schmidt
16 Oct 2018

Random Shuffling Beats SGD after Finite Epochs
Jeff Z. HaoChen, S. Sra
26 Jun 2018

Iterate averaging as regularization for stochastic gradient descent
Gergely Neu, Lorenzo Rosasco
22 Feb 2018

The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning
Siyuan Ma, Raef Bassily, M. Belkin
18 Dec 2017

Early stopping for kernel boosting algorithms: A general analysis with localized complexities
Yuting Wei, Fanny Yang, Martin J. Wainwright
05 Jul 2017

Data-Dependent Stability of Stochastic Gradient Descent
Ilja Kuzborskij, Christoph H. Lampert
05 Mar 2017

Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)
Francis R. Bach, Eric Moulines
10 Jun 2013