ResearchTrend.AI

Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes (arXiv:1805.10074)

25 May 2018
Loucas Pillaud-Vivien
Alessandro Rudi
Francis R. Bach

Papers citing "Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes"

24 papers shown
The Optimization Landscape of SGD Across the Feature Learning Strength
Alexander B. Atanasov, Alexandru Meterez, James B. Simon, Cengiz Pehlevan
06 Oct 2024

How Feature Learning Can Improve Neural Scaling Laws
Blake Bordelon, Alexander B. Atanasov, Cengiz Pehlevan
26 Sep 2024

Scaling Laws in Linear Regression: Compute, Parameters, and Data
Licong Lin, Jingfeng Wu, Sham Kakade, Peter L. Bartlett, Jason D. Lee
LRM
12 Jun 2024

Spectral Algorithms on Manifolds through Diffusion
Weichun Xia, Lei Shi
06 Mar 2024

Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent
Lingjiong Zhu, Mert Gurbuzbalaban, Anant Raj, Umut Simsekli
20 May 2023

Optimality of Robust Online Learning
Zheng-Chu Guo, A. Christmann, Lei Shi
20 Apr 2023

On the Optimality of Misspecified Spectral Algorithms
Hao Zhang, Yicheng Li, Qian Lin
27 Mar 2023

Learning Lipschitz Functions by GD-trained Shallow Overparameterized ReLU Neural Networks
Ilja Kuzborskij, Csaba Szepesvári
28 Dec 2022

Online Regularized Learning Algorithm for Functional Data
Yuan Mao, Zheng-Chu Guo
24 Nov 2022

Statistical Optimality of Divide and Conquer Kernel-based Functional Linear Regression
Jiading Liu, Lei Shi
20 Nov 2022

Provable Generalization of Overparameterized Meta-learning Trained with SGD
Yu Huang, Yingbin Liang, Longbo Huang
MLT
18 Jun 2022

Active Labeling: Streaming Stochastic Gradients
Vivien A. Cabannes, Francis R. Bach, Vianney Perchet, Alessandro Rudi
26 May 2022

Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent
Yiping Lu, Jose H. Blanchet, Lexing Ying
15 May 2022

The Directional Bias Helps Stochastic Gradient Descent to Generalize in Kernel Regression Models
Yiling Luo, X. Huo, Y. Mei
29 Apr 2022

On the Benefits of Large Learning Rates for Kernel Methods
Gaspard Beugnot, Julien Mairal, Alessandro Rudi
28 Feb 2022

Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning
A. Bietti, Chen-Yu Wei, Miroslav Dudík, John Langford, Zhiwei Steven Wu
FedML
10 Feb 2022

Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints
Shaojie Li, Yong Liu
19 Jul 2021

Learning curves of generic features maps for realistic datasets with a teacher-student model
Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, M. Mézard, Lenka Zdeborová
16 Feb 2021

Fast rates in structured prediction
Vivien A. Cabannes, Alessandro Rudi, Francis R. Bach
01 Feb 2021

Kernel Methods for Causal Functions: Dose, Heterogeneous, and Incremental Response Curves
Rahul Singh, Liyuan Xu, Arthur Gretton
OffRL
10 Oct 2020

When Does Preconditioning Help or Hurt Generalization?
S. Amari, Jimmy Ba, Roger C. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu
18 Jun 2020

Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent
Yunwen Lei, Yiming Ying
MLT
15 Jun 2020

High probability generalization bounds for uniformly stable algorithms with nearly optimal rate
Vitaly Feldman, J. Vondrák
27 Feb 2019

Sobolev Norm Learning Rates for Regularized Least-Squares Algorithm
Simon Fischer, Ingo Steinwart
23 Feb 2017