Beating SGD Saturation with Tail-Averaging and Minibatching

22 February 2019
Nicole Mücke, Gergely Neu, Lorenzo Rosasco
arXiv: 1902.08668
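For context, the paper studies tail-averaged minibatch SGD for least squares. The sketch below illustrates the general scheme under simple assumptions; the function name, step size, batch size, and tail fraction are illustrative choices, not the paper's tuned parameters.

```python
import numpy as np

def tail_averaged_minibatch_sgd(X, y, batch_size=32, lr=0.1,
                                n_steps=500, tail_frac=0.5, seed=0):
    """Minibatch SGD for least squares, returning the tail average.

    A minimal sketch: constants are illustrative, and the step size
    assumes roughly unit-scale features.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    iterates = []
    for _ in range(n_steps):
        # Sample a minibatch uniformly with replacement.
        idx = rng.integers(0, n, size=batch_size)
        Xb, yb = X[idx], y[idx]
        # Minibatch gradient of the squared loss (1/2)||Xw - y||^2.
        grad = Xb.T @ (Xb @ w - yb) / batch_size
        w = w - lr * grad
        iterates.append(w.copy())
    # Tail-averaging: average only the last tail_frac of the iterates,
    # discarding the burn-in phase instead of averaging from the start.
    tail_start = int((1 - tail_frac) * n_steps)
    return np.mean(iterates[tail_start:], axis=0)

# Usage on synthetic data:
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)
w_hat = tail_averaged_minibatch_sgd(X, y)
```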

Papers citing "Beating SGD Saturation with Tail-Averaging and Minibatching"

9 papers shown:

  1. Regularized least squares learning with heavy-tailed noise is minimax optimal
     Mattes Mollenhauer, Nicole Mücke, Dimitri Meunier, Arthur Gretton (20 May 2025)
  2. The Implicit Regularization of Stochastic Gradient Flow for Least Squares
     Alnur Ali, Yan Sun, Robert Tibshirani (17 Mar 2020)
  3. Introduction to Online Convex Optimization
     Elad Hazan (07 Sep 2019)
  4. Iterate averaging as regularization for stochastic gradient descent
     Gergely Neu, Lorenzo Rosasco (22 Feb 2018)
  5. Optimal Rates For Regularization Of Statistical Inverse Learning Problems
     Gilles Blanchard, Nicole Mücke (14 Apr 2016)
  6. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)
     Francis R. Bach, Eric Moulines (10 Jun 2013)
  7. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
     Ohad Shamir, Tong Zhang (08 Dec 2012)
  8. Better Mini-Batch Algorithms via Accelerated Gradient Methods
     Andrew Cotter, Ohad Shamir, Nathan Srebro, Karthik Sridharan (22 Jun 2011)
  9. Online Learning as Stochastic Approximation of Regularization Paths
     P. Tarres, Yuan Yao (29 Mar 2011)