Beating SGD Saturation with Tail-Averaging and Minibatching
Nicole Mücke, Gergely Neu, Lorenzo Rosasco
arXiv: 1902.08668, 22 February 2019
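The title names two standard techniques: tail-averaging (returning the average of only the last SGD iterates, rather than the full Polyak average) and minibatching (estimating each gradient from a small batch of samples). As a rough illustration of that combination, here is a minimal Python sketch on a synthetic least-squares problem; the data, step size, batch size, iteration count, and the choice to average the last half of the iterates are all illustrative assumptions, not parameters taken from the paper.

# Minimal sketch of tail-averaged minibatch SGD for least squares.
# All problem sizes and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: y = X w_star + noise.
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

def tail_averaged_sgd(X, y, batch_size=16, step_size=0.05, n_iters=500):
    """Run minibatch SGD on the least-squares objective and return the
    tail average, i.e. the mean of the second half of the iterates."""
    n, d = X.shape
    w = np.zeros(d)
    tail_start = n_iters // 2      # only iterates t >= tail_start are averaged
    w_tail_sum = np.zeros(d)
    for t in range(n_iters):
        idx = rng.integers(0, n, size=batch_size)  # sample a minibatch
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w = w - step_size * grad
        if t >= tail_start:
            w_tail_sum += w
    return w_tail_sum / (n_iters - tail_start)

w_hat = tail_averaged_sgd(X, y)
print("estimation error:", np.linalg.norm(w_hat - w_star))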
Papers citing "Beating SGD Saturation with Tail-Averaging and Minibatching" (9 papers)
Regularized least squares learning with heavy-tailed noise is minimax optimal
Mattes Mollenhauer, Nicole Mücke, Dimitri Meunier, Arthur Gretton (20 May 2025)

The Implicit Regularization of Stochastic Gradient Flow for Least Squares
Alnur Ali, Yan Sun, Robert Tibshirani (17 Mar 2020)

Introduction to Online Convex Optimization
Elad Hazan (07 Sep 2019)

Iterate averaging as regularization for stochastic gradient descent
Gergely Neu, Lorenzo Rosasco (22 Feb 2018)

Optimal Rates For Regularization Of Statistical Inverse Learning Problems
Gilles Blanchard, Nicole Mücke (14 Apr 2016)

Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)
Francis R. Bach, Eric Moulines (10 Jun 2013)

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang (08 Dec 2012)

Better Mini-Batch Algorithms via Accelerated Gradient Methods
Andrew Cotter, Ohad Shamir, Nathan Srebro, Karthik Sridharan (22 Jun 2011)

Online Learning as Stochastic Approximation of Regularization Paths
P. Tarres, Yuan Yao (29 Mar 2011)