ResearchTrend.AI

arXiv:1610.03774 · Cited By
Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification

12 October 2016
Prateek Jain, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford

Papers citing "Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification"

8 papers shown.

1. On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift
   Alekh Agarwal, Sham Kakade, J. Lee, G. Mahajan · 01 Aug 2019

2. The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares
   Rong Ge, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli · 29 Apr 2019

3. The Effect of Network Width on the Performance of Large-batch Training
   Lingjiao Chen, Hongyi Wang, Jinman Zhao, Dimitris Papailiopoulos, Paraschos Koutris · 11 Jun 2018

4. On the insufficiency of existing momentum schemes for Stochastic Optimization
   Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham Kakade · 15 Mar 2018

5. Iterate averaging as regularization for stochastic gradient descent
   Gergely Neu, Lorenzo Rosasco · 22 Feb 2018

6. Exponential convergence of testing error for stochastic gradient methods
   Loucas Pillaud-Vivien, Alessandro Rudi, Francis R. Bach · 13 Dec 2017

7. Stochastic Composite Least-Squares Regression with convergence rate O(1/n)
   Nicolas Flammarion, Francis R. Bach · 21 Feb 2017

8. Optimal Distributed Online Prediction using Mini-Batches
   O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao · 07 Dec 2010