An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning (arXiv:2106.02720)

4 June 2021
Blake E. Woodworth, Nathan Srebro

Papers citing "An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning"

5 papers

First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities
Aleksandr Beznosikov, S. Samsonov, Marina Sheshukova, Alexander Gasnikov, A. Naumov, Eric Moulines
25 May 2023

Exploring Local Norms in Exp-concave Statistical Learning
Nikita Puchkin, Nikita Zhivotovskiy
21 Feb 2023

Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
Xiao-Tong Yuan, P. Li
9 Jan 2023

Private optimization in the interpolation regime: faster rates and hardness results
Hilal Asi, Karan N. Chadha, Gary Cheng, John C. Duchi
31 Oct 2022

Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
7 Dec 2010