ResearchTrend.AI
Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization

7 January 2021
Karan N. Chadha, Gary Cheng, John C. Duchi

Papers citing "Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization"

6 papers shown

  • Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance
    Dimitris Oikonomou, Nicolas Loizou — 6 Jun 2024
  • MoMo: Momentum Models for Adaptive Learning Rates
    Fabian Schaipp, Ruben Ohana, Michael Eickenberg, Aaron Defazio, Robert Mansel Gower — 12 May 2023
  • Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
    Xiao-Tong Yuan, P. Li — 9 Jan 2023
  • Private optimization in the interpolation regime: faster rates and hardness results
    Hilal Asi, Karan N. Chadha, Gary Cheng, John C. Duchi — 31 Oct 2022
  • Convergence and Stability of the Stochastic Proximal Point Algorithm with Momentum
    J. Kim, Panos Toulis, Anastasios Kyrillidis — 11 Nov 2021
  • Optimal Distributed Online Prediction using Mini-Batches
    O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao — 7 Dec 2010