ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate

27 January 2019
Lijun Zhang, Zhi Zhou

Papers citing "Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate"

12 papers shown
Exploring Local Norms in Exp-concave Statistical Learning
Nikita Puchkin, Nikita Zhivotovskiy
21 Feb 2023
Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization
Peng Zhao, Yu Zhang, Lijun Zhang, Zhi Zhou
29 Dec 2021
Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent
Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad
21 Oct 2021
Stability and Generalization for Randomized Coordinate Descent
Puyu Wang, Liang Wu, Yunwen Lei
17 Aug 2021
Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints
Shaojie Li, Yong Liu
19 Jul 2021
Learning Under Delayed Feedback: Implicitly Adapting to Gradient Delays
R. Aviv, Ido Hakimi, Assaf Schuster, Kfir Y. Levy
23 Jun 2021
An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning
Blake E. Woodworth, Nathan Srebro
04 Jun 2021
Stability and Deviation Optimal Risk Bounds with Convergence Rate $O(1/n)$
Yegor Klochkov, Nikita Zhivotovskiy
22 Mar 2021
Towards Optimal Problem Dependent Generalization Error Bounds in Statistical Learning Theory
Yunbei Xu, A. Zeevi
12 Nov 2020
Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent
Yunwen Lei, Yiming Ying
15 Jun 2020
Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
Nicolas Loizou, Sharan Vaswani, I. Laradji, Simon Lacoste-Julien
24 Feb 2020
Memorized Sparse Backpropagation
Zhiyuan Zhang, Pengcheng Yang, Xuancheng Ren, Qi Su, Xu Sun
24 May 2019