Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize

28 October 2021
Ryan D'Orazio, Nicolas Loizou, I. Laradji, Ioannis Mitliagkas

Papers citing "Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize"

18 / 18 papers shown

Primal-dual algorithm for contextual stochastic combinatorial optimization
Louis Bouvier, Thibault Prunet, Vincent Leclère, Axel Parmentier
07 May 2025

Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance
Dimitris Oikonomou, Nicolas Loizou
06 Jun 2024

Generalized Exponentiated Gradient Algorithms and Their Application to On-Line Portfolio Selection
Andrzej Cichocki, S. Cruces, A. Sarmiento, Toshihisa Tanaka
02 Jun 2024

Faster Convergence of Stochastic Accelerated Gradient Descent under Interpolation
Aaron Mishkin, Mert Pilanci, Mark Schmidt
03 Apr 2024

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
Sayantan Choudhury, N. Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard A. Gorbunov
05 Mar 2024

SANIA: Polyak-type Optimization Framework Leads to Scale Invariant Stochastic Algorithms
Farshed Abdukhakimov, Chulu Xiang, Dmitry Kamzolov, Robert Mansel Gower, Martin Takáč
28 Dec 2023

Fast Minimization of Expected Logarithmic Loss via Stochastic Dual Averaging
C. Tsai, Hao-Chung Cheng, Yen-Huan Li
05 Nov 2023

Locally Adaptive Federated Learning
Sohom Mukherjee, Nicolas Loizou, Sebastian U. Stich
FedML
12 Jul 2023

Decision-Aware Actor-Critic with Function Approximation and Theoretical Guarantees
Sharan Vaswani, A. Kazemi, Reza Babanezhad, Nicolas Le Roux
OffRL
24 May 2023

Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions
S. Choudhury, Eduard A. Gorbunov, Nicolas Loizou
27 Feb 2023

Low-rank Optimal Transport: Approximation, Statistics and Debiasing
M. Scetbon, Marco Cuturi
OT
24 May 2022

Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance
Nuri Mert Vural, Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu
23 Feb 2022

Convergence Rates for the MAP of an Exponential Family and Stochastic Mirror Descent -- an Open Problem
Rémi Le Priol, Frederik Kunstner, Damien Scieur, Simon Lacoste-Julien
12 Nov 2021

Inducing Equilibria via Incentives: Simultaneous Design-and-Play Ensures Global Convergence
Boyi Liu, Jiayang Li, Zhuoran Yang, Hoi-To Wai, Mingyi Hong, Y. Nie, Zhaoran Wang
04 Oct 2021

A Bregman Learning Framework for Sparse Neural Networks
Leon Bungert, Tim Roith, Daniel Tenbrinck, Martin Burger
10 May 2021

AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods
Zheng Shi, Abdurakhmon Sadiev, Nicolas Loizou, Peter Richtárik, Martin Takáč
ODL
19 Feb 2021

L4: Practical loss-based stepsize adaptation for deep learning
Michal Rolínek, Georg Martius
ODL
14 Feb 2018

Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
07 Dec 2010