Dynamic Regret of Adaptive Gradient Methods for Strongly Convex Problems
Parvin Nazari, E. Khorram
4 September 2022 · arXiv:2209.01608 · ODL
Papers citing "Dynamic Regret of Adaptive Gradient Methods for Strongly Convex Problems"

16 / 16 papers shown
Title
Dynamic Regret Analysis for Online Meta-Learning
Dynamic Regret Analysis for Online Meta-Learning
Parvin Nazari
E. Khorram
CLL
71
5
0
29 Sep 2021
Adaptive First-and Zeroth-order Methods for Weakly Convex Stochastic
  Optimization Problems
Adaptive First-and Zeroth-order Methods for Weakly Convex Stochastic Optimization Problems
Parvin Nazari
Davoud Ataee Tarzanagh
George Michailidis
ODL
41
13
0
19 May 2020
A new regret analysis for Adam-type algorithms
A new regret analysis for Adam-type algorithms
Ahmet Alacaoglu
Yura Malitsky
P. Mertikopoulos
Volkan Cevher
ODL
56
41
0
21 Mar 2020
Introduction to Online Convex Optimization
Introduction to Online Convex Optimization
Elad Hazan
OffRL
172
1,929
0
07 Sep 2019
On the Convergence of Adam and Beyond
On the Convergence of Adam and Beyond
Sashank J. Reddi
Satyen Kale
Surinder Kumar
93
2,499
0
19 Apr 2019
DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online
  Optimization
DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization
Parvin Nazari
Davoud Ataee Tarzanagh
George Michailidis
ODL
69
67
0
25 Jan 2019
Variants of RMSProp and Adagrad with Logarithmic Regret Bounds
Variants of RMSProp and Adagrad with Logarithmic Regret Bounds
Mahesh Chandra Mukkamala
Matthias Hein
ODL
54
258
0
17 Jun 2017
Improved Dynamic Regret for Non-degenerate Functions
Improved Dynamic Regret for Non-degenerate Functions
Lijun Zhang
Tianbao Yang
Jinfeng Yi
Jing Rong
Zhi Zhou
226
127
0
13 Aug 2016
Tracking Slowly Moving Clairvoyant: Optimal Dynamic Regret of Online
  Learning with True and Noisy Gradient
Tracking Slowly Moving Clairvoyant: Optimal Dynamic Regret of Online Learning with True and Noisy Gradient
Tianbao Yang
Lijun Zhang
Rong Jin
Jinfeng Yi
57
155
0
16 May 2016
MetaGrad: Multiple Learning Rates in Online Learning
MetaGrad: Multiple Learning Rates in Online Learning
T. Erven
Wouter M. Koolen
ODL
84
98
0
29 Apr 2016
Online Distributed Optimization on Dynamic Networks
Online Distributed Optimization on Dynamic Networks
Saghar Hosseini
Airlie Chapman
M. Mesbahi
77
145
0
22 Dec 2014
Adam: A Method for Stochastic Optimization
Adam: A Method for Stochastic Optimization
Diederik P. Kingma
Jimmy Ba
ODL
1.8K
150,115
0
22 Dec 2014
Non-stationary Stochastic Optimization
Non-stationary Stochastic Optimization
Omar Besbes
Y. Gur
A. Zeevi
178
433
0
20 Jul 2013
Dynamical Models and Tracking Regret in Online Convex Programming
Dynamical Models and Tracking Regret in Online Convex Programming
Eric C. Hall
Rebecca Willett
98
116
0
07 Jan 2013
ADADELTA: An Adaptive Learning Rate Method
ADADELTA: An Adaptive Learning Rate Method
Matthew D. Zeiler
ODL
152
6,625
0
22 Dec 2012
Adaptive Bound Optimization for Online Convex Optimization
Adaptive Bound Optimization for Online Convex Optimization
H. B. McMahan
Matthew J. Streeter
ODL
98
388
0
26 Feb 2010