Logarithmic Regret for Online Gradient Descent Beyond Strong Convexity

13 February 2018
Dan Garber

Papers citing "Logarithmic Regret for Online Gradient Descent Beyond Strong Convexity"

5 papers shown:

1. On Corruption-Robustness in Performative Reinforcement Learning
   Vasilis Pollatos, Debmalya Mandal, Goran Radanović
   08 May 2025

2. Introduction to Online Convex Optimization
   Elad Hazan
   07 Sep 2019

3. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
   Hamed Karimi, J. Nutini, Mark Schmidt
   16 Aug 2016

4. Efficient Second Order Online Learning by Sketching
   Haipeng Luo, Alekh Agarwal, Nicolò Cesa-Bianchi, John Langford
   06 Feb 2016

5. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
   Simon Lacoste-Julien, Mark Schmidt, Francis R. Bach
   10 Dec 2012