Proximal Online Gradient is Optimum for Dynamic Regret
arXiv:1810.03594, 8 October 2018
Yawei Zhao, Shuang Qiu, Ji Liu

Papers citing "Proximal Online Gradient is Optimum for Dynamic Regret" (6 of 6 shown):

  1. "Introduction to Online Convex Optimization" by Elad Hazan, 07 Sep 2019
  2. "Adaptive Online Learning in Dynamic Environments" by Lijun Zhang, Shiyin Lu, and Zhi-Hua Zhou, 25 Oct 2018
  3. "An Online Convex Optimization Approach to Dynamic Network Resource Allocation" by Tianyi Chen, Qing Ling, and G. Giannakis, 14 Jan 2017
  4. "Strongly Adaptive Online Learning" by Amit Daniely, Alon Gonen, and Shai Shalev-Shwartz, 25 Feb 2015
  5. "Non-stationary Stochastic Optimization" by Omar Besbes, Y. Gur, and A. Zeevi, 20 Jul 2013
  6. "Mirror Descent Meets Fixed Share (and feels no regret)" by Nicolò Cesa-Bianchi, Pierre Gaillard, Gábor Lugosi, and Gilles Stoltz, 15 Feb 2012