arXiv:1902.09013
Artificial Constraints and Lipschitz Hints for Unconstrained Online Learning

24 February 2019
Ashok Cutkosky
Abstract

We provide algorithms that guarantee regret $R_T(u)\le \tilde O(G\|u\|^3 + G(\|u\|+1)\sqrt{T})$ or $R_T(u)\le \tilde O(G\|u\|^3 T^{1/3} + G T^{1/3} + G\|u\|\sqrt{T})$ for online convex optimization with $G$-Lipschitz losses for any comparison point $u$, without prior knowledge of either $G$ or $\|u\|$. Previous algorithms dispense with the $O(\|u\|^3)$ term at the expense of knowledge of one or both of these parameters, while a lower bound shows that some additional penalty term over $G\|u\|\sqrt{T}$ is necessary. Previous penalties were exponential, while our bounds are polynomial in all quantities. Further, given a known bound $\|u\|\le D$, our same techniques allow us to design algorithms that adapt optimally to the unknown value of $\|u\|$ without requiring knowledge of $G$.
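To make the regret notion $R_T(u) = \sum_t \ell_t(x_t) - \ell_t(u)$ concrete, here is a minimal sketch using plain online gradient descent on linear losses. This is deliberately *not* the paper's parameter-free algorithm: the step size below is tuned using prior knowledge of both $G$ and $T$, which is exactly the knowledge the paper's algorithms avoid. All names and constants are illustrative.

```python
import numpy as np

# Sketch of the regret R_T(u) = sum_t l_t(x_t) - l_t(u) for online convex
# optimization with linear losses l_t(x) = <g_t, x>, where ||g_t|| <= G.
# Plain online gradient descent with a tuned step size -- NOT the paper's
# parameter-free method, which needs no prior knowledge of G or ||u||.

rng = np.random.default_rng(0)
T, d = 1000, 5
G = 1.0                        # Lipschitz constant (assumed known here)
u = np.ones(d)                 # an arbitrary comparison point
eta = 1.0 / (G * np.sqrt(T))   # step size tuned using G and T

x = np.zeros(d)                # learner's first iterate
regret = 0.0
for _ in range(T):
    g = rng.standard_normal(d)
    g *= G / np.linalg.norm(g)   # adversary's gradient, scaled so ||g|| = G
    regret += g @ x - g @ u      # accumulate l_t(x_t) - l_t(u)
    x = x - eta * g              # unconstrained OGD update

# Standard OGD guarantee with x_1 = 0: regret <= ||u||^2/(2*eta) + eta*G^2*T/2,
# which is O(G(||u||^2 + 1)*sqrt(T)) for this tuning.
bound = np.linalg.norm(u) ** 2 / (2 * eta) + eta * G ** 2 * T / 2
```

With this fixed tuning the bound degrades when $\|u\|$ is large or unknown; the paper's contribution is achieving comparable polynomial guarantees for every $u$ simultaneously without knowing $G$ or $\|u\|$ in advance.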
