We provide algorithms for online convex optimization with $G$-Lipschitz losses that guarantee low regret against any comparison point $u$, without prior knowledge of either $G$ or $\|u\|$. Previous algorithms dispense with the additional penalty term at the expense of knowledge of one or both of these parameters, while a lower bound shows that some penalty term beyond the optimal $G\|u\|\sqrt{T}$ rate is necessary. Previous penalties were exponential, while our bounds are polynomial in all quantities. Further, given a known bound $D$ on the comparison point's norm, our same techniques allow us to design algorithms that adapt optimally to the unknown value of $\|u\|$ without requiring knowledge of $G$.
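For context, a short sketch of the standard online convex optimization setup these guarantees refer to; the notation ($w_t$, $\ell_t$, $R_T$) is the conventional one and is not taken verbatim from this abstract:

```latex
% Background sketch (standard OCO setup, assumed notation):
% at each round t = 1, ..., T the learner plays w_t, then observes a convex,
% G-Lipschitz loss \ell_t. Regret against a fixed comparison point u is
R_T(u) \;=\; \sum_{t=1}^{T} \ell_t(w_t) \;-\; \sum_{t=1}^{T} \ell_t(u).
% When both G and a bound \|u\| \le D are known in advance, online gradient
% descent with step size \eta = D / (G\sqrt{T}) achieves R_T(u) \le D G \sqrt{T};
% this known-parameter rate is the benchmark that the parameter-free bounds
% above necessarily exceed by some additional penalty term.
```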