
Smoothed Online Optimization for Regression and Control

Gautam Goel
Adam Wierman
Abstract

We consider Online Convex Optimization (OCO) in the setting where the costs are m-strongly convex and the online learner pays a switching cost for changing decisions between rounds. We show that the recently proposed Online Balanced Descent (OBD) algorithm is constant-competitive in this setting, with competitive ratio 3 + O(1/m), irrespective of the ambient dimension. Additionally, we show that when the sequence of cost functions is ε-smooth, OBD has near-optimal dynamic regret and maintains strong per-round accuracy. We demonstrate the generality of our approach by showing that the OBD framework can be used to construct competitive algorithms for a variety of online problems across learning and control, including online variants of ridge regression, logistic regression, maximum likelihood estimation, and LQR control.
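For intuition, below is a minimal sketch of an OBD-style step, not the paper's exact algorithm: it assumes quadratic hitting costs f_t(x) = (m/2)||x − v_t||², a squared ℓ2 switching cost, and a hypothetical balance parameter beta; the helpers project_level_set and obd_step are illustrative names, and the exact balance condition and constants in the paper may differ.

```python
import numpy as np

def project_level_set(y, v, m, level):
    """Project y onto the sub-level set {x : (m/2)||x - v||^2 <= level}.

    For quadratic hitting costs this set is a ball around v, so the
    projection has a closed form.
    """
    d = y - v
    dist = np.linalg.norm(d)
    r = np.sqrt(2.0 * level / m)  # radius of the level set
    if dist <= r:
        return y.copy()
    return v + (r / dist) * d

def obd_step(x_prev, v, m, beta=1.0, tol=1e-9):
    """One OBD-style step for hitting cost f(x) = (m/2)||x - v||^2.

    Bisect on the level l so that the switching cost balances the
    hitting cost: (1/2)||x - x_prev||^2 == beta * l. As l grows the
    level set expands and the projection moves less, so the movement
    cost is decreasing in l and the crossing point is unique.
    """
    f_prev = 0.5 * m * np.linalg.norm(x_prev - v) ** 2
    lo, hi = 0.0, f_prev  # l = f_prev gives zero movement
    while hi - lo > tol:
        l = 0.5 * (lo + hi)
        x = project_level_set(x_prev, v, m, l)
        move = 0.5 * np.linalg.norm(x - x_prev) ** 2
        if move > beta * l:
            lo = l  # moved too far: raise the level
        else:
            hi = l  # movement too small: lower the level
    return project_level_set(x_prev, v, m, 0.5 * (lo + hi))

# Example: track a drifting minimizer in R^2, paying hitting plus
# switching cost each round.
x = np.zeros(2)
for target in [np.array([1.0, 0.0]), np.array([1.0, 1.0])]:
    x = obd_step(x, target, m=2.0)
```

For general m-strongly convex hitting costs the level-set projection has no closed form and would be computed numerically (e.g., with a convex solver); the quadratic case keeps the sketch self-contained.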
