Smoothed Online Optimization for Regression and Control

We consider Online Convex Optimization (OCO) in the setting where the costs are m-strongly convex and the online learner pays a switching cost for changing decisions between rounds. We show that the recently proposed Online Balanced Descent (OBD) algorithm is constant competitive in this setting, with competitive ratio 3 + O(1/m), irrespective of the ambient dimension. Additionally, we show that when the sequence of cost functions is smooth, OBD has near-optimal dynamic regret and maintains strong per-round accuracy. We demonstrate the generality of our approach by showing that the OBD framework can be used to construct competitive algorithms for a variety of online problems across learning and control, including online variants of ridge regression, logistic regression, maximum likelihood estimation, and LQR control.
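To make the setting concrete, below is a minimal sketch of an OBD-style update for the special case of quadratic hitting costs f_t(x) = (m/2)||x - v_t||^2 paired with a squared switching cost (1/2)||x - x_{t-1}||^2. The bisection-based "balance" rule (choosing the level beta so the switching cost of the projected point matches beta), the specific cost forms, and the function name obd_step are illustrative assumptions for this sketch, not the paper's exact algorithm or constants.

import numpy as np

def obd_step(x_prev, v, m):
    """One OBD-style step for the quadratic hitting cost
    f(x) = (m/2) * ||x - v||^2 with switching cost (1/2) * ||x - x_prev||^2.

    The level beta of the sub-level set {x : f(x) <= beta} is chosen by
    bisection so that the switching cost of the projected point roughly
    equals beta (a "balanced" step). Illustrative sketch only.
    """
    d = np.linalg.norm(v - x_prev)
    if d < 1e-12:                      # already at the minimizer of f
        return x_prev.copy()

    def project(beta):
        # Project x_prev onto {x : (m/2)||x - v||^2 <= beta}: the closest
        # point lies on the segment [x_prev, v], at radius sqrt(2*beta/m)
        # from v, or is x_prev itself if it already sits inside the level set.
        r = np.sqrt(2.0 * beta / m)
        if d <= r:
            return x_prev.copy()
        return v + (r / d) * (x_prev - v)

    def switch_cost(beta):
        return 0.5 * np.linalg.norm(project(beta) - x_prev) ** 2

    # switch_cost(beta) decreases in beta, so bisect for switch_cost(beta) = beta.
    lo, hi = 0.0, 0.5 * m * d ** 2     # hi = f(x_prev), where no movement is needed
    for _ in range(100):
        beta = 0.5 * (lo + hi)
        if switch_cost(beta) > beta:
            lo = beta
        else:
            hi = beta
    return project(0.5 * (lo + hi))

# Example: track a drifting minimizer with strongly convex costs.
rng = np.random.default_rng(0)
x = np.zeros(2)
for t in range(5):
    v_t = rng.standard_normal(2)
    x = obd_step(x, v_t, m=4.0)

For general convex costs, projecting onto a sub-level set is itself a small convex program, and the paper's analysis fixes the precise balance condition; the sketch above only illustrates the trade-off between hitting cost and switching cost that OBD balances.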