Second-Order Kernel Online Convex Optimization with Adaptive Sketching

Kernel online convex optimization (KOCO) is a framework combining the expressiveness of non-parametric kernel models with the regret guarantees of online learning. First-order KOCO methods such as functional gradient descent require only O(t) time and space per iteration, and, when the only information on the losses is their convexity, achieve a minimax optimal O(√T) regret. Nonetheless, many common losses in kernel problems, such as the squared loss, logistic loss, and squared hinge loss, possess stronger curvature that can be exploited. In this case, second-order KOCO methods achieve O(log(Det(K))) regret, which we show scales as O(d_eff log T), where d_eff is the effective dimension of the problem and is usually much smaller than O(√T). The main drawback of second-order methods is their much higher O(t²) space and time complexity. In this paper, we introduce kernel online Newton step (KONS), a new second-order KOCO method that also achieves O(d_eff log T) regret. To address the computational complexity of second-order methods, we introduce a new matrix sketching algorithm for the kernel matrix K_t, and show that for a chosen parameter γ ≤ 1 our Sketched-KONS reduces the space and time complexity by a factor of γ² to O(t²γ²) space and time per iteration, while incurring only 1/γ times more regret.
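The quantities in the bound can be made concrete with a small numerical sketch. The following is a minimal NumPy illustration, not the paper's implementation: the RBF kernel, the regularization α, the sketch budget γ·t, and uniform column sampling (standing in for the paper's adaptive sketch) are all assumptions chosen for the example. It computes the effective dimension d_eff(α) = Tr(K(K + αI)⁻¹), compares the log-det regret term with its d_eff·log t scaling, and shows the γ² space reduction from storing only an m = γ·t sized sketch of K.

```python
import numpy as np

def rbf_kernel(X, bandwidth=1.0):
    """Pairwise Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * bandwidth ** 2))

def effective_dimension(K, alpha):
    """d_eff(alpha) = Tr(K (K + alpha I)^{-1})."""
    t = K.shape[0]
    # Tr((K + alpha I)^{-1} K) equals Tr(K (K + alpha I)^{-1}) by cyclicity
    return np.trace(np.linalg.solve(K + alpha * np.eye(t), K))

rng = np.random.default_rng(0)
t, alpha = 500, 1.0          # illustrative horizon and regularization
X = rng.standard_normal((t, 5))
K = rbf_kernel(X)

# log-Det term from the second-order regret bound vs. its d_eff * log t proxy
logdet = np.linalg.slogdet(np.eye(t) + K / alpha)[1]
d_eff = effective_dimension(K, alpha)
print(f"log det = {logdet:.1f}, d_eff * log t = {d_eff * np.log(t):.1f}")

# Stand-in for the sketch: keep m = gamma * t landmark columns. The paper
# samples adaptively (by ridge leverage scores); uniform sampling here only
# illustrates the gamma^2 space factor, since (m/t)^2 = gamma^2.
gamma = 0.2
m = int(gamma * t)
cols = rng.choice(t, size=m, replace=False)
K_sketch = K[np.ix_(cols, cols)]
print(f"sketch stores {K_sketch.size / K.size:.2%} of the kernel entries")
```

On typical draws, d_eff is far smaller than t, which is exactly why the O(d_eff log T) second-order rate can beat the first-order O(√T) rate when the losses are curved.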