Online Learning in Dynamically Changing Environments

Abstract

We study the problem of online learning and online regret minimization when samples are drawn from a general unknown non-stationary process. We introduce the concept of a dynamically changing process with cost $K$, where the conditional marginals of the process can vary arbitrarily, but the number of distinct conditional marginals over $T$ rounds is bounded by $K$. For such processes we prove a tight (up to a $\sqrt{\log T}$ factor) bound of $O(\sqrt{KT\cdot\mathsf{VC}(\mathcal{H})\log T})$ on the expected worst-case regret of any class $\mathcal{H}$ of finite VC dimension under absolute loss (i.e., the expected misclassification loss). We then improve this bound for general mixable losses by establishing a tight (up to a $\log^3 T$ factor) regret bound of $O(K\cdot\mathsf{VC}(\mathcal{H})\log^3 T)$. We extend these results to general smooth adversary processes with an unknown reference measure by showing a sub-linear regret bound for $1$-dimensional threshold functions under a general bounded convex loss. Our results can be viewed as a first step towards regret analysis with non-stationary samples in the distribution-blind (universal) regime. This also brings a new viewpoint that shifts the study of the complexity of hypothesis classes to the study of the complexity of the processes generating data.
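As a rough sketch of the quantity being bounded (notation assumed here, not taken verbatim from the paper): writing $\hat{y}_t$ for the learner's prediction, $(x_t, y_t)$ for the example revealed at round $t$, and $\ell$ for the loss, the expected worst-case regret against the class $\mathcal{H}$ is

$$
\mathbb{E}\left[\sum_{t=1}^{T} \ell(\hat{y}_t, y_t) \;-\; \inf_{h \in \mathcal{H}} \sum_{t=1}^{T} \ell(h(x_t), y_t)\right]
\;=\; O\!\left(\sqrt{KT \cdot \mathsf{VC}(\mathcal{H}) \log T}\right)
$$

under absolute loss, with the bound improving to $O(K \cdot \mathsf{VC}(\mathcal{H}) \log^3 T)$ for mixable losses.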
