
Littlestone Classes are Privately Online Learnable

Abstract

We consider the problem of online classification under a privacy constraint. In this setting a learner sequentially observes a stream of labelled examples $(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration $t$ a hypothesis $h_t$ which is used to predict the label of each new example $x_t$. The learner's performance is measured by her regret against a known hypothesis class $\mathcal{H}$. We require that the algorithm satisfies the following privacy constraint: the sequence $h_1, \ldots, h_T$ of hypotheses output by the algorithm must be an $(\epsilon, \delta)$-differentially private function of the whole input sequence $(x_1, y_1), \ldots, (x_T, y_T)$. We provide the first non-trivial regret bound for the realizable setting. Specifically, we show that if the class $\mathcal{H}$ has constant Littlestone dimension then, given an oblivious sequence of labelled examples, there is a private learner that makes in expectation at most $O(\log T)$ mistakes, comparable to the optimal mistake bound in the non-private case up to a logarithmic factor. Moreover, for general values of the Littlestone dimension $d$, the same mistake bound holds but with a factor that is doubly exponential in $d$. A recent line of work has demonstrated a strong connection between classes that are online learnable and those that are differentially-private learnable. Our results strengthen this connection and show that an online learning algorithm can in fact be directly privatized (in the realizable setting). We also discuss an adaptive setting and provide a sublinear regret bound of $O(\sqrt{T})$.
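To make the online protocol concrete, the sketch below simulates realizable online classification with the classic (non-private) Halving algorithm, which achieves a $\log_2 |\mathcal{H}|$ mistake bound for a finite class. This is purely illustrative of the setting described above; it is not the paper's private learner, and the threshold class used here is a hypothetical example.

```python
import math

def halving_learner(hypotheses, stream):
    """Run the Halving algorithm on a realizable labelled stream.

    `hypotheses` is a list of functions x -> {0, 1}; `stream` is a sequence
    of (x_t, y_t) pairs labelled by some hypothesis in the list (realizability).
    At each round the learner predicts by majority vote over the current
    version space, then discards every hypothesis inconsistent with y_t.
    Each mistake at least halves the version space, so the total number of
    mistakes is at most log2(len(hypotheses)).
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes > len(version_space) else 0
        if prediction != y:
            mistakes += 1
        # keep only hypotheses consistent with the revealed label
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Illustrative class: thresholds on {0, ..., 8}, h_k(x) = 1 iff x >= k.
H = [(lambda x, k=k: int(x >= k)) for k in range(9)]
target = H[5]  # hidden labelling hypothesis (realizable setting)
stream = [(x, target(x)) for x in [3, 7, 0, 5, 6, 4, 1, 2]]
m = halving_learner(H, stream)
assert m <= math.floor(math.log2(len(H)))  # at most log2(9) ~ 3.17 mistakes
```

Note that the mistake bound holds for any realizable order of the examples, which mirrors the oblivious-adversary assumption in the abstract; the paper's contribution is obtaining a comparable guarantee while keeping the hypothesis sequence differentially private.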
