
High Dimensional Differentially Private Stochastic Optimization with Heavy-tailed Data

Abstract

As one of the most fundamental problems in machine learning, statistics and differential privacy, Differentially Private Stochastic Convex Optimization (DP-SCO) has been extensively studied in recent years. However, most previous work can only handle either regular data distributions or irregular data in the low-dimensional case. To better understand the challenges arising from irregular data distributions, in this paper we provide the first study of DP-SCO with heavy-tailed data in the high-dimensional setting. In the first part we focus on the problem over a polytope constraint (such as the $\ell_1$-norm ball). We show that if the loss function is smooth and its gradient has bounded second-order moment, it is possible to get a (high-probability) error bound (excess population risk) of $\tilde{O}(\frac{\log d}{(n\epsilon)^{1/3}})$ in the $\epsilon$-DP model, where $n$ is the sample size and $d$ is the dimensionality of the underlying space. Next, for LASSO, if the data distribution has bounded fourth-order moments, we improve the bound to $\tilde{O}(\frac{\log d}{(n\epsilon)^{2/5}})$ in the $(\epsilon, \delta)$-DP model. In the second part of the paper, we study sparse learning with heavy-tailed data. We first revisit the sparse linear model and propose a truncated DP-IHT method whose output achieves an error of $\tilde{O}(\frac{s^{*2}\log d}{n\epsilon})$, where $s^*$ is the sparsity of the underlying parameter. We then study a more general problem over the sparsity (i.e., $\ell_0$-norm) constraint, and show that it is possible to achieve an error of $\tilde{O}(\frac{s^{*3/2}\log d}{n\epsilon})$, which is near-optimal up to a factor of $\tilde{O}(\sqrt{s^*})$, if the loss function is smooth and strongly convex.
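To make the truncated DP-IHT idea concrete, here is a minimal sketch for sparse linear regression. It is not the paper's exact algorithm: the truncation level `tau`, noise scale `sigma`, step size `eta`, and the simple hard-thresholding step are placeholder assumptions, and a faithful implementation would follow the paper's private selection procedure and privacy accounting.

```python
# Illustrative sketch of a truncated DP iterative hard thresholding
# (DP-IHT) step for sparse linear regression. All hyperparameters below
# are placeholders, not the paper's calibrated choices.
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def truncated_dp_iht(X, y, s, T=50, eta=0.1, tau=1.0, sigma=1.0, rng=None):
    """Sketch of truncated DP-IHT.

    X: (n, d) design matrix; y: (n,) responses; s: target sparsity.
    tau: truncation level for per-sample gradients (controls the effect
         of heavy-tailed samples and bounds sensitivity).
    sigma: Gaussian noise multiplier; a real implementation would
           calibrate it to the desired (epsilon, delta) budget.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(T):
        # Per-sample gradients of the squared loss 0.5 * (x^T theta - y)^2.
        residuals = X @ theta - y          # shape (n,)
        grads = residuals[:, None] * X     # shape (n, d)
        # Truncate (clip) each sample's gradient to l2-norm at most tau,
        # so heavy-tailed samples cannot dominate the average.
        norms = np.maximum(np.linalg.norm(grads, axis=1), 1e-12)
        grads *= np.minimum(1.0, tau / norms)[:, None]
        # Average, privatize with Gaussian noise, take a gradient step,
        # then hard-threshold back to the sparsity constraint.
        noisy_grad = grads.mean(axis=0) + rng.normal(0.0, sigma * tau / n, d)
        theta = hard_threshold(theta - eta * noisy_grad, s)
    return theta
```

The design point this sketch highlights is the interplay of the two projections: truncation of per-sample gradients tames heavy tails and bounds sensitivity before noise is added, while hard thresholding keeps the iterate $s$-sparse so the noise only needs to be tolerated on a low-dimensional support.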
