Large-Scale Non-convex Stochastic Constrained Distributionally Robust Optimization

Abstract

Distributionally robust optimization (DRO) is a powerful framework for training models that are robust to data distribution shifts. This paper focuses on constrained DRO, which gives an explicit characterization of the robustness level. Existing studies on constrained DRO mostly focus on convex loss functions and exclude the practical and challenging case of non-convex loss functions, e.g., neural networks. This paper develops a stochastic algorithm for non-convex constrained DRO together with its performance analysis. The computational complexity of our stochastic algorithm at each iteration is independent of the overall dataset size, making it suitable for large-scale applications. We focus on uncertainty sets defined by the general Cressie-Read family of divergences, which includes the $\chi^2$-divergence as a special case. We prove that our algorithm finds an $\epsilon$-stationary point with a computational complexity of $\mathcal{O}(\epsilon^{-3k_*-5})$, where $k_*$ is the parameter of the Cressie-Read divergence. Numerical results indicate that our method outperforms existing methods. Our method also applies to the smoothed conditional value at risk (CVaR) DRO.
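As a hedged sketch (the exact formulation may differ from the paper's), the constrained DRO problem over a Cressie-Read divergence ball can be written as

$$
\min_{\theta \in \Theta} \ \sup_{Q:\, D_{f_k}(Q \,\|\, P) \le \rho} \ \mathbb{E}_{x \sim Q}\big[\ell(\theta; x)\big],
\qquad
f_k(t) = \frac{t^{k} - k t + k - 1}{k(k-1)},
$$

where $P$ is the training distribution, $\rho > 0$ sets the robustness level, $\ell(\theta; x)$ is a (possibly non-convex) loss, and $f_k$ is the standard Cressie-Read generator of the $f$-divergence $D_{f_k}$; taking $k = 2$ recovers (up to scaling) the $\chi^2$-divergence. The symbols $\theta$, $\rho$, and $\ell$ here are illustrative notation, not taken from the paper.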
