Solving Stochastic Optimization with Expectation Constraints Efficiently by a Stochastic Augmented Lagrangian-Type Algorithm

Abstract

This paper considers the problem of minimizing a convex expectation function subject to a set of convex expectation inequality constraints. We present a computable stochastic approximation-type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. This algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that this algorithm exhibits O(K^{-1/2}) expected convergence rates for both objective reduction and constraint violation if the parameters in the algorithm are properly chosen, where K denotes the number of iterations. Moreover, we show that, with high probability, the algorithm satisfies an O(\log(K)K^{-1/2}) constraint violation bound and an O(\log^{3/2}(K)K^{-1/2}) objective bound. Some preliminary numerical results demonstrate the performance of the proposed algorithm.
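To make the problem class concrete, the following is a minimal sketch of a generic stochastic augmented-Lagrangian-type update on a toy instance. It is not the authors' exact algorithm (the abstract gives no pseudocode); the instance, step sizes, and penalty parameter below are all illustrative assumptions. It minimizes a convex expectation objective under one expectation inequality constraint, drawing one sample per iteration (stochastic approximation), taking a gradient step on the augmented Lagrangian in the primal variable, and projecting the multiplier onto the nonnegative orthant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (illustrative, not from the paper):
#   minimize   F(x) = E[ ||x - xi||^2 ],   xi ~ N(mu, I)
#   subject to G(x) = a^T x - b <= 0
# Here the constrained minimizer is x* = (1, 0).
mu = np.array([2.0, 1.0])
a = np.array([1.0, 1.0])
b = 1.0

eta = 0.5    # base step size (hypothetical choice; the paper tunes
             # parameters to obtain the O(K^{-1/2}) rates)
rho = 1.0    # augmented-Lagrangian penalty parameter
K = 20000    # number of iterations

x = np.zeros(2)
lam = 0.0              # multiplier for the inequality constraint
x_avg = np.zeros(2)    # ergodic (averaged) iterate

for k in range(1, K + 1):
    xi = mu + rng.standard_normal(2)      # one sample per iteration
    grad_f = 2.0 * (x - xi)               # stochastic gradient of F at x
    g_val = a @ x - b                     # constraint value at x
    grad_g = a                            # constraint gradient
    step = eta / np.sqrt(k)               # diminishing step size
    # Primal step: gradient of the augmented Lagrangian,
    # with multiplier estimate max(lam + rho * g, 0)
    x = x - step * (grad_f + max(lam + rho * g_val, 0.0) * grad_g)
    # Dual step: multiplier ascent projected onto lam >= 0
    lam = max(lam + step * g_val, 0.0)
    x_avg += (x - x_avg) / k              # running average of iterates

print("averaged iterate:", x_avg)
print("constraint value a^T x - b:", a @ x_avg - b)
```

On this instance the averaged iterate approaches the constrained minimizer (1, 0), with the constraint violation shrinking as the multiplier settles near its optimal value.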
