Adaptive Stochastic Gradient Langevin Dynamics: Taming Convergence and Saddle Point Escape Time

Abstract

In this paper, we propose a new adaptive stochastic gradient Langevin dynamics (ASGLD) algorithmic framework and two specialized variants, adaptive stochastic gradient (ASG) and adaptive gradient Langevin dynamics (AGLD), for non-convex optimization problems. All proposed algorithms escape saddle points in at most $O(\log d)$ iterations, which is nearly dimension-free. Further, we show that ASGLD and ASG converge to a local minimum in at most $O(\log d/\epsilon^4)$ iterations. Moreover, ASGLD with full gradients, or with a slowly, linearly increasing batch size, converges to a local minimum within $O(\log d/\epsilon^2)$ iterations, which outperforms existing first-order methods.
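Since the abstract does not spell out the update rule, the minimal Python sketch below only illustrates the two ingredients the method's name points to: an adaptive per-coordinate preconditioner and injected Langevin noise. The function name, the RMSProp-style second-moment estimate, and all hyperparameters are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def asgld_step(x, stochastic_grad, v, lr=1e-3, beta=0.9,
               noise_scale=1e-2, eps=1e-8, rng=np.random.default_rng()):
    """One illustrative adaptive SGLD step (a sketch, not the paper's rule).

    Combines an adaptive second-moment preconditioner with injected
    Gaussian (Langevin) noise: the noise helps escape saddle points,
    while the adaptivity sets per-coordinate step sizes.
    """
    g = stochastic_grad(x)                 # mini-batch gradient estimate
    v = beta * v + (1.0 - beta) * g ** 2   # running second-moment estimate
    precond = 1.0 / (np.sqrt(v) + eps)     # adaptive per-coordinate scaling
    noise = rng.standard_normal(x.shape)   # isotropic Langevin noise
    x = x - lr * precond * g + noise_scale * np.sqrt(lr) * noise
    return x, v
```

Setting noise_scale to zero reduces this sketch to a plain adaptive stochastic gradient step, which loosely corresponds to the ASG variant named in the abstract, while keeping the noise term gives the Langevin-dynamics behavior.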
