Stochastic gradient methods have enabled variational inference for high-dimensional models. However, the steepest ascent direction in the parameter space of a statistical model is given by the natural gradient, which premultiplies the widely used Euclidean gradient by the inverse Fisher information matrix. Use of natural gradients can improve convergence, but inverting the Fisher information matrix is daunting in high dimensions. In Gaussian variational approximation, natural gradient updates of the mean and precision of the normal distribution can be derived analytically, but do not ensure that the precision matrix remains positive definite. To tackle this issue, we consider the Cholesky decomposition of the covariance or precision matrix and derive analytic natural gradient updates of the Cholesky factor, which depend on either the first or second derivative of the log posterior density. Efficient natural gradient updates of the Cholesky factor are also derived under sparsity constraints representing different posterior correlation structures. As Adam's adaptive learning rate does not work well with natural gradients, we propose stochastic normalized natural gradient ascent with momentum. The efficiency of the proposed methods is demonstrated using logistic regression and generalized linear mixed models.
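To make the two central ideas in the abstract concrete, the sketch below illustrates a single natural-gradient ascent step with momentum and gradient normalization. This is a minimal illustration, not the paper's algorithm: the exact ordering of the momentum and normalization steps, and all names and hyperparameters (step_size, beta, eps), are assumptions for exposition only.

```python
# Minimal sketch (not the authors' implementation) of a normalized
# natural-gradient ascent step with momentum on variational parameters.
import numpy as np

def normalized_natural_gradient_step(lam, g, F, m,
                                     step_size=0.01, beta=0.9, eps=1e-8):
    """One illustrative ascent step.

    lam : current variational parameters (1-D array)
    g   : Euclidean gradient of the variational objective at lam
    F   : Fisher information matrix at lam (assumed positive definite)
    m   : momentum buffer, same shape as lam
    """
    nat_grad = np.linalg.solve(F, g)           # natural gradient: F^{-1} g
    m = beta * m + (1.0 - beta) * nat_grad     # momentum on the natural gradient
    direction = m / (np.linalg.norm(m) + eps)  # normalize instead of Adam-style scaling
    lam = lam + step_size * direction          # ascent update
    return lam, m
```

The solve with the full Fisher matrix is shown only for clarity; the paper's contribution is to avoid this cost via analytic updates of the Cholesky factor, possibly under sparsity constraints.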