Analytic natural gradient updates for Cholesky factor in Gaussian variational approximation

Natural gradients can significantly improve convergence in stochastic variational inference, but inverting the Fisher information matrix is daunting in high dimensions. Moreover, in Gaussian variational approximation, natural gradient updates of the precision matrix do not ensure that it remains positive definite. To address these issues, we derive analytic natural gradient updates for the Cholesky factor of the covariance or precision matrix, and consider sparsity constraints that represent different posterior correlation structures. Stochastic normalized natural gradient ascent with momentum is proposed for implementation in generalized linear mixed models and deep neural networks.
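To make the moving parts concrete, below is a minimal NumPy sketch of Gaussian variational approximation with a Cholesky-parameterized covariance, fitted by normalized stochastic gradient ascent with momentum. Everything in it is illustrative: the toy target, step sizes, and variable names are assumptions; the mean update uses the standard identity that the natural gradient of a Gaussian mean is Sigma times the Euclidean gradient; and the paper's analytic natural gradient update for the Cholesky factor is not reproduced here, so a plain Euclidean gradient stands in for it.

import numpy as np

rng = np.random.default_rng(0)
d = 5

# Toy target (an assumption for illustration): standard normal log density
# up to a constant, so grad log p(z) = -z.
def grad_log_target(z):
    return -z

mu = np.full(d, 2.0)            # variational mean, started away from the optimum
C = 0.5 * np.eye(d)             # lower-triangular Cholesky factor of the covariance
m_mu, m_C = np.zeros(d), np.zeros((d, d))   # momentum buffers
lr, beta, tiny = 0.1, 0.9, 1e-8

for t in range(2000):
    # Reparameterized draw z = mu + C eps gives unbiased stochastic ELBO gradients.
    eps = rng.standard_normal(d)
    z = mu + C @ eps
    g = grad_log_target(z)

    # Euclidean ELBO gradients; the entropy term log|C| contributes 1/diag(C).
    grad_mu = g
    grad_C = np.tril(np.outer(g, eps))
    grad_C[np.diag_indices(d)] += 1.0 / np.diag(C)

    # Natural gradient for the mean block: Sigma grad_mu = C C^T grad_mu.
    # (The paper's analytic Cholesky-factor update would replace grad_C here;
    # the Euclidean gradient is only a stand-in.)
    nat_mu = C @ (C.T @ grad_mu)

    # Normalized ascent with momentum: a fixed, slowly decaying step length
    # per block keeps updates scale-free and stabilizes early iterations.
    step = lr / np.sqrt(1.0 + t)
    m_mu = beta * m_mu + (1 - beta) * nat_mu
    m_C = beta * m_C + (1 - beta) * grad_C
    mu += step * m_mu / (np.linalg.norm(m_mu) + tiny)
    C += step * m_C / (np.linalg.norm(m_C) + tiny)

    # Positive definiteness of C C^T only requires diag(C) > 0; crude safeguard.
    C[np.diag_indices(d)] = np.maximum(np.diag(C), 1e-6)

print("fitted mean:", np.round(mu, 2))               # approaches 0
print("fitted Cholesky diagonal:", np.round(np.diag(C), 2))  # approaches 1

Note the design point the sketch illustrates: parameterizing the covariance through its Cholesky factor makes positive definiteness a constraint on the diagonal alone, which is what makes direct natural gradient updates of the factor attractive compared with updating the precision matrix itself.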