SVRG and Beyond via Posterior Correction

Nico Daheim
Thomas Möllenhoff
Ming Liang Ang
Mohammad Emtiyaz Khan
Main: 10 pages · 7 figures · 1 table · Bibliography: 4 pages · Appendix: 5 pages
Abstract

Stochastic Variance Reduced Gradient (SVRG) and its variants aim to speed up training by using gradient corrections, but have seen limited success in deep learning. Here, we show surprising new foundational connections of SVRG to a recently proposed Bayesian method called posterior correction. Specifically, we show that SVRG is recovered as a special case of posterior correction over the isotropic-Gaussian family, while novel extensions are automatically obtained by using more flexible exponential families. We derive two new SVRG variants by using Gaussian families: first, a Newton-like variant that employs novel Hessian corrections, and second, an Adam-like extension that improves pretraining and finetuning of Transformer language models. This is the first work to connect SVRG to Bayes and use it to boost variational training for deep networks.
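For context on the gradient corrections mentioned above, the sketch below shows the classical SVRG update (the standard algorithm, not the paper's posterior-correction derivation); the function names `grad_i` and `full_grad` are illustrative placeholders.

```python
import numpy as np

def svrg_epoch(w, grad_i, full_grad, n, lr=0.1, num_steps=100, rng=None):
    """One SVRG outer iteration: snapshot the iterate, compute the full
    gradient once, then take variance-reduced stochastic steps.

    grad_i(w, i)  -- gradient of the i-th loss term at w (assumed callable)
    full_grad(w)  -- gradient of the full objective at w (assumed callable)
    n             -- number of loss terms
    """
    rng = rng or np.random.default_rng()
    w_snap = w.copy()              # snapshot point
    mu = full_grad(w_snap)         # full gradient at the snapshot
    for _ in range(num_steps):
        i = rng.integers(n)
        # variance-reduced gradient: stochastic gradient plus correction term
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w = w - lr * g
    return w
```

The correction term `- grad_i(w_snap, i) + mu` keeps the stochastic gradient unbiased while shrinking its variance as the iterate approaches the snapshot; the paper's contribution is to recover and generalize this correction from a Bayesian posterior-correction viewpoint.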
