On the Powerball Method

Abstract
We propose a new method to accelerate the convergence of optimization algorithms. This method applies a power coefficient to the gradient during optimization. We call this the Powerball method, after the well-known Heavy-ball method \cite{heavyball}. We prove an iteration-complexity bound under which the Powerball method reaches a prescribed accuracy for strongly convex functions. We also demonstrate that the Powerball method provides a multi-fold speed-up of the convergence of both gradient descent and L-BFGS on multiple real datasets.
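The core idea can be sketched as follows: instead of stepping along the raw gradient, each coordinate of the gradient is passed through an elementwise power transform before the update. A minimal Python sketch, assuming the transform sign(g)|g|^γ with a hypothetical step size `alpha` and power `gamma` (the concrete values and convergence constants are not given here):

```python
import numpy as np

def powerball_gd(grad, x0, alpha=0.2, gamma=0.5, iters=200):
    """Gradient descent with an elementwise power transform of the gradient.

    Sketch of the Powerball-style update x <- x - alpha * sign(g) * |g|**gamma;
    gamma = 1 recovers plain gradient descent. The defaults for alpha, gamma,
    and iters are illustrative, not taken from the paper.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        x = x - alpha * np.sign(g) * np.abs(g) ** gamma
    return x

# Example: minimize the strongly convex quadratic f(x) = ||x||^2 / 2,
# whose gradient is simply x.
x_star = powerball_gd(lambda x: x, np.array([5.0, -3.0]))
```

With 0 < γ < 1 the transform enlarges small gradient components and shrinks large ones, which is the intuition behind the reported acceleration; the same transform can be applied inside quasi-Newton schemes such as L-BFGS.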