On the Powerball Method

Abstract
We propose a new method to accelerate the convergence of optimization algorithms. The method simply applies a power coefficient to the gradient during optimization; we call it the Powerball method. We analyze its convergence rate for strongly convex functions and show that it converges faster than gradient descent and Newton's method in the initial iterations. We also demonstrate that the Powerball method provides a -fold speed-up in the convergence of both gradient descent and L-BFGS on multiple real datasets.
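As a minimal sketch of the idea described above, the update below applies a power transform to the gradient before taking a descent step. The abstract does not give the exact transform, so the elementwise signed power sign(g)|g|^gamma with gamma in (0, 1), the function name, and all parameter values are assumptions for illustration; gamma = 1 recovers plain gradient descent.

```python
import numpy as np

def powerball_gradient_descent(f_grad, x0, lr=0.01, gamma=0.5, n_iters=100):
    """Gradient descent with an assumed Powerball-style transform of the gradient.

    Assumes the transform is the elementwise signed power sign(g) * |g|**gamma
    with gamma in (0, 1); gamma = 1 reduces to ordinary gradient descent.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = f_grad(x)
        # Apply the assumed power transform elementwise, then step.
        x = x - lr * np.sign(g) * np.abs(g) ** gamma
    return x

# Usage on a simple strongly convex quadratic f(x) = 0.5 * ||x||^2.
if __name__ == "__main__":
    grad = lambda x: x  # gradient of 0.5 * ||x||^2
    x_final = powerball_gradient_descent(grad, x0=np.ones(5), lr=0.1, gamma=0.6)
    print(x_final)
```

In this sketch, a smaller gamma amplifies small gradient components and damps large ones, which is one plausible reading of how a power coefficient could speed up the early iterations.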