Stochastic gradient descent on Riemannian manifolds
IEEE Transactions on Automatic Control (TAC), 2011
Silvere Bonnabel
Abstract
Stochastic gradient descent is a simple approach for finding the local minima of a function whose evaluations are corrupted by noise. In this paper, motivated mostly by machine learning applications, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidean case, the descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and we show that several well-known algorithms can be cast in our versatile geometric framework. We also address the gain tuning issue in connection with the tools of the recent theory of symmetry-preserving observers.
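To make the idea concrete, here is a minimal sketch (not the paper's pseudocode) of a Riemannian stochastic gradient step on the unit sphere: the noisy Euclidean gradient is projected onto the tangent space at the current iterate, a step is taken, and the iterate is retracted back onto the manifold by normalization (a first-order approximation of the exponential map). The function names, the step-size schedule, and the leading-eigenvector example below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def riemannian_sgd_sphere(sample_grad, x0, steps=2000, gamma0=0.5):
    """Sketch of Riemannian SGD on the unit sphere S^{n-1}.

    sample_grad(x) returns a noisy Euclidean gradient of the cost at x.
    """
    x = x0 / np.linalg.norm(x0)
    for t in range(steps):
        g = sample_grad(x)                 # noisy Euclidean gradient
        g_tan = g - np.dot(x, g) * x       # project onto tangent space at x
        gamma = gamma0 / (1.0 + t)         # Robbins-Monro-type step sizes
        x = x - gamma * g_tan              # gradient step in the tangent space
        x = x / np.linalg.norm(x)          # retract back onto the sphere
    return x

# Illustrative use: estimate the leading eigenvector of a covariance C from
# noisy samples z with E[z z^T] = C, by minimizing -(z^T x)^2 on the sphere
# (an Oja-type iteration); the stochastic Euclidean gradient is -2 (z^T x) z.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
C = A @ A.T
L = np.linalg.cholesky(C)

def sample_grad(x):
    z = L @ rng.standard_normal(5)         # sample with covariance C
    return -2.0 * np.dot(z, x) * z

x_hat = riemannian_sgd_sphere(sample_grad, np.ones(5))
```

With a decreasing step size of this kind, the iterates converge (up to sign) toward the dominant eigenvector, illustrating the convergence-to-a-critical-point behavior the abstract describes.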
