ResearchTrend.AI
Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize

2 June 2021
Alain Durmus
Eric Moulines
A. Naumov
S. Samsonov
Kevin Scaman
Hoi-To Wai
Abstract

This paper provides a non-asymptotic analysis of linear stochastic approximation (LSA) algorithms with fixed stepsize. This family of methods arises in many machine learning tasks and is used to obtain approximate solutions of a linear system $\bar{A}\theta = \bar{b}$ for which $\bar{A}$ and $\bar{b}$ can only be accessed through random estimates $\{({\bf A}_n, {\bf b}_n): n \in \mathbb{N}^*\}$. Our analysis is based on new results regarding moments and high probability bounds for products of matrices, which are shown to be tight. We derive high probability bounds on the performance of LSA under weaker conditions on the sequence $\{({\bf A}_n, {\bf b}_n): n \in \mathbb{N}^*\}$ than previous works. In contrast with prior results, however, we establish polynomial concentration bounds whose order depends on the stepsize. We show that our conclusions cannot be improved without additional assumptions on the sequence of random matrices $\{{\bf A}_n: n \in \mathbb{N}^*\}$; in particular, no Gaussian or exponential high probability bounds can hold. Finally, we pay particular attention to establishing bounds with sharp order with respect to the number of iterations and the stepsize, whose leading terms contain the covariance matrices appearing in the central limit theorems.
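The abstract does not write out the LSA recursion itself. As an illustration only, a minimal sketch of a fixed-stepsize LSA iteration for solving $\bar{A}\theta = \bar{b}$ from noisy estimates $({\bf A}_n, {\bf b}_n)$ might look as follows; the i.i.d. Gaussian perturbation model, the stepsize, and the dimensions are assumptions for the demo, not conditions taken from the paper (which works under weaker assumptions on the sequence):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 3
# Construct a well-conditioned target system (an assumption for this sketch).
M = rng.standard_normal((d, d))
A_bar = M @ M.T + np.eye(d)
b_bar = rng.standard_normal(d)
theta_star = np.linalg.solve(A_bar, b_bar)  # exact solution of A_bar theta = b_bar

gamma = 0.01        # fixed stepsize
theta = np.zeros(d)
for n in range(20_000):
    # Random estimates (A_n, b_n): here i.i.d. Gaussian perturbations of (A_bar, b_bar).
    A_n = A_bar + 0.1 * rng.standard_normal((d, d))
    b_n = b_bar + 0.1 * rng.standard_normal(d)
    # Fixed-stepsize LSA update: theta_{n+1} = theta_n - gamma * (A_n theta_n - b_n)
    theta = theta - gamma * (A_n @ theta - b_n)

err = np.linalg.norm(theta - theta_star)
print(f"final error: {err:.4f}")
```

With a fixed stepsize the iterates do not converge to $\theta^\star$ but fluctuate in a stepsize-dependent neighborhood of it, which is the regime the paper's high probability bounds characterize.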
