Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model

15 June 2020
Raphael Berthier
Francis R. Bach
Pierre Gaillard
arXiv:2006.08212
Abstract

In the context of statistical supervised learning, the noiseless linear model assumes that there exists a deterministic linear relation $Y = \langle \theta_*, X \rangle$ between the random output $Y$ and the random feature vector $X = \Phi(U)$, a potentially non-linear transformation of the inputs $U$. We analyze the convergence of single-pass, fixed step-size stochastic gradient descent on the least-squares risk under this model. The convergence of the iterates to the optimum $\theta_*$ and the decay of the generalization error follow polynomial convergence rates with exponents that both depend on the regularities of the optimum $\theta_*$ and of the feature vectors $\Phi(u)$. We interpret our result in the reproducing kernel Hilbert space framework. As a special case, we analyze an online algorithm for estimating a real function on the unit interval from noiseless observations of its value at randomly sampled points; the convergence depends on the Sobolev smoothness of the function and of a chosen kernel. Finally, we apply our analysis beyond the supervised learning setting to obtain convergence rates for the averaging process (a.k.a. gossip algorithm) on a graph, depending on its spectral dimension.
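
As a rough illustration of the special case mentioned in the abstract, the sketch below runs single-pass, fixed step-size SGD in an RKHS to estimate a real function on the unit interval from noiseless observations at uniformly sampled points. This is a minimal sketch, not the paper's experiment: the Gaussian kernel, step size, target function, and sample size are illustrative assumptions, and the paper's results concern how the convergence rate depends on the Sobolev smoothness of the target and of the chosen kernel, which this toy run does not attempt to reproduce.

```python
# Minimal sketch: single-pass, fixed step-size kernel SGD on the least-squares
# risk with noiseless observations y = f_star(u). Kernel, step size, target
# function, and sample size are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def kernel(u, v, bandwidth=0.1):
    """Gaussian kernel on [0, 1] (an arbitrary illustrative choice)."""
    return np.exp(-((u - v) ** 2) / (2 * bandwidth ** 2))

def f_star(u):
    """Target function; observations of its values are noiseless."""
    return np.sin(2 * np.pi * u)

n_steps = 2000
step_size = 0.5              # fixed step size gamma
points = np.empty(n_steps)   # sampled inputs u_1, ..., u_n
coefs = np.zeros(n_steps)    # kernel-expansion coefficients of the SGD iterate

for n in range(n_steps):
    u = rng.uniform(0.0, 1.0)
    y = f_star(u)  # noiseless observation of the target at u
    # Current prediction <theta_{n-1}, Phi(u)> in the kernel representation.
    pred = coefs[:n] @ kernel(points[:n], u) if n > 0 else 0.0
    # SGD step on the least-squares loss:
    # theta_n = theta_{n-1} - gamma * (pred - y) * Phi(u_n),
    # i.e. append u_n to the expansion with coefficient -gamma * (pred - y).
    points[n] = u
    coefs[n] = -step_size * (pred - y)

# Generalization error of the final iterate, estimated on a grid.
grid = np.linspace(0.0, 1.0, 200)
preds = kernel(points[:, None], grid[None, :]).T @ coefs
print("mean squared error:", np.mean((preds - f_star(grid)) ** 2))
```

Each sample is used exactly once (single pass), and the step size stays constant throughout, matching the algorithmic setting analyzed in the paper; only the decay of the test error with the number of samples is what the paper's rates describe.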
