Solving Kernel Ridge Regression with Gradient-Based Optimization Methods

29 June 2023
Oskar Allerbo
arXiv: 2306.16838
Abstract

Kernel ridge regression, KRR, is a generalization of linear ridge regression that is non-linear in the data but linear in the parameters. Here, we introduce an equivalent formulation of the KRR objective function, which opens the door both to using penalties other than the ridge penalty and to studying kernel ridge regression from the perspective of gradient descent. Using a continuous-time perspective, we derive a closed-form solution for solving kernel regression with gradient descent, which we refer to as kernel gradient flow, KGF, and theoretically bound the differences between KRR and KGF, where, for the latter, regularization is obtained through early stopping. We also generalize KRR by replacing the ridge penalty with the ℓ1 and ℓ∞ penalties, respectively, and use the fact that, analogously to the similarities between KGF and KRR, ℓ1 regularization and forward stagewise regression (also known as coordinate descent), and ℓ∞ regularization and sign gradient descent, follow similar solution paths. We can thus alleviate the need for computationally heavy algorithms based on proximal gradient descent. We show theoretically and empirically how the ℓ1 and ℓ∞ penalties, and the corresponding gradient-based optimization algorithms, produce sparse and robust kernel regression solutions, respectively.
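
A minimal numerical sketch (not the paper's code) of the KRR/KGF relationship described above. It assumes the standard spectral picture in which KRR shrinks the fitted values along the eigendirections of the kernel matrix by factors d/(d + λ), while gradient flow with early stopping shrinks them by factors 1 − exp(−t·d), and it pairs the stopping time with the penalty via t = 1/λ; the paper's exact parameterization and bounds may differ.

```python
# Sketch: compare KRR fits to gradient-flow/early-stopping ("KGF"-style) fits
# on the same kernel matrix. Kernel, penalty, and the pairing t = 1/lambda are
# illustrative assumptions, not values taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(60)

# Gaussian (RBF) kernel matrix on the training inputs.
K = np.exp(-0.5 * (X - X.T) ** 2)

lam = 0.1        # ridge penalty (illustrative)
t = 1.0 / lam    # heuristic stopping time paired with the penalty (assumption)

# KRR fitted values on the training points: K (K + lam I)^{-1} y.
yhat_krr = K @ np.linalg.solve(K + lam * np.eye(len(y)), y)

# Gradient-flow fitted values via the eigendecomposition of K: each
# eigencomponent of y is shrunk by 1 - exp(-t d) instead of d / (d + lam).
d, U = np.linalg.eigh(K)
yhat_kgf = U @ ((1.0 - np.exp(-t * d)) * (U.T @ y))

print("max |KRR fit - KGF fit|:", float(np.abs(yhat_krr - yhat_kgf).max()))
```

The two shrinkage profiles agree for both large eigenvalues (both factors approach 1) and small ones (both behave like t·d), which is the intuition behind bounding the difference between KRR and KGF.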

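The pairing of the ℓ1 and ℓ∞ penalties with gradient-based updates can likewise be illustrated with a toy sketch (again, not the paper's algorithms): greedy coordinate-wise updates in the spirit of forward stagewise regression leave most coefficients at exactly zero when stopped early, whereas sign gradient descent moves every coefficient by the same fixed step, mirroring an ℓ∞-type budget. Step sizes, step counts, and the kernel below are illustrative assumptions.

```python
# Toy illustration of the two update rules named in the abstract: greedy
# coordinate-wise (forward-stagewise-style) updates versus sign gradient
# descent, both applied to the squared loss 0.5 * ||y - K alpha||^2.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(60)
K = np.exp(-0.5 * (X - X.T) ** 2)   # Gaussian kernel (illustrative choice)

def grad(alpha):
    """Gradient of 0.5 * ||y - K alpha||^2 with respect to alpha."""
    return -K @ (y - K @ alpha)

# Coordinate-wise updates: nudge only the coordinate with the largest
# absolute gradient by a small fixed amount; early stopping keeps the
# coefficient vector sparse.
alpha_cd = np.zeros(len(y))
for _ in range(50):
    g = grad(alpha_cd)
    j = np.argmax(np.abs(g))
    alpha_cd[j] -= 0.05 * np.sign(g[j])

# Sign gradient descent: every coordinate moves by the same small step,
# so all coefficients stay within the same overall step budget.
alpha_sgd = np.zeros(len(y))
for _ in range(50):
    alpha_sgd -= 0.05 * np.sign(grad(alpha_sgd))

print("non-zero coefficients, coordinate-wise:", np.count_nonzero(alpha_cd))
print("non-zero coefficients, sign GD:        ", np.count_nonzero(alpha_sgd))
```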