Learning rates of $l^q$ coefficient regularization learning with Gaussian kernel

19 December 2013
Shaobo Lin
Jinshan Zeng
Jian Fang
Zongben Xu
arXiv:1312.5465
Abstract

Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and $l^q$ regularization schemes with $0<q<\infty$ are among the most widely used. It is known that different values of $q$ lead to estimators with different properties: for example, $l^2$ regularization yields smooth estimators, while $l^1$ regularization yields sparse ones. How, then, do the generalization capabilities of $l^q$ regularization learning vary with $q$? In this paper, we study this problem in the framework of statistical learning theory and show that implementing $l^q$ coefficient regularization schemes in the sample-dependent hypothesis space associated with the Gaussian kernel attains the same almost optimal learning rates for all $0<q<\infty$; that is, the upper and lower bounds of the learning rates for $l^q$ regularization learning are asymptotically identical for all $0<q<\infty$. Our finding tentatively reveals that, in some modeling contexts, the choice of $q$ might not have a strong impact on the generalization capability. From this perspective, $q$ can be specified arbitrarily, or chosen according to criteria other than generalization, such as smoothness, computational complexity, or sparsity.
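For concreteness, the $l^q$ coefficient regularization scheme described in the abstract is usually written as follows. This is a sketch in the standard notation of the coefficient-regularization literature; the sample $\{(x_i,y_i)\}_{i=1}^m$, kernel width $\sigma$, and regularization parameter $\lambda$ are assumed notation, not quoted from the paper:

$$f_{\mathbf{z}} = \sum_{j=1}^{m} a_j^{\mathbf{z}}\, K_\sigma(x_j,\cdot), \qquad \mathbf{a}^{\mathbf{z}} = \arg\min_{\mathbf{a}\in\mathbb{R}^m}\ \frac{1}{m}\sum_{i=1}^{m}\Big(y_i-\sum_{j=1}^{m} a_j K_\sigma(x_j,x_i)\Big)^2 + \lambda\sum_{j=1}^{m}|a_j|^q,$$

where $K_\sigma(x,x') = \exp\big(-\|x-x'\|^2/\sigma^2\big)$ is the Gaussian kernel. The hypothesis space is sample-dependent because the expansion centers are the sample points themselves.

The objective can also be minimized numerically. Below is a minimal sketch assuming a generic derivative-free solver, since the penalty is non-smooth for $q\le 1$ and non-convex for $q<1$; the function names and data are illustrative and not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X1, X2, sigma):
    # K(x, x') = exp(-||x - x'||^2 / sigma^2), computed pairwise
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma**2)

def lq_coefficient_regression(X, y, q, lam=1e-3, sigma=1.0):
    """Fit f(x) = sum_j a_j * K_sigma(x_j, x) by minimizing the
    empirical squared loss plus lam * sum_j |a_j|^q."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)

    def objective(a):
        resid = y - K @ a
        return resid @ resid / m + lam * np.sum(np.abs(a) ** q)

    # Powell is derivative-free, so it tolerates the non-smooth
    # (and, for q < 1, non-convex) penalty; a sketch, not a tuned solver.
    return minimize(objective, np.zeros(m), method="Powell").x

# Usage: noisy sine regression with q = 1/2
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
coeffs = lq_coefficient_regression(X, y, q=0.5)
```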
