
Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness

2 March 2012
Taiji Suzuki
Masashi Sugiyama
arXiv:1203.0565
Abstract

We investigate the learning rate of multiple kernel learning (MKL) with $\ell_1$ and elastic-net regularizations. The elastic-net regularization is a composition of an $\ell_1$-regularizer for inducing sparsity and an $\ell_2$-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of nonzero components of the ground truth is relatively small, and show sharper convergence rates than previously established for both $\ell_1$ and elastic-net regularizations. Our analysis reveals relations between the choice of regularization function and the resulting performance. If the ground truth is smooth, the elastic-net regularization achieves a faster convergence rate under milder conditions than $\ell_1$-regularization; otherwise, the $\ell_1$-regularization achieves the faster convergence rate.
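For concreteness, here is a plausible form of the elastic-net MKL estimator, consistent with the abstract's description of composing an $\ell_1$- and an $\ell_2$-regularizer; the paper's exact loss and notation may differ. Assume the predictor decomposes as $f = \sum_{m=1}^M f_m$ with each component $f_m$ in a reproducing kernel Hilbert space $H_m$, and let $\lambda_1, \lambda_2 \ge 0$ be (assumed) regularization parameters:

$$
\hat{f} = \operatorname*{arg\,min}_{f_m \in H_m}\; \frac{1}{n}\sum_{i=1}^{n}\Big(y_i - \sum_{m=1}^{M} f_m(x_i)\Big)^2 + \lambda_1 \sum_{m=1}^{M} \|f_m\|_{H_m} + \lambda_2 \sum_{m=1}^{M} \|f_m\|_{H_m}^2 .
$$

Setting $\lambda_2 = 0$ recovers pure $\ell_1$-MKL (block sparsity only); a positive $\lambda_2$ adds the $\ell_2$ smoothness control that the abstract trades off against sparsity.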
