
arXiv:1507.00500
Non-convex Regularizations for Feature Selection in Ranking With Sparse SVM

2 July 2015
Léa Laporte
Rémi Flamary
S. Canu
Sébastien Déjean
Josiane Mothe
Abstract

Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have focused on integrating feature selection into the learning process. In this work, we propose a general framework for feature selection in learning to rank using SVM with a sparse regularization term. We investigate both classical convex regularizations, such as ℓ1 or weighted ℓ1, and non-convex regularization terms, such as the log penalty, the Minimax Concave Penalty (MCP), or the ℓp pseudo-norm with p < 1. Two algorithms are proposed: first, an accelerated proximal approach for solving the convex problems; second, a reweighted ℓ1 scheme to address the non-convex regularizations. We conduct extensive experiments on nine datasets from the Letor 3.0 and Letor 4.0 corpora. Numerical results show that the non-convex regularizations we propose lead to sparser models while preserving prediction performance. The number of features is decreased by up to a factor of six compared to the ℓ1 regularization. In addition, the software is publicly available on the web.
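For context, one common parameterization of the penalties named in the abstract is given below; the paper's exact definitions may differ in scaling or notation, so treat these as reference forms rather than the authors' own.

```latex
% One common parameterization (lambda > 0, epsilon > 0, gamma > 1, 0 < p < 1);
% beta_j >= 0 are fixed per-feature weights for the weighted l1 term.
\begin{align*}
\Omega_{\ell_1}(w)           &= \lambda \sum\nolimits_j |w_j| \\
\Omega_{\mathrm{w}\ell_1}(w) &= \lambda \sum\nolimits_j \beta_j |w_j| \\
\Omega_{\log}(w)             &= \lambda \sum\nolimits_j \log(\varepsilon + |w_j|) \\
\Omega_{\mathrm{MCP}}(w)     &= \sum\nolimits_j
  \begin{cases}
    \lambda |w_j| - \frac{w_j^2}{2\gamma} & \text{if } |w_j| \le \gamma\lambda,\\[2pt]
    \frac{\gamma\lambda^2}{2}             & \text{otherwise,}
  \end{cases} \\
\Omega_{\ell_p}(w)           &= \lambda \sum\nolimits_j |w_j|^p .
\end{align*}
```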
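As a rough illustration of the second algorithm, the sketch below implements a reweighted ℓ1 scheme with an accelerated proximal (FISTA-style) inner solver. It is not the paper's implementation: a squared loss on pairwise difference vectors stands in for the ranking SVM loss, the log penalty is chosen among the non-convex terms, and all names, the toy data, and the parameter values (lam, eps, n_outer) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of a (possibly weighted) l1 term:
    # elementwise sign(w) * max(|w| - t, 0); t may be a vector.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def fista_weighted_l1(X, y, lam, beta, w0, n_iter=500):
    # Accelerated proximal gradient (FISTA) for
    #   min_w 0.5 * ||X w - y||^2 + lam * sum_j beta_j * |w_j|
    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w, z, t = w0.copy(), w0.copy(), 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y)
        w_next = soft_threshold(z - lr * grad, lr * lam * beta)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_next + ((t - 1.0) / t_next) * (w_next - w)
        w, t = w_next, t_next
    return w

def reweighted_l1(X, y, lam, eps=1e-3, n_outer=5):
    # Reweighted l1 scheme for the log penalty sum_j log(eps + |w_j|):
    # each outer iteration solves a weighted l1 problem whose weights
    # 1 / (eps + |w_j|) are the penalty's derivative at the current iterate.
    w = np.zeros(X.shape[1])
    beta = np.ones(X.shape[1])
    for _ in range(n_outer):
        w = fista_weighted_l1(X, y, lam, beta, w)
        beta = 1.0 / (eps + np.abs(w))
    return w

# Toy usage on pairwise ranking data: each row is a difference x_i - x_j
# of feature vectors where item i should be ranked above item j, so the
# regression target is +1 for every pair.
rng = np.random.default_rng(0)
X_pairs = rng.standard_normal((200, 30))
y_pairs = np.ones(200)
w = reweighted_l1(X_pairs, y_pairs, lam=5.0)
print("selected features:", np.flatnonzero(np.abs(w) > 1e-6))
```

The outer loop mirrors the usual reweighted ℓ1 idea: a non-convex penalty is handled by solving a sequence of convex weighted ℓ1 problems, each linearizing the penalty around the previous solution, so small coefficients get penalized more and more heavily and are driven exactly to zero.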
