
Lower Bounds for BMRM and Faster Rates for Training SVMs

7 September 2009
Ankan Saha, Xinhua Zhang (NICTA)
Abstract

Regularized risk minimization with the binary hinge loss and its variants lies at the heart of many machine learning problems. Bundle methods for regularized risk minimization (BMRM) and the closely related SVMStruct are considered the best general-purpose solvers for this problem. It was recently shown that BMRM requires $O(1/\epsilon)$ iterations to converge to an $\epsilon$-accurate solution. In the first part of the paper we use the Hadamard matrix to construct a regularized risk minimization problem and show that these rates cannot be improved. We then show how one can exploit the structure of the objective function to devise an algorithm for the binary hinge loss which converges to an $\epsilon$-accurate solution in $O(1/\sqrt{\epsilon})$ iterations.
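
For readers who want a concrete picture of the method whose rate is being lower-bounded, below is a minimal sketch of a BMRM-style cutting-plane iteration for the binary hinge loss, assuming the standard objective $J(w) = \frac{\lambda}{2}\|w\|^2 + \frac{1}{m}\sum_i \max(0, 1 - y_i \langle w, x_i \rangle)$. All names, the toy data, and the SLSQP-based master solver are illustrative choices, not the paper's implementation.

import numpy as np
from scipy.optimize import minimize

def hinge_risk(w, X, y):
    # Empirical binary hinge risk and one subgradient at w.
    margins = y * (X @ w)
    viol = margins < 1.0
    risk = np.maximum(0.0, 1.0 - margins).mean()
    grad = -(X[viol].T @ y[viol]) / len(y)
    return risk, grad

def bmrm(X, y, lam=0.1, eps=1e-3, max_iter=200):
    # Each stored plane (a_t, b_t) underestimates the risk:
    # R_emp(v) >= a_t . v + b_t for all v.
    w = np.zeros(X.shape[1])
    planes_a, planes_b = [], []
    best_primal = np.inf
    for _ in range(max_iter):
        risk, a = hinge_risk(w, X, y)
        best_primal = min(best_primal, 0.5 * lam * (w @ w) + risk)
        planes_a.append(a)
        planes_b.append(risk - a @ w)
        A, b = np.array(planes_a), np.array(planes_b)

        # Master problem min_w lam/2 ||w||^2 + max_t (a_t . w + b_t),
        # solved through its dual over the probability simplex.
        def neg_dual(alpha):
            v = A.T @ alpha
            return (v @ v) / (2.0 * lam) - b @ alpha

        n = len(planes_b)
        res = minimize(neg_dual, np.full(n, 1.0 / n), method="SLSQP",
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq",
                                     "fun": lambda al: al.sum() - 1.0}])
        w = -(A.T @ res.x) / lam
        # Stop when the best primal value is within eps of the
        # model's lower bound (the dual optimum).
        if best_primal - (-res.fun) <= eps:
            break
    return w

# Toy usage on nearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ np.ones(5) + 0.1)
w = bmrm(X, y)

The abstract's faster $O(1/\sqrt{\epsilon})$ rate comes from exploiting the structure of the hinge-loss objective, which this plain cutting-plane sketch does not attempt; the sketch only illustrates the $O(1/\epsilon)$ baseline scheme the lower bound applies to.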
