Do Subsampled Newton Methods Work for High-Dimensional Data?

13 February 2019
Xiang Li
Shusen Wang
Zhihua Zhang
Abstract

Subsampled Newton methods approximate Hessian matrices through subsampling techniques, alleviating the cost of forming Hessian matrices while retaining sufficient curvature information. However, previous results require $\Omega(d)$ samples to approximate Hessians, where $d$ is the dimension of the data points, making them impractical for high-dimensional data. The situation deteriorates when $d$ is comparable to the number of data points $n$: the whole dataset must then be taken into account, rendering subsampling useless. This paper theoretically justifies the effectiveness of subsampled Newton methods on high-dimensional data. Specifically, we prove that only $\widetilde{\Theta}(d^\gamma_{\rm eff})$ samples are needed to approximate Hessian matrices, where $d^\gamma_{\rm eff}$ is the $\gamma$-ridge leverage and can be much smaller than $d$ as long as $n\gamma \gg 1$. Additionally, we extend this result so that subsampled Newton methods can work for high-dimensional data on both distributed optimization problems and non-smooth regularized problems.
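To make the idea concrete, below is a minimal sketch of one subsampled Newton step for $\gamma$-ridge-regularized logistic regression. It is illustrative only and not the paper's algorithm: it uses uniform subsampling and a direct linear solve for simplicity, whereas the paper's theory ties the sample size $s$ to the $\gamma$-ridge effective dimension $d^\gamma_{\rm eff}$. All function and variable names here are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subsampled_newton_step(w, X, y, gamma, s, rng):
    """One subsampled Newton step for gamma-ridge-regularized logistic
    regression with labels y in {-1, +1}.

    The full gradient is computed exactly; only the Hessian is
    approximated from s uniformly sampled rows of X. (Illustrative
    sketch: the paper's analysis concerns sample sizes on the order
    of the gamma-ridge effective dimension.)
    """
    n, d = X.shape
    # Exact gradient of (1/n) sum_i log(1 + exp(-y_i x_i^T w)) + (gamma/2)||w||^2
    margins = y * (X @ w)
    g = -(X.T @ (y * sigmoid(-margins))) / n + gamma * w
    # Subsampled Hessian: average curvature over s sampled data points
    idx = rng.choice(n, size=s, replace=False)
    Xs = X[idx]
    p = sigmoid(Xs @ w)
    D = p * (1.0 - p)                          # per-sample curvature weights
    H = (Xs.T * D) @ Xs / s + gamma * np.eye(d)
    # Newton update using the approximate Hessian
    return w - np.linalg.solve(H, g)

# Hypothetical usage on synthetic data:
rng = np.random.default_rng(0)
n, d = 10_000, 500
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))
w = np.zeros(d)
for _ in range(20):
    w = subsampled_newton_step(w, X, y, gamma=1e-2, s=1_000, rng=rng)
```

Note the standard design choice reflected here: the gradient is exact while only the Hessian is subsampled, since forming the $d \times d$ curvature matrix dominates the per-iteration cost that subsampling is meant to reduce.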
