Online Partial Least Square Optimization: Dropping Convexity for Better Efficiency

27 February 2017 · arXiv:1702.08134
Zhehui Chen, Lin F. Yang, C. J. Li, T. Zhao
Abstract

Multiview representation learning is widely used for latent factor analysis. It arises naturally in many data analysis, machine learning, and information retrieval applications as a way to model dependency structures among multiple data sources. For computational convenience, existing approaches usually formulate multiview representation learning as a convex optimization problem, whose global optimum can be obtained by certain algorithms in polynomial time. However, considerable empirical evidence shows that heuristic nonconvex approaches also achieve good computational performance and converge to the global optima, although theoretical justification has been lacking. This gap between theory and practice motivates us to study a nonconvex formulation for multiview representation learning, which can be solved efficiently by two stochastic gradient descent (SGD) methods. Theoretically, by analyzing the dynamics of the algorithms through diffusion processes, we establish global rates of convergence to the global optima with high probability. Numerical experiments are provided to support our theory.
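To give a concrete sense of the kind of nonconvex, streaming formulation the abstract refers to, the Python sketch below runs a simple projected stochastic gradient (Oja-style) update for the rank-one online PLS problem max over unit vectors u, v of u^T E[x y^T] v. This is only an illustrative sketch under that assumption, not the paper's specific algorithms or analysis; the function name, step size, and synthetic data are all hypothetical choices made for the example.

import numpy as np

def online_pls_leading_pair(stream, dx, dy, lr=0.01, seed=0):
    """Illustrative sketch: stochastic projected gradient updates for the
    leading singular-vector pair (u, v) of the cross-covariance E[x y^T],
    i.e. the rank-one online PLS problem max_{||u||=||v||=1} u^T E[x y^T] v.
    `stream` yields paired samples (x, y); hyperparameters are assumptions."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=dx); u /= np.linalg.norm(u)
    v = rng.normal(size=dy); v /= np.linalg.norm(v)
    for x, y in stream:
        # Stochastic gradient of (u . x)(y . v) w.r.t. u and v,
        # followed by projection back onto the unit sphere.
        gu = x * float(y @ v)
        gv = y * float(x @ u)
        u = u + lr * gu; u /= np.linalg.norm(u)
        v = v + lr * gv; v /= np.linalg.norm(v)
    return u, v

if __name__ == "__main__":
    # Synthetic two-view data with a planted shared direction (a, b).
    rng = np.random.default_rng(1)
    dx, dy, n = 20, 15, 20000
    a = rng.normal(size=dx); a /= np.linalg.norm(a)
    b = rng.normal(size=dy); b /= np.linalg.norm(b)
    z = rng.normal(size=n)
    X = np.outer(z, a) + 0.3 * rng.normal(size=(n, dx))
    Y = np.outer(z, b) + 0.3 * rng.normal(size=(n, dy))
    u, v = online_pls_leading_pair(zip(X, Y), dx, dy)
    print("alignment with planted directions:", abs(u @ a), abs(v @ b))

On such data the estimates typically align closely with the planted directions, which is the behavior the nonconvex SGD viewpoint is meant to capture; the paper's contribution is the theoretical convergence-rate analysis of this type of scheme via diffusion approximations.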
