Riemannian Manifold Optimization for Discriminant Subspace Learning

20 January 2021
Wanguang Yin
Zhengming Ma
Quanying Liu
Abstract

Linear discriminant analysis (LDA) is a widely used machine learning algorithm for extracting a low-dimensional representation of high-dimensional data; it finds an orthogonal discriminant projection subspace using the Fisher discriminant criterion. However, traditional Euclidean-based methods for solving LDA tend to converge to spurious local minima and rarely reach an optimal solution. To address this problem, we propose a novel algorithm, Riemannian-based discriminant analysis (RDA), for subspace learning. To obtain an explicit solution, we transfer the traditional Euclidean-based formulation to a Riemannian manifold and use the trust-region method to learn the discriminant projection subspace. We compare the proposed algorithm with existing variants of LDA, as well as with unsupervised tensor decomposition methods, on image classification tasks. The numerical results suggest that RDA achieves state-of-the-art classification accuracy.
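The idea sketched in the abstract is to maximize the Fisher criterion directly over orthonormal projection matrices, i.e. over points of the Stiefel manifold, rather than solving the Euclidean generalized eigenvalue problem. The NumPy sketch below is a minimal illustration of that setup, not the authors' implementation: it optimizes the trace-ratio Fisher criterion on the Stiefel manifold with plain Riemannian gradient ascent (tangent-space projection plus a QR retraction) in place of the trust-region solver used in the paper, and all function and variable names are our own.

import numpy as np

def scatter_matrices(X, y):
    # Between-class (Sb) and within-class (Sw) scatter of data X (n x d) with labels y.
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def riemannian_lda(X, y, k, steps=500, lr=1e-2, seed=0):
    # Maximize the trace-ratio Fisher criterion tr(W^T Sb W) / tr(W^T Sw W)
    # over the Stiefel manifold {W in R^{d x k} : W^T W = I_k}.
    Sb, Sw = scatter_matrices(X, y)
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal start
    for _ in range(steps):
        a = np.trace(W.T @ Sb @ W)
        b = np.trace(W.T @ Sw @ W)
        # Euclidean gradient of the trace ratio a/b
        egrad = (2.0 / b) * (Sb @ W - (a / b) * Sw @ W)
        # Project onto the tangent space of the Stiefel manifold at W
        sym = 0.5 * (W.T @ egrad + egrad.T @ W)
        rgrad = egrad - W @ sym
        # Retraction: step along the Riemannian gradient, re-orthonormalize via QR
        W, _ = np.linalg.qr(W + lr * rgrad)
    return W  # d x k orthonormal discriminant projection

# Usage: project the data and feed it to any classifier, e.g.
#   W = riemannian_lda(X_train, y_train, k=2)
#   Z_train, Z_test = X_train @ W, X_test @ W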
