Joint Representation Classification for Collective Face Recognition

18 May 2015
Liping Wang, Songcan Chen
    CVBM
Abstract

Sparse representation based classification (SRC) is popular in many applications such as face recognition and is implemented in two steps: representation coding and classification. For a given set of testing images, SRC codes every image over the base images as a sparse representation and then assigns it to the class with the least representation error. This scheme classifies such a set of images using individual representations rather than a collective one, and thereby ignores the correlation among the given images. In this paper, a joint representation classification (JRC) for collective face recognition is proposed. JRC takes the correlation among multiple images, as well as each single representation, into account. Under the assumption that the given face images are generally related to each other, JRC codes all the testing images over the base images simultaneously to facilitate recognition. To this end, the testing inputs are aligned into a matrix and the joint representation coding is formulated as a generalized $l_{2,q}$-$l_{2,p}$-minimization problem. To uniformly solve the induced optimization problems for any $q\in[1,2]$ and $p\in(0,2]$, an iterative quadratic method (IQM) is developed. IQM is proved to be a strictly descending algorithm that converges to the optimal solution. Moreover, a more practical IQM is proposed for the large-scale case. Experimental results on three public databases show that JRC with the practical IQM not only saves much computational cost but also achieves better collective face recognition performance than state-of-the-art methods.
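The abstract does not spell out the objective, but a joint coding problem of this kind commonly takes the form $\min_C \|XC-Y\|_{2,q}^q + \lambda\|C\|_{2,p}^p$, where the columns of $X$ are the base images, the columns of $Y$ are the testing images aligned into a matrix, and $\|\cdot\|_{2,r}$ sums the $l_2$ norms of the matrix rows raised to the power $r$. The NumPy sketch below solves that assumed formulation by iteratively reweighted least squares, one natural reading of an "iterative quadratic method": each iteration fixes row weights and solves a quadratic subproblem in closed form. The function names, the smoothing constant eps, and the least-representation-error classifier are illustrative choices, not the paper's implementation.

import numpy as np

def jrc_iqm(X, Y, lam=0.1, q=1.0, p=1.0, n_iter=50, eps=1e-8):
    # X: m x n matrix of base images (columns); Y: m x t matrix of test images.
    # Returns C (n x t) approximately minimizing
    #   ||X C - Y||_{2,q}^q + lam * ||C||_{2,p}^p   (row-wise l_{2,r} norms).
    C = np.linalg.lstsq(X, Y, rcond=None)[0]          # least-squares start
    for _ in range(n_iter):
        R = X @ C - Y                                  # joint residual matrix
        # Row weights of the eps-smoothed l_{2,q} loss and l_{2,p} regularizer;
        # with these fixed, the objective is majorized by a quadratic.
        e = (q / 2) * (np.sum(R ** 2, axis=1) + eps) ** (q / 2 - 1)
        d = (p / 2) * (np.sum(C ** 2, axis=1) + eps) ** (p / 2 - 1)
        # Closed-form solution of the weighted quadratic subproblem:
        #   (X^T E X + lam * D) C = X^T E Y,  E = diag(e), D = diag(d).
        A = X.T @ (e[:, None] * X) + lam * np.diag(d)
        C = np.linalg.solve(A, X.T @ (e[:, None] * Y))
    return C

def classify(X, Y, C, labels):
    # SRC-style decision: assign each test image to the class whose base
    # images reconstruct it with the least representation error.
    classes = np.unique(labels)
    preds = []
    for j in range(Y.shape[1]):
        errs = [np.linalg.norm(Y[:, j] - X[:, labels == k] @ C[labels == k, j])
                for k in classes]
        preds.append(classes[int(np.argmin(errs))])
    return np.array(preds)

Setting q = p = 1 gives a row-sparse ($l_{2,1}$) joint coding, while q = 2 reduces the loss to a plain Frobenius fit; the smoothing constant eps only guards against division by zero in the weights.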
