ResearchTrend.AI

Discriminative Subspace Emersion from learning feature relevances across different populations

31 March 2025
Marco Canducci
Lida Abdi
Alessandro Prete
Roland J. Veen
Michael Biehl
Wiebke Arlt
Peter Tiño
Abstract

In a given classification task, the accuracy of the learner is often hampered by the finiteness of the training set, the high dimensionality of the feature space, and severe overlap between classes. In the context of interpretable learners with (piecewise) linear separation boundaries, these issues can be mitigated by careful construction of the optimization procedure and/or estimation of the features relevant to the task. However, when the task is shared across two disjoint populations, the main interest shifts towards estimating the set of features that discriminate most between the two populations when performing classification. We propose a new Discriminative Subspace Emersion (DSE) method that extends subspace learning toward a general relevance learning framework. DSE identifies the features most relevant to distinguishing the classification task across two populations, even in cases of high overlap between classes. The proposed methodology is designed to work with multiple sets of labels and is derived, in principle, without being tied to a specific choice of base learner. Theoretical and empirical investigations over synthetic and real-world datasets indicate that DSE accurately identifies a common subspace for classification across different populations, and this holds even for a surprisingly high degree of overlap between classes.
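The abstract does not spell out the DSE procedure itself, so the following is only a loose, hypothetical sketch of the underlying idea it describes: learn per-feature relevances for the same classification task in each of two populations, then look for the features relevant in both. All names, the relevance proxy (absolute weights of a simple logistic regression), the threshold, and the synthetic data are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def relevances(X, y, epochs=200, lr=0.1):
    # Fit a plain logistic regression by gradient descent and use the
    # normalized absolute weights as a crude per-feature relevance proxy.
    # (Illustrative stand-in; DSE's actual relevance estimation differs.)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    r = np.abs(w)
    return r / r.sum()

def make_population(n, informative):
    # Synthetic 3-feature data: class-1 samples are shifted along the
    # listed informative features; the rest is pure noise.
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 3))
    for j in informative:
        X[:, j] += 2.0 * y
    return X, y.astype(float)

# Population A discriminates via features 0 and 2; population B only via 0,
# so feature 0 is the (hypothetical) common discriminative subspace.
XA, yA = make_population(500, informative=[0, 2])
XB, yB = make_population(500, informative=[0])

rA, rB = relevances(XA, yA), relevances(XB, yB)

# Features with substantial relevance in BOTH populations approximate the
# shared subspace; in this synthetic setup feature 0 should dominate.
common = np.flatnonzero(np.minimum(rA, rB) > 0.2)
```

The elementwise minimum keeps only features that carry weight in both populations, which is one simple way to read "features that discriminate across two populations"; the 0.2 cutoff is an arbitrary choice for this toy example.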

@article{canducci2025_2504.00176,
  title={Discriminative Subspace Emersion from learning feature relevances across different populations},
  author={Marco Canducci and Lida Abdi and Alessandro Prete and Roland J. Veen and Michael Biehl and Wiebke Arlt and Peter Tino},
  journal={arXiv preprint arXiv:2504.00176},
  year={2025}
}