
Deriving discriminative classifiers from generative models

3 January 2022
E. Azeraf, E. Monfrini, W. Pieczynski
arXiv:2201.00844
Abstract

We deal with Bayesian generative and discriminative classifiers. Given a model distribution p(x, y), with observation y and target x, one computes generative classifiers by first considering p(x, y) and then using Bayes' rule to calculate p(x | y). A discriminative model is directly given by p(x | y), which is used to compute discriminative classifiers. However, recent works showed that the Bayesian Maximum Posterior classifier defined from the Naive Bayes (NB) or Hidden Markov Chain (HMC), both generative models, can also match the discriminative classifier definition. Thus, there are situations in which dividing classifiers into "generative" and "discriminative" is somewhat misleading. Indeed, such a distinction relates to the way classifiers are computed, not to the classifiers themselves. We present a general theoretical result specifying how a generative classifier induced from a generative model can also be computed in a discriminative way from the same model. The NB and HMC examples are recovered as particular cases, and we apply the general result to two original extensions of NB and two extensions of HMC, one of which is original. Finally, we briefly illustrate the interest of this new discriminative way of computing classifiers in the Natural Language Processing (NLP) framework.
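To make the abstract's central point concrete, consider the NB special case. With n conditionally independent observations y_1, ..., y_n, the posterior induced by the generative model, p(x | y_1, ..., y_n) ∝ p(x) ∏_t p(y_t | x), can be rewritten purely in terms of the discriminative quantities p(x | y_t): substituting p(y_t | x) = p(x | y_t) p(y_t) / p(x) and dropping factors that do not depend on x gives p(x | y_1, ..., y_n) ∝ p(x)^(1 - n) ∏_t p(x | y_t). The sketch below is a toy numerical check of this identity, not code from the paper; the model sizes and random parameters are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_obs, n_vals = 3, 5, 4  # hypothetical toy NB model sizes

prior = rng.dirichlet(np.ones(n_classes))                      # p(x)
lik = rng.dirichlet(np.ones(n_vals), size=(n_classes, n_obs))  # p(y_t | x)
y = rng.integers(0, n_vals, size=n_obs)                        # one observed sequence

# Generative route: build the joint p(x, y) and normalize via Bayes' rule.
joint = prior * np.prod(lik[:, np.arange(n_obs), y], axis=1)
post_gen = joint / joint.sum()

# Discriminative route: use only the per-observation posteriors p(x | y_t).
post_t = prior[:, None] * lik[:, np.arange(n_obs), y]
post_t /= post_t.sum(axis=0)                  # p(x | y_t), shape (n_classes, n_obs)
unnorm = prior ** (1 - n_obs) * np.prod(post_t, axis=1)
post_disc = unnorm / unnorm.sum()

print(np.allclose(post_gen, post_disc))  # True: both routes give the same posterior
```

Once the p(x | y_t) terms are available, the second route never touches the likelihoods p(y_t | x), which is what makes it a discriminative way of computing a classifier induced from a generative model.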
