Biased Heritage: How Datasets Shape Models in Facial Expression Recognition

5 March 2025
Iris Dominguez-Catena, Daniel Paternain, Mikel Galar, MaryBeth Defrance, Maarten Buyl, Tijl De Bie
Abstract

In recent years, the rapid development of artificial intelligence (AI) systems has raised concerns about our ability to ensure their fairness, that is, how to avoid discrimination based on protected characteristics such as gender, race, or age. While algorithmic fairness is well-studied in simple binary classification tasks on tabular data, its application to complex, real-world scenarios such as Facial Expression Recognition (FER) remains underexplored. FER presents unique challenges: it is inherently multiclass, and biases emerge across intersecting demographic variables, each potentially comprising multiple protected groups. We present a comprehensive framework to analyze bias propagation from datasets to trained models in image-based FER systems, while introducing new bias metrics specifically designed for multiclass problems with multiple demographic groups. Our methodology studies bias propagation by (1) inducing controlled biases in FER datasets, (2) training models on these biased datasets, and (3) analyzing the correlation between dataset bias metrics and model fairness notions. Our findings reveal that stereotypical biases propagate more strongly to model predictions than representational biases, suggesting that preventing emotion-specific demographic patterns should be prioritized over general demographic balance in FER datasets. Additionally, we observe that biased datasets lead to reduced model accuracy, challenging the assumed fairness-accuracy trade-off.
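
The sketch below illustrates, in broad strokes, the kind of bias-propagation analysis the abstract describes: a controlled stereotypical bias is induced in a labeled dataset, a model's predictions on the biased subset are (here trivially) simulated, and the correlation between a dataset-level bias metric and a model-level fairness gap is measured. All function names, the synthetic data, and the specific metrics are illustrative assumptions; they are not the authors' implementation or the metrics introduced in the paper.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def induce_stereotypical_bias(labels, groups, target_class, target_group, strength, rng):
    # Keep all samples of (target_class, target_group); drop other samples
    # with probability `strength`, creating an emotion-specific demographic skew.
    outside = ~((labels == target_class) & (groups == target_group))
    drop = rng.random(len(labels)) < strength
    return np.where(~(outside & drop))[0]

def representation_disparity(groups):
    # Toy dataset-level bias metric: spread of demographic group proportions.
    _, counts = np.unique(groups, return_counts=True)
    props = counts / counts.sum()
    return props.max() - props.min()

def max_recall_gap(y_true, y_pred, groups):
    # Toy model-level fairness metric: largest per-class recall gap across groups.
    gaps = []
    for c in np.unique(y_true):
        recalls = [
            (y_pred[(y_true == c) & (groups == g)] == c).mean()
            for g in np.unique(groups)
            if ((y_true == c) & (groups == g)).any()
        ]
        gaps.append(max(recalls) - min(recalls))
    return max(gaps)

# Synthetic stand-in for a FER dataset: 7 emotion classes, 2 demographic groups.
n = 5000
labels = rng.integers(0, 7, size=n)
groups = rng.integers(0, 2, size=n)

dataset_bias, model_unfairness = [], []
for strength in np.linspace(0.0, 0.8, 9):
    idx = induce_stereotypical_bias(labels, groups, target_class=3,
                                    target_group=1, strength=strength, rng=rng)
    y_true, g = labels[idx], groups[idx]

    # Stand-in for "train a model on the biased subset and evaluate it":
    # predictions degrade for the under-represented group as the bias grows.
    y_pred = y_true.copy()
    flip = (g == 0) & (rng.random(len(idx)) < 0.3 * strength)
    y_pred[flip] = rng.integers(0, 7, size=int(flip.sum()))

    dataset_bias.append(representation_disparity(g))
    model_unfairness.append(max_recall_gap(y_true, y_pred, g))

r, p = pearsonr(dataset_bias, model_unfairness)
print(f"dataset bias vs. model fairness gap: Pearson r={r:.2f} (p={p:.3f})")

In this toy setup both quantities grow with the induced bias strength, so the correlation is strongly positive; the paper's actual analysis uses real FER datasets, trained models, and its own multiclass, multi-group bias metrics.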

@article{dominguez-catena2025_2503.03446,
  title={Biased Heritage: How Datasets Shape Models in Facial Expression Recognition},
  author={Iris Dominguez-Catena and Daniel Paternain and Mikel Galar and MaryBeth Defrance and Maarten Buyl and Tijl De Bie},
  journal={arXiv preprint arXiv:2503.03446},
  year={2025}
}