
Fairness Practices in Industry: A Case Study in Machine Learning Teams Building Recommender Systems

Abstract

The rapid proliferation of recommender systems necessitates robust fairness practices to address inherent biases. Assessing fairness, however, is challenging because metrics and best practices continue to evolve. This paper analyzes how industry practitioners perceive and incorporate these changing fairness standards into their workflows. Through semi-structured interviews with 11 practitioners on technical teams across a range of large technology companies, we investigate how fairness is implemented in recommender system products. We focus on current debiasing practices, applied metrics, collaborative strategies, and the integration of academic research into practice. Findings show a preference for multi-dimensional debiasing over traditional demographic methods, and a reliance on intuitive rather than academic metrics. This study also highlights the difficulties of balancing fairness with both practitioners' individual (bottom-up) roles and organizational (top-down) workplace constraints, including the interplay with legal and compliance experts. Finally, we offer actionable recommendations for the recommender system community and algorithmic fairness practitioners, underlining the need to continually refine fairness practices.

@article{yan2025_2505.19441,
  title={Fairness Practices in Industry: A Case Study in Machine Learning Teams Building Recommender Systems},
  author={Jing Nathan Yan and Junxiong Wang and Jeffrey M. Rzeszotarski and Allison Koenecke},
  journal={arXiv preprint arXiv:2505.19441},
  year={2025}
}