The debate around bias in AI systems is central to discussions on algorithmic fairness. However, the term "bias" often lacks a clear definition, even though it is frequently contrasted with fairness, implying that an unbiased model is inherently fair. In this paper, we challenge this assumption and argue that a precise conceptualization of bias is needed to effectively address fairness concerns. Rather than viewing bias as inherently negative or unfair, we highlight the importance of distinguishing between bias and discrimination. We further explore how this shift in focus can foster a more constructive discourse within academic debates on fairness in AI systems.
@article{lindloff2025_2502.18060,
  title={Defining bias in AI-systems: Biased models are fair models},
  author={Chiara Lindloff and Ingo Siegert},
  journal={arXiv preprint arXiv:2502.18060},
  year={2025}
}