Defining bias in AI-systems: Biased models are fair models

25 February 2025
Chiara Lindloff
Ingo Siegert
Abstract

The debate around bias in AI systems is central to discussions on algorithmic fairness. However, the term bias often lacks a clear definition, despite frequently being contrasted with fairness, implying that an unbiased model is inherently fair. In this paper, we challenge this assumption and argue that a precise conceptualization of bias is necessary to effectively address fairness concerns. Rather than viewing bias as inherently negative or unfair, we highlight the importance of distinguishing between bias and discrimination. We further explore how this shift in focus can foster a more constructive discourse within academic debates on fairness in AI systems.
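
The distinction the authors draw can be made concrete. As a purely illustrative sketch (not from the paper; the function and data here are hypothetical), the Python snippet below computes the demographic parity difference, one common way to quantify discriminatory outcomes. The point it illustrates is that discrimination is measured on a model's decisions across groups, separately from whether the model is statistically biased.

# Illustrative only: demographic parity difference between two groups.
# Synthetic decisions; not code or data from Lindloff and Siegert.
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-decision rates (assumes exactly two groups)."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

preds  = [1, 0, 1, 1, 0, 1, 0, 1]                 # hypothetical binary decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical group labels
print(demographic_parity_difference(preds, groups))  # 0.25

On this view, a model necessarily embeds statistical bias in order to learn anything at all; whether its decisions discriminate is a separate, measurable question of the kind sketched above.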

View on arXiv: https://arxiv.org/abs/2502.18060
@article{lindloff2025_2502.18060,
  title={Defining bias in AI-systems: Biased models are fair models},
  author={Chiara Lindloff and Ingo Siegert},
  journal={arXiv preprint arXiv:2502.18060},
  year={2025}
}