
Formalising Anti-Discrimination Law in Automated Decision Systems

Main: 8 pages
Bibliography: 5 pages
Appendix: 1 page
Abstract

Algorithmic discrimination is a critical concern as machine learning models are increasingly used for high-stakes decision-making in legally protected contexts. Although substantial research on algorithmic bias and discrimination has produced a range of fairness metrics, several critical legal issues remain unaddressed in practice. This paper addresses three key shortcomings in prevailing ML fairness paradigms: (1) the narrow reliance on prediction or outcome disparity as evidence of discrimination; (2) the lack of nuanced evaluation of estimation error, together with the common assumption that the true causal structure and data-generating process are known; and (3) the overwhelming dominance of US-based analyses, which has inadvertently fostered misconceptions about lawful modelling practices in other jurisdictions. To address these gaps, we introduce a novel decision-theoretic framework grounded in the anti-discrimination law of the United Kingdom, which has global influence and aligns closely with European and Commonwealth legal systems. We propose the "conditional estimation parity" metric, which accounts for estimation error and the underlying data-generating process, in line with UK legal standards. We apply our formalism to a real-world algorithmic discrimination case, demonstrating how technical and legal reasoning can be aligned to detect and mitigate unlawful discrimination. Our contributions offer actionable, legally grounded guidance for ML practitioners, policymakers, and legal scholars seeking to develop automated decision systems that are non-discriminatory and legally robust.
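
The abstract does not reproduce the metric's formal definition, but a minimal illustrative sketch follows, assuming "conditional estimation parity" compares a model's estimation error across protected groups within strata of legitimate covariates, rather than comparing raw outcome rates. All function and column names here are hypothetical, not taken from the paper.

```python
import pandas as pd

def conditional_estimation_gap(df, y_true, y_pred, group, strata):
    """Per-stratum gap in mean absolute estimation error between groups.

    df      : DataFrame holding outcomes, predictions, and attributes
    y_true  : column name of the true outcome
    y_pred  : column name of the model's estimate
    group   : column name of the protected attribute (binary here)
    strata  : column name of the legitimate conditioning covariate
    """
    # Absolute estimation error per individual.
    df = df.assign(err=(df[y_pred] - df[y_true]).abs())
    gaps = {}
    # Condition on the legitimate covariate, then compare error by group.
    for s, block in df.groupby(strata):
        by_group = block.groupby(group)["err"].mean()
        if by_group.size == 2:  # both protected groups observed in stratum
            gaps[s] = float(by_group.max() - by_group.min())
    return gaps

# Example (hypothetical column names):
# gaps = conditional_estimation_gap(data, "income", "score",
#                                   "sex", "occupation_band")
```

On this reading, a model could satisfy outcome parity yet still fail such a check if its estimates are systematically noisier for one protected group within the same stratum.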

@article{sargeant2025_2407.00400,
  title={Formalising Anti-Discrimination Law in Automated Decision Systems},
  author={Holli Sargeant and Måns Magnusson},
  journal={arXiv preprint arXiv:2407.00400},
  year={2025}
}