Controlled Model Debiasing through Minimal and Interpretable Updates

28 February 2025
Federico Di Gennaro
Thibault Laugel
Vincent Grari
Marcin Detyniecki
    FaML
Abstract

Traditional approaches to learning fair machine learning models often require rebuilding models from scratch, generally without accounting for previously existing models. In a context where models need to be retrained frequently, this can lead to inconsistent model updates, as well as redundant and costly validation testing. To address this limitation, we introduce the notion of controlled model debiasing, a novel supervised learning task relying on two desiderata: the differences between the new fair model and the existing one should be (i) interpretable and (ii) minimal. After providing theoretical guarantees for this new problem, we introduce a novel algorithm for algorithmic fairness, COMMOD, that is model-agnostic and does not require the sensitive attribute at test time. In addition, our algorithm is explicitly designed to enforce minimal and interpretable changes between biased and debiased predictions, a property that, while highly desirable in high-stakes applications, is rarely prioritized as an explicit objective in the fairness literature. Our approach combines a concept-based architecture with adversarial learning, and we demonstrate through empirical results that it achieves performance comparable to state-of-the-art debiasing methods while making minimal and interpretable changes to predictions.
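The abstract describes three ingredients: a frozen existing (biased) model, a concept-based correction to its predictions, and adversarial training against a predictor of the sensitive attribute, with changes kept minimal. The sketch below is not the authors' COMMOD implementation; it is a minimal PyTorch illustration of how those ingredients can be combined, and all names (CorrectionModule, Adversary, debias_step) and hyperparameters are hypothetical.

```python
# Illustrative sketch only: a frozen biased model plus a small concept-based
# correction, trained adversarially with a minimality penalty. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CorrectionModule(nn.Module):
    """Maps inputs to a few interpretable concept scores, then to a logit correction."""

    def __init__(self, n_features: int, n_concepts: int = 4):
        super().__init__()
        self.concepts = nn.Linear(n_features, n_concepts)  # concept scores
        self.head = nn.Linear(n_concepts, 1, bias=False)   # correction built from concepts

    def forward(self, x):
        c = torch.sigmoid(self.concepts(x))
        return self.head(c).squeeze(-1), c


class Adversary(nn.Module):
    """Tries to recover the sensitive attribute from the corrected logit."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, logit):
        return self.net(logit.unsqueeze(-1)).squeeze(-1)


def debias_step(biased_model, corr, adv, opt_corr, opt_adv, x, y, s,
                lam_fair=1.0, lam_min=0.1):
    """One alternating update: train the adversary, then the correction module."""
    with torch.no_grad():
        base_logit = biased_model(x).squeeze(-1)  # existing model stays frozen

    delta, _ = corr(x)
    new_logit = base_logit + delta

    # 1) Adversary learns to predict the sensitive attribute s from corrected logits.
    adv_loss = F.binary_cross_entropy_with_logits(adv(new_logit.detach()), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Correction keeps accuracy, fools the adversary, and stays minimal.
    task_loss = F.binary_cross_entropy_with_logits(new_logit, y)
    fair_loss = -F.binary_cross_entropy_with_logits(adv(new_logit), s)
    min_loss = delta.abs().mean()  # penalize large changes to the original predictions
    loss = task_loss + lam_fair * fair_loss + lam_min * min_loss
    opt_corr.zero_grad(); loss.backward(); opt_corr.step()
    return loss.item()
```

Because the correction is a linear map over a handful of concept activations, each changed prediction can be traced back to a small set of concepts, which is one way to read the paper's "interpretable and minimal updates" desiderata; the actual COMMOD architecture and objectives may differ.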

@article{gennaro2025_2502.21284,
  title={Controlled Model Debiasing through Minimal and Interpretable Updates},
  author={Federico Di Gennaro and Thibault Laugel and Vincent Grari and Marcin Detyniecki},
  journal={arXiv preprint arXiv:2502.21284},
  year={2025}
}