
Predictive Churn with the Set of Good Models

12 February 2024
J. Watson-Daniels
Flavio du Pin Calmon
Alexander D'Amour
Carol Xuan Long
David C. Parkes
Berk Ustun
arXiv (abs) · PDF · HTML
Main: 11 pages · 4 figures · 4 tables · Bibliography: 7 pages · Appendix: 8 pages
Abstract

Machine learning models in modern mass-market applications are often updated over time. A key challenge is that, despite improving overall performance, these updates may flip specific model predictions in unpredictable ways. In practice, researchers quantify the number of unstable predictions between models pre- and post-update, i.e., predictive churn. In this paper, we study this effect through the lens of predictive multiplicity, i.e., the prevalence of conflicting predictions over the set of near-optimal models (the Rashomon set). We show how traditional measures of predictive multiplicity can be used to examine expected churn over this set of prospective models, i.e., the models that may replace a baseline model in deployment. We present theoretical results on the expected churn between models within the Rashomon set from different perspectives, and we characterize expected churn over model updates via the Rashomon set, pairing our analysis with empirical results on real-world datasets to show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications. Further, we show that our approach remains useful even for models enhanced with uncertainty awareness.
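To make the quantities in the abstract concrete: churn between two classifiers f and g over n examples is the fraction (1/n) Σᵢ 1[f(xᵢ) ≠ g(xᵢ)]. The sketch below is a rough, hypothetical illustration of measuring churn between a baseline and an updated model, and of estimating expected churn against a sampled set of near-optimal models. It is not the paper's implementation: the dataset is synthetic, and the bootstrap-plus-accuracy-tolerance construction is only a crude stand-in for the Rashomon set, which the paper defines via near-optimal loss.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def churn(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    """Fraction of examples whose predicted label flips between two models."""
    return float(np.mean(preds_a != preds_b))

# Toy data standing in for a deployment dataset (assumption, not the paper's data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.5, random_state=1)

# Baseline trained on "old" data; the update retrains on old + newly collected data.
base = LogisticRegression(max_iter=1000).fit(X_old, y_old)
update = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_old, X_new]), np.concatenate([y_old, y_new])
)
print(f"churn(base, update) = {churn(base.predict(X), update.predict(X)):.3f}")

# Crude stand-in for the Rashomon set: refit under bootstrap resampling and keep
# models within an epsilon of the baseline's accuracy. (The paper instead defines
# the set as all models with near-optimal loss.)
rng = np.random.default_rng(0)
eps = 0.01
base_acc = base.score(X, y)
rashomon_preds = []
for _ in range(50):
    idx = rng.integers(0, len(X_old), len(X_old))
    m = LogisticRegression(max_iter=1000).fit(X_old[idx], y_old[idx])
    if m.score(X, y) >= base_acc - eps:
        rashomon_preds.append(m.predict(X))

# Expected churn of the baseline against prospective replacement models.
expected = np.mean([churn(base.predict(X), p) for p in rashomon_preds])
print(f"expected churn over {len(rashomon_preds)} near-optimal models = {expected:.3f}")
```

Under this setup, the expected-churn estimate anticipates how many predictions could flip when the baseline is replaced by any near-optimal alternative, before a specific update is ever deployed.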

@article{watson-daniels2025_2402.07745,
  title={Predictive Churn with the Set of Good Models},
  author={Jamelle Watson-Daniels and Flavio du Pin Calmon and Alexander D'Amour and Carol Long and David C. Parkes and Berk Ustun},
  journal={arXiv preprint arXiv:2402.07745},
  year={2025}
}