Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences

28 March 2024
Dimitris Bertsimas, Vassilis Digalakis Jr, Yu Ma, Phevos Paschalidis
Abstract

We consider the problem of retraining machine learning (ML) models when new batches of data become available. Existing approaches greedily optimize for predictive power independently at each batch, without considering the stability of the model's structure or analytical insights across retraining iterations. We propose a model-agnostic framework for finding sequences of models that are stable across retraining iterations. We develop a mixed-integer optimization formulation that is guaranteed to recover Pareto optimal models (in terms of the trade-off between predictive power and stability) with good generalization properties, as well as an efficient polynomial-time algorithm that performs well in practice. We focus on retaining consistent analytical insights (important for model interpretability, ease of implementation, and fostering trust with users) by using custom-defined distance metrics that can be directly incorporated into the optimization problem. We evaluate our framework across model classes (regression, decision trees, boosted trees, and neural networks) and application domains (healthcare, vision, and language), including deployment in a production pipeline at a major US hospital. We find that, on average, a 2% reduction in predictive power leads to a 30% improvement in stability.
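The abstract does not spell out the mixed-integer formulation or the custom distance metrics, so the sketch below only illustrates the underlying predictive power vs. stability trade-off in its simplest form: a linear model retrained on a new batch with a squared-distance penalty toward the previous batch's coefficients. The function name `retrain_stable` and the penalty weight `lam` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def retrain_stable(X, y, beta_prev, lam):
    """Fit a linear model on a new batch while penalizing deviation
    from the previous model's coefficients:

        min_beta  ||y - X beta||^2 + lam * ||beta - beta_prev||^2

    Larger lam trades predictive power on the new batch for stability
    across retraining iterations; the solution is available in closed form.
    """
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * beta_prev
    return np.linalg.solve(A, b)

# Toy usage: two batches drawn from the same ground truth.
rng = np.random.default_rng(0)
beta_true = np.array([2.0, -1.0, 0.5])
X1 = rng.normal(size=(100, 3)); y1 = X1 @ beta_true + rng.normal(scale=0.1, size=100)
X2 = rng.normal(size=(100, 3)); y2 = X2 @ beta_true + rng.normal(scale=0.1, size=100)

beta1 = retrain_stable(X1, y1, np.zeros(3), lam=0.0)  # first batch: plain least squares
beta2 = retrain_stable(X2, y2, beta1, lam=10.0)       # second batch: stay close to beta1
print(np.linalg.norm(beta2 - beta1))                  # drift between consecutive models
```

The paper's framework generalizes this idea to arbitrary model classes and custom-defined structural distance metrics, solved either exactly via mixed-integer optimization or approximately with a polynomial-time algorithm.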

@article{bertsimas2025_2403.19871,
  title={Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences},
  author={Dimitris Bertsimas and Vassilis Digalakis Jr and Yu Ma and Phevos Paschalidis},
  journal={arXiv preprint arXiv:2403.19871},
  year={2025}
}