RCCDA: Adaptive Model Updates in the Presence of Concept Drift under a Constrained Resource Budget

Machine learning (ML) models deployed in real-world environments must often adapt to concept drift, where the underlying task data distributions shift over time. The problem becomes even harder when model performance must be maintained under strict resource constraints. Existing solutions often rely on drift-detection methods that incur high computational overhead in resource-constrained environments, and they provide neither strict guarantees on resource usage nor theoretical performance assurances. To address these shortcomings, we propose RCCDA: a dynamic model update policy that optimizes ML training dynamics while ensuring strict compliance with predefined resource constraints, using only past loss information and a tunable drift threshold. In developing our policy, we analytically characterize the evolution of the model loss under concept drift for arbitrary training update decisions. Integrating these results into a Lyapunov drift-plus-penalty framework yields a lightweight policy, based on a measurable accumulated-loss threshold, that provably limits update frequency and cost. Experimental results on three domain generalization datasets demonstrate that our policy outperforms baseline methods in inference accuracy while adhering to strict resource constraints under several schedules of concept drift, making it well suited for real-time ML deployments.
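To make the mechanism concrete, the following is a minimal sketch of a Lyapunov drift-plus-penalty style update policy of the kind the abstract describes: the decision to retrain uses only past loss information and a tunable drift threshold, while a virtual queue enforces a long-term resource budget. This is an illustrative assumption, not the paper's exact algorithm; all class, parameter, and variable names here are hypothetical.

```python
class ResourceConstrainedUpdatePolicy:
    """Illustrative drift-plus-penalty update policy (hypothetical sketch,
    not RCCDA's exact formulation)."""

    def __init__(self, budget_per_step, update_cost, tradeoff_v, drift_threshold):
        self.b = budget_per_step      # allowed average resource spend per step
        self.c = update_cost          # resource cost of one model update
        self.V = tradeoff_v           # weight trading loss against the constraint
        self.theta = drift_threshold  # tunable drift threshold on observed loss
        self.queue = 0.0              # virtual queue tracking budget violation
        self.acc_loss = 0.0           # excess loss accumulated since last update

    def should_update(self, observed_loss):
        # Accumulate only the loss in excess of the drift threshold.
        self.acc_loss += max(observed_loss - self.theta, 0.0)
        # Drift-plus-penalty comparison: update only when the weighted
        # accumulated loss outweighs the queue-weighted resource cost.
        update = self.V * self.acc_loss > self.queue * self.c
        spend = self.c if update else 0.0
        # Virtual queue update: grows on spending, drains by the budget.
        self.queue = max(self.queue + spend - self.b, 0.0)
        if update:
            self.acc_loss = 0.0
        return update
```

In a simulated loss stream, the queue makes consecutive updates progressively more expensive to trigger, so the long-term update rate stays near the budget `b / c` even when the loss remains persistently above the threshold.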
@article{piaseczny2025_2505.24149,
  title={RCCDA: Adaptive Model Updates in the Presence of Concept Drift under a Constrained Resource Budget},
  author={Adam Piaseczny and Md Kamran Chowdhury Shisher and Shiqiang Wang and Christopher G. Brinton},
  journal={arXiv preprint arXiv:2505.24149},
  year={2025}
}