
Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing

Abstract

Retraining a model using its own predictions together with the original, potentially noisy labels is a well-known strategy for improving model performance. While prior work has demonstrated the benefits of specific heuristic retraining schemes, the question of how to optimally combine the model's predictions with the provided labels remains largely open. This paper addresses this fundamental question for binary classification tasks. We develop a principled framework based on approximate message passing (AMP) to analyze iterative retraining procedures under two ground-truth settings: the Gaussian mixture model (GMM) and the generalized linear model (GLM). Our main contribution is the derivation of the Bayes-optimal aggregator function for combining the current model's predictions and the given labels, which, when used to retrain the same model, minimizes its prediction error. We also quantify the performance of this optimal retraining strategy over multiple rounds. We complement our theoretical results by proposing a practically usable version of the theoretically optimal aggregator function for linear probing with the cross-entropy loss, and demonstrate its superiority over baseline methods in the high label-noise regime.
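
Below is a minimal sketch of the iterative retraining loop described above, for binary classification with a logistic-regression linear probe. The aggregate function, its weight parameter, and the self_boost helper are hypothetical names chosen for illustration; in particular, the convex-combination aggregator is only a placeholder for the Bayes-optimal aggregator derived via AMP in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def aggregate(pred_probs, noisy_labels, weight=0.5):
    # Hypothetical aggregator: a convex combination of the model's predicted
    # class-1 probabilities and the given (possibly noisy) labels. The paper's
    # Bayes-optimal aggregator (derived via AMP) would replace this placeholder.
    return weight * pred_probs + (1.0 - weight) * noisy_labels

def self_boost(X, y_noisy, rounds=3, weight=0.5):
    # Iterative retraining: each round combines the current model's predictions
    # with the original labels and refits a linear probe (cross-entropy loss).
    targets = y_noisy.astype(float)
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        # Thresholding the soft targets to refit the probe is illustrative only.
        model.fit(X, (targets >= 0.5).astype(int))
        pred_probs = model.predict_proba(X)[:, 1]
        targets = aggregate(pred_probs, y_noisy, weight)
    return model

# Illustrative usage on synthetic GMM-style data with 30% flipped labels.
rng = np.random.default_rng(0)
n, d = 2000, 50
y_true = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) + 1.5 * np.outer(2 * y_true - 1, np.ones(d) / np.sqrt(d))
y_noisy = np.where(rng.random(n) < 0.3, 1 - y_true, y_true)
model = self_boost(X, y_noisy, rounds=3)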

@article{javanmard2025_2505.15195,
  title={Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing},
  author={Adel Javanmard and Rudrajit Das and Alessandro Epasto and Vahab Mirrokni},
  journal={arXiv preprint arXiv:2505.15195},
  year={2025}
}