Quantifying Overfitting along the Regularization Path for Two-Part-Code MDL in Supervised Classification

Abstract

We provide a complete characterization of the entire regularization curve of a modified two-part-code Minimum Description Length (MDL) learning rule for binary classification, based on an arbitrary prior or description language. Grünwald and Langford [2004] previously established the lack of asymptotic consistency, from an agnostic PAC (frequentist worst case) perspective, of the MDL rule with a penalty parameter of $\lambda=1$, suggesting that it under-regularizes. Driven by interest in understanding how benign or catastrophic under-regularization and overfitting might be, we obtain a precise quantitative description of the worst-case limiting error as a function of the regularization parameter $\lambda$ and noise level (or approximation error), significantly tightening the analysis of Grünwald and Langford for $\lambda=1$ and extending it to all other choices of $\lambda$.
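
For concreteness, the rule can be read as follows (a sketch of the standard two-part-code formulation; the notation $|\sigma(h)|$ and $\mathrm{DL}(y \mid h, x)$ is ours, and the paper's exact codelengths may differ):

$$\hat{h}_{\lambda} \in \arg\min_{h} \; \lambda\, |\sigma(h)| + \mathrm{DL}(y \mid h, x)$$

Here $|\sigma(h)|$ is the description length of hypothesis $h$ under the chosen description language (equivalently, the negative log of its prior), and $\mathrm{DL}(y \mid h, x)$ is the codelength of the training labels given $h$'s predictions; $\lambda = 1$ recovers the unregularized MDL rule studied by Grünwald and Langford [2004], while larger $\lambda$ penalizes complex hypotheses more heavily.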

@article{zhu2025_2503.02110,
  title={Quantifying Overfitting along the Regularization Path for Two-Part-Code MDL in Supervised Classification},
  author={Xiaohan Zhu and Nathan Srebro},
  journal={arXiv preprint arXiv:2503.02110},
  year={2025}
}