The use of cross validation in the analysis of designed experiments

Cross-validation (CV) is a common method for tuning machine learning methods and can also be used for model selection in regression. Because of the structured nature of small, traditional experimental designs, the literature has warned against using CV in their analysis. The striking increase in the use of machine learning, and thus CV, in the analysis of experimental designs has led us to empirically study the effectiveness of CV compared with other model selection methods for designed experiments, including the little bootstrap. We consider both response surface settings, where prediction is of primary interest, and screening settings, where factor selection is most important. Overall, we provide evidence that the use of leave-one-out cross-validation (LOOCV) in the analysis of small, structured designs is often useful. More general k-fold CV may also be competitive, but its performance is uneven.
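As a minimal sketch of the kind of procedure studied here, the following uses LOOCV (via the PRESS statistic) to select among main-effect models on a small two-level factorial design. The design, the simulated response, and the active factors are all hypothetical illustrations, not the paper's actual experiments.

```python
import numpy as np
from itertools import combinations, product

# Hypothetical 2^3 full factorial design in coded -1/+1 levels
X_full = np.array(list(product([-1, 1], repeat=3)), dtype=float)
rng = np.random.default_rng(0)
# Simulated response: factors A and C active, plus noise (illustration only)
y = 2.0 * X_full[:, 0] - 1.5 * X_full[:, 2] + rng.normal(0, 0.5, len(X_full))

def loocv_press(X, y):
    """Leave-one-out prediction error sum of squares (PRESS) for OLS."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        # Fit OLS with intercept on the n-1 remaining runs
        Xi = np.column_stack([np.ones(mask.sum()), X[mask]])
        beta, *_ = np.linalg.lstsq(Xi, y[mask], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ beta
        press += (y[i] - pred) ** 2
    return press

# Score every nonempty main-effects subset by LOOCV; keep the smallest PRESS
best_press, best_subset = min(
    (loocv_press(X_full[:, list(s)], y), s)
    for k in range(1, 4)
    for s in combinations(range(3), k)
)
print("selected factors:", best_subset, "PRESS:", round(best_press, 3))
```

With n-fold (i.e., k = n) folds this is exactly LOOCV; a general k-fold variant would instead hold out random groups of runs, which on such a small, balanced design can break the structure the paper's warnings concern.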
@article{weese2025_2506.14593,
  title={The use of cross validation in the analysis of designed experiments},
  author={Maria L. Weese and Byran J. Smucker and David J. Edwards},
  journal={arXiv preprint arXiv:2506.14593},
  year={2025}
}