This research investigates how to determine whether two rankings come from the same distribution. We evaluate three hybrid tests, each combining a statistical test (Wilcoxon's, Dietterich's, or Alpaydin's) with cross-validation (CV) and operating with 5 to 10 folds, yielding 18 variants in total. We apply these tests within the framework of a popular comparative statistical method, the Sum of Ranking Differences (SRD), which builds on the Manhattan distance between rankings. The introduced methodology is widely applicable, from machine learning to the social sciences. To compare the methods, we follow an innovative approach borrowed from economics: we design nine scenarios for testing type I and type II errors, representing typical situations (that is, different data structures) that CV tests face routinely. The optimal CV method depends on the preferred trade-off between type I and type II errors, the size of the input, and the patterns expected in the data. The Wilcoxon method with eight folds proved best for all three investigated input sizes. Although the Dietterich and Alpaydin methods perform best in type I situations, they fail badly in type II cases. We demonstrate our results on real-world data from chess and chemistry. Overall, we cannot recommend either Alpaydin's or Dietterich's test as an alternative to Wilcoxon's combined with cross-validation.
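To make the two core ingredients concrete, the minimal Python sketch below computes SRD as the Manhattan (L1) distance between rank vectors and then compares two methods' fold-wise SRD values with Wilcoxon's signed-rank test. This is an illustration under stated assumptions, not the paper's implementation: the `srd` helper, the row-mean reference ranking, and all fold-wise numbers are hypothetical.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def srd(scores, reference):
    """Sum of Ranking Differences: the Manhattan (L1) distance between
    the rank vector of one method's scores and the reference ranking.
    Hypothetical helper for illustration only."""
    return np.abs(rankdata(scores) - rankdata(reference)).sum()

# Toy example: 6 objects scored by two methods; the reference is the
# row-wise mean (a common SRD reference choice, assumed here).
rng = np.random.default_rng(0)
scores = rng.random((6, 2))
reference = scores.mean(axis=1)
srd_values = [srd(scores[:, j], reference) for j in range(2)]
print("SRD per method:", srd_values)

# Hybrid-test sketch: pair the SRD values that two methods obtain on the
# same 8 CV folds (illustrative numbers), then apply Wilcoxon's
# signed-rank test to the paired fold-wise results.
srd_a = np.array([4, 6, 5, 7, 5, 6, 4, 5])
srd_b = np.array([5, 8, 8, 11, 10, 12, 11, 13])
stat, p = wilcoxon(srd_a, srd_b)
print("Wilcoxon statistic:", stat, "p-value:", p)
```

A small, consistently negative fold-wise difference drives the p-value down here, which is the mechanism the hybrid tests exploit: per-fold pairing removes fold-to-fold variability before the rank test is applied.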