Simple Hyper-heuristics Optimise LeadingOnes in the Best Runtime Achievable Using Randomised Local Search Low-Level Heuristics

Abstract

Selection hyper-heuristics are randomised search methodologies which choose and execute heuristics from a set of low-level heuristics. Recent research for the LeadingOnes benchmark function has shown that the standard Simple Random, Permutation, Random Gradient, Greedy and Reinforcement Learning selection mechanisms show no effects of learning. The idea behind the learning mechanisms is to continue to exploit the currently selected heuristic as long as it is successful. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. In this paper we generalise the `simple' selection-perturbation mechanisms so that success can be measured over a fixed period of time $\tau$, rather than in a single iteration. For LeadingOnes we prove that the Generalised Random Gradient hyper-heuristic achieves the best possible performance attainable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower order terms. The performance of the hyper-heuristic improves as the number of low-level heuristics to choose from increases. In particular, with access to $k$ low-level heuristics, it outperforms the best possible algorithm using fewer than $k$. Experimental analyses confirm these results for different problem sizes (up to $n = 10^8$) and shed light on the best choices of the parameter $\tau$ in various situations.
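
To make the selection mechanism concrete, the following is a minimal Python sketch of the Generalised Random Gradient idea on LeadingOnes, written only from the description above. The low-level heuristics are RLS_k operators (flip k distinct bits), as named in the abstract; the function names, the acceptance rule (RLS accepting moves of equal fitness), and the parameter values are illustrative assumptions, not the paper's exact pseudocode.

```python
import random


def leading_ones(x):
    """LeadingOnes: the number of consecutive 1-bits at the start of x."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count


def rls_k(x, k, rng):
    """Low-level heuristic RLS_k: flip k distinct bits chosen uniformly at random."""
    y = list(x)
    for i in rng.sample(range(len(y)), k):
        y[i] ^= 1
    return y


def generalised_random_gradient(n, num_heuristics, tau, seed=None):
    """Sketch of the Generalised Random Gradient mechanism on LeadingOnes.

    A low-level heuristic (RLS_k for k in {1, ..., num_heuristics}) is drawn
    uniformly at random and run for a period of tau iterations. If it finds a
    strict improvement within the period, it is kept and a new period starts;
    otherwise a fresh heuristic is drawn at random. Returns the number of
    fitness evaluations used to reach the optimum.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    evaluations = 0
    while fx < n:
        k = rng.randint(1, num_heuristics)  # select a heuristic at random
        improved = True
        while improved and fx < n:
            improved = False
            for _ in range(tau):            # one period of tau iterations
                y = rls_k(x, k, rng)
                fy = leading_ones(y)
                evaluations += 1
                if fy >= fx:                # RLS acceptance: never worsen
                    x = y
                    if fy > fx:             # success: keep the heuristic
                        fx = fy
                        improved = True
                        break
            # no improvement in the whole period: fall through and redraw
    return evaluations


if __name__ == "__main__":
    # Small example run; tau = n // 2 is an arbitrary illustrative choice,
    # not a value recommended by the paper.
    n = 100
    print(generalised_random_gradient(n, num_heuristics=2, tau=n // 2, seed=1))
```

The key design point is that "success" is judged per period rather than per step: a single failed mutation no longer discards a promising heuristic, which is exactly the generalisation the abstract motivates.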
