Simple Hyper-heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes

Abstract

Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics from a set of low-level heuristics during the optimisation process. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this paper we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes function. Our analysis shows that the standard Simple Random, Permutation, Greedy and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of the low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the simple Random Gradient HH so that success can be measured over a fixed period of time tau, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search (RLS) to optimality during the run. We prove that it has the best possible performance achievable with the available low-level heuristics. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. Finally, we show that the advantages of GRG over RLS and evolutionary algorithms using standard bit mutation increase if the anytime performance is considered. Experimental analyses confirm these results for different problem sizes.
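To make the mechanism concrete, the following is a minimal Python sketch of the Generalised Random Gradient idea described in the abstract: a low-level heuristic (here, an RLS variant flipping k bits) is drawn uniformly at random and exploited for a period of tau iterations; it is kept for another period only if it produced at least one improvement. The operator set (k = 1 or k = 2), the period length tau, the evaluation budget, and the exact bookkeeping of when a period restarts are illustrative assumptions, not the paper's precise definitions.

```python
import random

def leading_ones(x):
    """LeadingOnes: number of consecutive 1-bits from the left."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def rls_k(x, k):
    """Low-level heuristic RLS_k: flip exactly k distinct bits chosen uniformly at random."""
    y = x[:]
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def generalised_random_gradient(n, ks, tau, max_evals=200_000):
    """Sketch of the Generalised Random Gradient HH (simplified bookkeeping)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = leading_ones(x)
    evals = 1
    while fx < n and evals < max_evals:
        k = random.choice(ks)            # select a low-level heuristic uniformly at random
        successful = True
        while successful and fx < n and evals < max_evals:
            successful = False
            for _ in range(tau):         # exploit the chosen heuristic for a period of tau steps
                y = rls_k(x, k)
                fy = leading_ones(y)
                evals += 1
                if fy > fx:              # a strict improvement counts as a success
                    x, fx = y, fy
                    successful = True
                if fx == n or evals >= max_evals:
                    break
    return x, fx, evals

if __name__ == "__main__":
    best, value, used = generalised_random_gradient(n=100, ks=[1, 2], tau=500)
    print(f"LeadingOnes value {value} after {used} evaluations")
```

In this simplified version, any improvement within a period grants the current heuristic a further period; the paper's analysis additionally characterises how the choice of tau lets the HH track the optimal neighbourhood size as the LeadingOnes value grows.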
