Self-adjusting Population Sizes for the $(1,\lambda)$-EA on Monotone Functions

We study the $(1,\lambda)$-EA with mutation rate $c/n$ for $c \le 1$, where the population size is adaptively controlled with the $(1:s+1)$-success rule. Recently, Hevia Fajardo and Sudholt have shown that this setup with $c = 1$ is efficient on \onemax for $s < 1$, but inefficient if $s \ge 18$. Surprisingly, the hardest part is not close to the optimum, but rather at linear distance from it. We show that this behavior is not specific to \onemax. If $s$ is small, then the algorithm is efficient on all monotone functions, and if $s$ is large, then it needs superpolynomial time on all monotone functions. In the former case, for $c < 1$ we show an $O(n)$ upper bound on the number of generations and $O(n \log n)$ on the number of function evaluations, and for $c = 1$ we show $O(n \log n)$ generations and $O(n^2 \log\log n)$ evaluations. We also show formally that optimization is always fast, regardless of $s$, if the algorithm starts in proximity of the optimum. All results also hold in a dynamic environment where the fitness function changes in each generation.
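To make the setup concrete, the following is a minimal Python sketch of the algorithm studied here: a $(1,\lambda)$-EA with standard bit mutation at rate $c/n$ whose offspring population size $\lambda$ is updated by the $(1:s+1)$-success rule, run on \onemax. The update factor `F`, the rounding of $\lambda$ to an integer offspring count, and the parameter values in the usage line are illustrative assumptions, not taken from the paper.

```python
import random

def one_max(x):
    """Fitness: number of one-bits (the monotone benchmark from the abstract)."""
    return sum(x)

def mutate(parent, c, n, rng):
    """Standard bit mutation: flip each bit independently with probability c/n."""
    return [bit ^ 1 if rng.random() < c / n else bit for bit in parent]

def self_adjusting_one_comma_lambda_ea(n, c=1.0, s=0.5, F=1.5, max_gens=10**6, seed=0):
    """(1,lambda)-EA with lambda controlled by the (1:s+1)-success rule.

    On a successful generation (best offspring strictly fitter than the parent),
    lambda is divided by F; otherwise it is multiplied by F**(1/s).
    Rounding lambda to an offspring count is an implementation choice here.
    """
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    lam = 1.0
    generations = evaluations = 0

    while one_max(parent) < n and generations < max_gens:
        offspring_count = max(1, round(lam))
        offspring = [mutate(parent, c, n, rng) for _ in range(offspring_count)]
        best = max(offspring, key=one_max)
        evaluations += offspring_count
        generations += 1

        if one_max(best) > one_max(parent):   # success: shrink the population
            lam = max(1.0, lam / F)
        else:                                 # failure: grow the population
            lam = lam * F ** (1.0 / s)

        parent = best                         # comma selection: always accept

    return generations, evaluations

if __name__ == "__main__":
    gens, evals = self_adjusting_one_comma_lambda_ea(n=100, c=1.0, s=0.5)
    print(f"generations={gens}, evaluations={evals}")
```

The rule is calibrated so that $\lambda$ stays roughly constant when about one in $s+1$ generations is successful: small $s$ lets $\lambda$ grow quickly after failures (the efficient regime above), while large $s$ keeps $\lambda$ small and leads to the superpolynomial behavior at linear distance from the optimum.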