Hardest Monotone Functions for Evolutionary Algorithms

13 November 2023
Marc Kaufmann
Maxime Larcher
Johannes Lengler
Oliver Sieberling
Abstract

The study of hardest and easiest fitness landscapes is an active area of research. Recently, Kaufmann, Larcher, Lengler and Zou conjectured that for the self-adjusting (1,λ)-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize. We introduce the function Switching Dynamic BinVal (SDBV), which coincides with ADBV whenever the number of remaining zeros in the search point is strictly less than n/2, where n denotes the dimension of the search space. We show, using a combinatorial argument, that for the (1+1)-EA with any mutation rate p ∈ [0,1], SDBV is drift-minimizing among the class of dynamic monotone functions. Our construction provides the first explicit example of an instance of the partially-ordered evolutionary algorithm (PO-EA) model with parameterized pessimism introduced by Colin, Doerr and Férey, building on work of Jansen. We further show that the (1+1)-EA optimizes SDBV in Θ(n^{3/2}) generations. Our simulations demonstrate matching runtimes for both the static and self-adjusting (1,λ)- and (1+λ)-EA. We further show, using an example of fixed dimension, that drift-minimization does not equal maximal runtime.
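To make the setting concrete, below is a minimal Python sketch of the (1+1)-EA with mutation rate p on a Dynamic BinVal-style function, where the weight permutation is redrawn every generation and parent and offspring are compared under the same freshly drawn function. The uniform random permutation is a stand-in: ADBV and SDBV choose the permutation adversarially against the current search point, and their exact constructions are given in the paper. All names and the run budget are illustrative, not the authors' code.

import random

def dynamic_binval(x, perm):
    # BinVal under permuted weights: bit i contributes 2^perm[i] if set.
    return sum(1 << perm[i] for i, bit in enumerate(x) if bit)

def one_plus_one_ea(n, p=None, max_gens=10**7):
    # (1+1)-EA with standard bit mutation on a dynamic monotone function.
    if p is None:
        p = 1.0 / n  # common default mutation rate
    x = [random.randint(0, 1) for _ in range(n)]
    for gen in range(1, max_gens + 1):
        if all(x):
            return gen  # all-ones optimum reached
        # Uniform Dynamic BinVal: fresh random weights each generation.
        # ADBV/SDBV instead pick this permutation adversarially (see paper).
        perm = random.sample(range(n), n)
        y = [bit ^ (random.random() < p) for bit in x]  # flip each bit w.p. p
        # Elitist selection under the current generation's function.
        if dynamic_binval(y, perm) >= dynamic_binval(x, perm):
            x = y
    return None  # budget exhausted

# Example: a few runs at n = 100; the paper proves Theta(n^{3/2})
# generations for the (1+1)-EA on SDBV.
if __name__ == "__main__":
    print("generations:", [one_plus_one_ea(100) for _ in range(5)])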

View on arXiv: https://arxiv.org/abs/2311.07438