
Convergence Rate of the (1+1)-Evolution Strategy with Success-Based Step-Size Adaptation on Convex Quadratic Functions

Abstract

The (1+1)-evolution strategy (ES) with success-based step-size adaptation is analyzed on a general convex quadratic function and its monotone transformation, that is, $f(x) = g((x - x^*)^\mathrm{T} H (x - x^*))$, where $g:\mathbb{R}\to\mathbb{R}$ is a strictly increasing function, $H$ is a positive-definite symmetric matrix, and $x^* \in \mathbb{R}^d$ is the optimal solution of $f$. The convergence rate, that is, the rate of decrease of the distance from a search point $m_t$ to the optimal solution $x^*$, is proven to be in $O(\exp(-L/\mathrm{Tr}(H)))$, where $L$ is the smallest eigenvalue of $H$ and $\mathrm{Tr}(H)$ is the trace of $H$. This result generalizes the known rate of $O(\exp(-1/d))$ for the case $H = I_d$ ($I_d$ is the identity matrix of dimension $d$) and $O(\exp(-1/(d\cdot\xi)))$ for the case $H = \mathrm{diag}(\xi \cdot I_{d/2}, I_{d/2})$. To the best of our knowledge, this is the first study in which the convergence rate of the (1+1)-ES is derived explicitly and rigorously on a general convex quadratic function, revealing the impact of the distribution of the eigenvalues of the Hessian $H$ on the optimization, and not only the impact of the condition number of $H$.
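The algorithm class analyzed above can be illustrated by a minimal (1+1)-ES with a 1/5-success-style step-size rule on an ill-conditioned convex quadratic. This is a hedged sketch, not the paper's exact formulation: the increase/decrease factors `inc`/`dec`, the iteration budget, and the specific test instance $H = \mathrm{diag}(\xi \cdot I_{d/2}, I_{d/2})$ with $\xi = 100$ are illustrative choices.

```python
import numpy as np

def one_plus_one_es(f, m0, sigma0, iters=50_000, inc=1.1, dec=1.1 ** -0.25):
    """Sketch of a (1+1)-ES with success-based (1/5-rule-style) step-size
    adaptation; parameter values are illustrative, not from the paper."""
    m = np.array(m0, dtype=float)
    sigma = float(sigma0)
    fm = f(m)
    for _ in range(iters):
        x = m + sigma * np.random.randn(len(m))  # isotropic Gaussian offspring
        fx = f(x)
        if fx <= fm:          # success: accept the offspring, enlarge the step
            m, fm = x, fx
            sigma *= inc
        else:                 # failure: keep the parent, shrink the step
            sigma *= dec
    return m, fm

# Convex quadratic f(x) = (x - x*)^T H (x - x*) with
# H = diag(xi * I_{d/2}, I_{d/2}), so L = 1 and Tr(H) = (xi + 1) * d / 2.
d, xi = 10, 100.0
H = np.diag(np.r_[np.full(d // 2, xi), np.ones(d // 2)])
x_star = np.ones(d)
f = lambda x: (x - x_star) @ H @ (x - x_star)

np.random.seed(0)
m, fm = one_plus_one_es(f, m0=np.zeros(d), sigma0=1.0)
```

Per the stated rate, the logarithmic progress per iteration on this instance scales like $L/\mathrm{Tr}(H) = 2/((\xi+1)d)$, so the iteration budget must grow with both the dimension and the conditioning parameter $\xi$.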
