
Certified Multi-Fidelity Zeroth-Order Optimization

2 August 2023
Étienne de Montbrun
Sébastien Gerchinovitz
arXiv:2308.00978
Abstract

We consider the problem of multi-fidelity zeroth-order optimization, where one can evaluate a function f at various approximation levels (of varying costs), and the goal is to optimize f with the cheapest evaluations possible. In this paper, we study certified algorithms, which are additionally required to output a data-driven upper bound on the optimization error. We first formalize the problem in terms of a min-max game between an algorithm and an evaluation environment. We then propose a certified variant of the MFDOO algorithm and derive a bound on its cost complexity for any Lipschitz function f. We also prove an f-dependent lower bound showing that this algorithm has a near-optimal cost complexity. We close the paper by addressing the special case of noisy (stochastic) evaluations as a direct example.
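To make the setting concrete, here is a minimal sketch in Python. It is not the paper's certified MFDOO algorithm, only a naive certified multi-fidelity baseline on a 1-D Lipschitz function: an evaluation at accuracy eps is assumed to cost 1/eps, and the returned certificate L*h/2 + 2*eps is a data-driven upper bound on the optimization error sup f - f(x_hat). The function names, toy objective, cost model, and refinement schedule are all illustrative assumptions.

```python
import numpy as np

def certified_grid_search(f_approx, lipschitz_const, domain, target_error,
                          cost_of_fidelity, budget):
    """Toy certified multi-fidelity grid search on a 1-D Lipschitz function.

    f_approx(x, eps) must return an evaluation of f(x) whose bias is at most
    eps, at a cost of cost_of_fidelity(eps). Returns the best point found, a
    data-driven upper bound (the "certificate") on sup_x f(x) - f(x_hat),
    and the total evaluation cost spent.
    """
    a, b = domain
    total_cost = 0.0
    n, eps = 4, (b - a) * lipschitz_const / 4        # coarse grid, cheap fidelity
    x_hat, y_hat, certificate = None, -np.inf, np.inf
    while certificate > target_error:
        xs = np.linspace(a, b, n)
        h = (b - a) / (n - 1)                        # grid spacing
        round_cost = n * cost_of_fidelity(eps)
        if total_cost + round_cost > budget:
            break                                    # stop before exceeding the budget
        ys = np.array([f_approx(x, eps) for x in xs])
        total_cost += round_cost
        i = int(np.argmax(ys))
        x_hat, y_hat = float(xs[i]), float(ys[i])
        # If |y - f(x)| <= eps at every query and f is L-Lipschitz, then
        # sup f - f(x_hat) <= L*h/2 + 2*eps, so this bound is certified.
        certificate = lipschitz_const * h / 2 + 2 * eps
        n, eps = 2 * n, eps / 2                      # refine grid and fidelity
    return x_hat, y_hat, certificate, total_cost


if __name__ == "__main__":
    # Hypothetical target: f(x) = -(x - 0.3)^2 on [0, 1], Lipschitz constant <= 2.
    # The fidelity-eps oracle perturbs f by a bias of magnitude at most eps.
    def f_approx(x, eps):
        return -(x - 0.3) ** 2 + eps * np.sin(50 * x)

    x_hat, y_hat, cert, cost = certified_grid_search(
        f_approx, lipschitz_const=2.0, domain=(0.0, 1.0), target_error=1e-2,
        cost_of_fidelity=lambda eps: 1.0 / eps, budget=1e6)
    print(f"x_hat = {x_hat:.4f}, certified error bound = {cert:.4f}, cost = {cost:.0f}")
```

This uniform-refinement baseline spends its budget everywhere; the certified MFDOO variant analyzed in the paper is adaptive in where and at which fidelity it evaluates, which is what yields the near-optimal, f-dependent cost complexity stated in the abstract.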
