Adaptive approximation of monotone functions

14 September 2023
Pierre Gaillard
Sébastien Gerchinovitz
Étienne de Montbrun
arXiv:2309.07530 (abs / PDF / HTML)
Abstract

We study the classical problem of approximating a non-decreasing function $f: \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function $f$ we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\epsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure depends on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than predicted by the algorithm (without any knowledge of the smoothness of $f$). A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, smooth and piecewise-smooth functions, as well as monotone and non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
