
Adaptation in log-concave density estimation

Abstract

The log-concave maximum likelihood estimator of a density on the real line based on a sample of size $n$ is known to attain the minimax optimal rate of convergence of $O(n^{-4/5})$ with respect to, e.g., squared Hellinger distance. In this paper, we show that it also enjoys attractive adaptation properties, in the sense that it achieves a faster rate of convergence when the logarithm of the true density is $k$-affine (i.e. made up of $k$ affine pieces), provided $k$ is not too large. Our results use two different techniques: the first relies on a new Marshall's inequality for log-concave density estimation, and reveals that when the true density is close to log-linear on its support, the log-concave maximum likelihood estimator can achieve the parametric rate of convergence in total variation distance. Our second approach depends on local bracketing entropy methods, and allows us to prove a sharp oracle inequality, which implies in particular that the rate of convergence with respect to various global loss functions, including Kullback--Leibler divergence, is $O\bigl(\frac{k}{n}\log^{5/4} n\bigr)$ when the true density is log-concave and its logarithm is close to $k$-affine.
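
As a point of reference, the global loss functions named in the abstract are the standard ones; for densities $f$ and $g$ on the real line (generic notation, not fixed by the abstract itself), they are

h^2(f, g) = \int_{\mathbb{R}} \bigl( f^{1/2}(x) - g^{1/2}(x) \bigr)^2 \, dx,
\qquad
d_{\mathrm{TV}}(f, g) = \frac{1}{2} \int_{\mathbb{R}} \bigl| f(x) - g(x) \bigr| \, dx,
\qquad
d_{\mathrm{KL}}(f \,\|\, g) = \int_{\mathbb{R}} f(x) \log \frac{f(x)}{g(x)} \, dx.

For a concrete instance of the adaptation regime, the Laplace density $f_0(x) = \tfrac{1}{2} e^{-|x|}$ has $\log f_0(x) = -|x| - \log 2$, which is $2$-affine.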
