Adaptation in log-concave density estimation

The log-concave maximum likelihood estimator of a density on the real line based on a sample of size n is known to attain the minimax optimal rate of convergence of O(n^{-4/5}) with respect to, e.g., squared Hellinger distance. In this paper, we show that it also enjoys attractive adaptation properties, in the sense that it achieves a faster rate of convergence when the logarithm of the true density is k-affine (i.e. made up of k affine pieces), provided k is not too large. Our results use two different techniques: the first relies on a new Marshall's inequality for log-concave density estimation, and reveals that when the true density is close to log-linear on its support, the log-concave maximum likelihood estimator can achieve the parametric rate of convergence in total variation distance. Our second approach depends on local bracketing entropy methods, and allows us to prove a sharp oracle inequality, which implies in particular that the rate of convergence with respect to various global loss functions, including Kullback–Leibler divergence, is O((k/n) log^{5/4}(en/k)) when the true density is log-concave and its logarithm is close to k-affine.
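In display form, the two rate regimes described above can be sketched as follows. This is a rough summary using notation assumed here rather than taken verbatim from the paper: hat f_n denotes the log-concave maximum likelihood estimator, f_0 the true density, and h the Hellinger distance.

% Minimal compilable LaTeX sketch of the two rate regimes (notation and
% the form of the bounds are assumptions for illustration, not quoted
% verbatim from the paper; constants and logarithmic factors are
% suppressed by \lesssim).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\[
  \sup_{f_0\ \text{log-concave}} \mathbb{E}_{f_0}\, h^2(\hat{f}_n, f_0)
  \;\lesssim\; n^{-4/5},
  \qquad
  \mathbb{E}_{f_0}\, h^2(\hat{f}_n, f_0)
  \;\lesssim\; \frac{k}{n}\,\log^{5/4}\!\left(\frac{en}{k}\right)
  \quad \text{when } \log f_0 \text{ is $k$-affine.}
\]
\end{document}

The second bound illustrates the adaptation phenomenon: when k is small, the right-hand side is of near-parametric order k/n up to a logarithmic factor, which is faster than the worst-case rate n^{-4/5}.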