arXiv:1903.05315
Optimality of Maximum Likelihood for Log-Concave Density Estimation and Bounded Convex Regression

13 March 2019
Gil Kur
Y. Dagan
Alexander Rakhlin
Abstract

In this paper, we study two problems: (1) estimation of a $d$-dimensional log-concave distribution, and (2) bounded multivariate convex regression with random design, where the underlying distribution is log-concave or compactly supported with a continuous density. First, we show that for all $d \ge 4$, the maximum likelihood estimators of both problems achieve an optimal risk of $\Theta_d(n^{-2/(d+1)})$ (up to a logarithmic factor) in terms of squared Hellinger distance and squared $L_2$ distance, respectively. Previously, the optimality of both estimators was known only for $d \le 3$. We also prove that the $\epsilon$-entropy numbers of the two aforementioned families are equal up to logarithmic factors. We complement these results by proving a sharp bound of $\Theta_d(n^{-2/(d+4)})$ on the minimax rate (up to logarithmic factors) with respect to the total variation distance. Finally, we prove that estimating a log-concave density, even a uniform distribution on a convex set, up to a fixed accuracy requires a number of samples that is \emph{at least} exponential in the dimension. We do so by improving the dimensional constant in the best known lower bound on the minimax rate from $2^{-d} \cdot n^{-2/(d+1)}$ to $c \cdot n^{-2/(d+1)}$ (when $d \ge 2$).
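The rates claimed in the abstract can be collected in one place as follows; the symbols $\mathcal{R}_{\mathrm{MLE}}$ and $\mathcal{R}_{\mathrm{TV}}$ are labels introduced here for readability, not notation from the paper, and constants and logarithmic factors are suppressed.

```latex
% Risk of the maximum likelihood estimator for d >= 4, measured in
% squared Hellinger distance (density estimation) and squared L_2
% distance (bounded convex regression), up to log factors:
\[
  \mathcal{R}_{\mathrm{MLE}}(n, d) \;=\; \Theta_d\!\bigl(n^{-2/(d+1)}\bigr)
\]
% Minimax rate with respect to total variation distance, up to log factors:
\[
  \mathcal{R}_{\mathrm{TV}}(n, d) \;=\; \Theta_d\!\bigl(n^{-2/(d+4)}\bigr)
\]
```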
