We study the problem of learning multivariate log-concave densities with respect to a global loss function. We obtain the first upper bound on the sample complexity of the maximum likelihood estimator (MLE) for a log-concave density on $\mathbb{R}^d$, for all $d \geq 4$. Prior to this work, no finite sample upper bound was known for this estimator in more than $3$ dimensions. In more detail, we prove that for any $d \geq 1$ and $\epsilon > 0$, given $\tilde{O}_d((1/\epsilon)^{(d+3)/2})$ samples drawn from an unknown log-concave density $f_0$ on $\mathbb{R}^d$, the MLE outputs a hypothesis that with high probability is $\epsilon$-close to $f_0$, in squared Hellinger loss. A sample complexity lower bound of $\Omega_d((1/\epsilon)^{(d+1)/2})$ was previously known for any learning algorithm that achieves this guarantee. We thus establish that the sample complexity of the log-concave MLE is near-optimal, up to an $\tilde{O}(1/\epsilon)$ factor.
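For reference, the squared Hellinger loss used above can be written as follows; note that conventions differ by a constant factor of $1/2$, which does not affect the stated rates. (This definition is standard background, not taken from the abstract itself.)

```latex
% Squared Hellinger distance between densities f and g on R^d.
% Some authors omit the 1/2 factor; the two conventions differ only by a constant.
h^2(f, g)
  = \frac{1}{2} \int_{\mathbb{R}^d} \left( \sqrt{f(x)} - \sqrt{g(x)} \right)^2 \, dx
  = 1 - \int_{\mathbb{R}^d} \sqrt{f(x)\, g(x)} \, dx
```

Under this loss, a hypothesis $\hat{f}$ being $\epsilon$-close to $f_0$ means $h^2(\hat{f}, f_0) \leq \epsilon$.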