Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation

This paper studies the sparsistency, rates of convergence, and asymptotic normality for estimating sparse covariance matrices based on penalized likelihood with non-concave penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur a priori on the covariance matrix, its inverse, or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order $(s_n \log p_n / n)^{1/2}$, where $s_n$ is the number of non-sparse elements, $p_n$ is the size of the covariance matrix, and $n$ is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The biases of the estimators using different penalty functions are explicitly obtained. As a result, for the $L_1$-penalty, to obtain sparsistency and the optimal rate of convergence, the non-sparsity rate must be low: $s_n' = O(p_n)$ among the $O(p_n^2)$ parameters for estimating a sparse covariance matrix, sparse precision matrix, or sparse Cholesky factor, and a similarly low non-sparsity rate for estimating a sparse correlation matrix or its inverse, where $s_n'$ is the number of non-sparse elements among the off-diagonal entries. On the other hand, using the SCAD or hard-thresholding penalty functions, there is no such restriction.
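
To make the framework concrete, the sketch below (illustrative code, not from the paper) evaluates a penalized Gaussian likelihood for the precision-matrix case, $\mathrm{tr}(S\Omega) - \log\det\Omega$ plus a penalty on the off-diagonal entries, and implements the three penalties the abstract compares. All function names and the conventional SCAD constant a = 3.7 are assumptions chosen for illustration.

    # Minimal sketch (not the authors' code) of penalized likelihood for a
    # sparse precision matrix Omega, given a sample covariance matrix S,
    # with the L1, SCAD, and hard-thresholding penalties.
    import numpy as np

    def l1_penalty(t, lam):
        """L1 (lasso) penalty: p_lam(t) = lam * |t|."""
        return lam * np.abs(t)

    def scad_penalty(t, lam, a=3.7):
        """SCAD penalty of Fan and Li (2001); a > 2, conventionally 3.7."""
        t = np.abs(t)
        linear = lam * t                                    # t <= lam
        quad = -(t**2 - 2*a*lam*t + lam**2) / (2*(a - 1))   # lam < t <= a*lam
        const = (a + 1) * lam**2 / 2                        # t > a*lam
        return np.where(t <= lam, linear, np.where(t <= a*lam, quad, const))

    def hard_penalty(t, lam):
        """Hard-thresholding penalty: p_lam(t) = lam^2 - (lam - |t|)_+^2."""
        t = np.abs(t)
        return lam**2 - np.maximum(lam - t, 0.0)**2

    def penalized_neg_loglik(omega, S, lam, penalty):
        """tr(S Omega) - log det(Omega) + penalties on off-diagonal entries."""
        sign, logdet = np.linalg.slogdet(omega)
        if sign <= 0:
            return np.inf  # Omega must be positive definite
        off_diag = omega[~np.eye(omega.shape[0], dtype=bool)]
        return np.trace(S @ omega) - logdet + penalty(off_diag, lam).sum()

The design difference driving the abstract's conclusion is visible here: the L1 penalty grows linearly in |t|, so a tuning parameter large enough for sparsistency inflates the bias unless the number of non-sparse off-diagonal elements is small, whereas SCAD and the hard-thresholding penalty are constant beyond a*lam and lam respectively, leaving large entries essentially unpenalized.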