Meta Sparse Principal Component Analysis

We study meta-learning for support recovery (i.e., identifying the set of non-zero entries) in high-dimensional sparse Principal Component Analysis (PCA). Using information learned from auxiliary tasks, we reduce the sufficient sample complexity for a novel task. We model each task as a different random Principal Component (PC) matrix with a possibly different support, and assume that the union of the supports across tasks is small. We then pool the data from all tasks to perform an improper estimation of a single PC matrix by maximising the ℓ1-regularised predictive covariance, and establish that, for p-dimensional observations, the true support union can be recovered with high probability given a sufficient number of tasks and a sufficient number of samples per task. Then, for a novel task, we prove that maximising the ℓ1-regularised predictive covariance under the additional constraint that the support is a subset of the estimated support union J reduces the sufficient sample complexity of successful support recovery to one that scales with the size of J rather than with the ambient dimension p. Typically, the size of J is much smaller than p for sparse matrices. Finally, we validate our theoretical results through numerical simulations.
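As an illustrative sketch only (not the authors' estimator), the two-stage pipeline described above — pool the auxiliary tasks to estimate the support union J, then solve the novel task restricted to J — can be mimicked with a truncated power iteration as a simple stand-in for ℓ1-regularised predictive-covariance maximisation. The function names, the hard-thresholding heuristic, and the sparsity parameter k are all assumptions introduced here for illustration:

```python
import numpy as np

def sparse_pc(cov, k, n_iter=200, seed=0):
    """Leading sparse principal component via truncated power iteration.

    Stand-in for l1-regularised covariance maximisation: at each step,
    keep only the k largest-magnitude entries of the eigenvector iterate.
    """
    rng = np.random.default_rng(seed)
    p = cov.shape[0]
    v = rng.standard_normal(p)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = cov @ v
        # hard-threshold: zero all but the k largest-magnitude entries
        idx = np.argsort(np.abs(v))[:-k]
        v[idx] = 0.0
        v /= np.linalg.norm(v)
    return v

def estimate_support_union(task_samples, k):
    """Stage 1: pool all auxiliary tasks and fit one (improper) sparse PC."""
    pooled = np.vstack(task_samples)           # stack samples from all tasks
    cov = pooled.T @ pooled / len(pooled)      # pooled sample covariance
    v = sparse_pc(cov, k)
    return np.flatnonzero(v)                   # estimated support union J

def novel_task_pc(samples, J, k):
    """Stage 2: novel task, with the support constrained to lie inside J."""
    sub = samples[:, J]                        # only |J| coordinates matter,
    cov = sub.T @ sub / len(sub)               # so the problem shrinks to |J| dims
    v_sub = sparse_pc(cov, min(k, len(J)))
    v = np.zeros(samples.shape[1])
    v[J] = v_sub                               # embed back into p dimensions
    return v
```

The dimension reduction in stage 2 is the point of the abstract's complexity claim: the novel-task estimator only ever touches the |J| pooled-support coordinates, not all p of them.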