Meta Sparse Principal Component Analysis

Abstract

We study meta-learning for support recovery (i.e. recovery of the set of non-zero entries) in high-dimensional Principal Component Analysis. We reduce the sufficient sample complexity in a novel task using information learned from auxiliary tasks. We assume each task to be a different random Principal Component (PC) matrix with a possibly different support, and that the support union of the PC matrices is small. We then pool the data from all the tasks to perform an improper estimation of a single PC matrix by maximising the $\ell_1$-regularised predictive covariance, and establish that, with high probability, the true support union can be recovered given a sufficient number of tasks $m$ and a sufficient number of samples $O\left(\frac{\log(p)}{m}\right)$ per task, for $p$-dimensional vectors. Then, for a novel task, we prove that maximising the $\ell_1$-regularised predictive covariance, with the additional constraint that the support is a subset of the estimated support union, reduces the sufficient sample complexity of successful support recovery to $O(\log |J|)$, where $J$ is the support union recovered from the auxiliary tasks. Typically, $|J|$ is much smaller than $p$ for sparse matrices. Finally, we demonstrate the validity of our theoretical results through numerical simulations.
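The two-stage procedure described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's estimator: it stands in for $\ell_1$-regularised predictive-covariance maximisation with a simple soft-thresholded power iteration on sample covariances, and the spiked data model, all problem sizes, and the regularisation levels are assumptions made for the example. Stage one pools the auxiliary-task data and reads off the estimated support union $\hat{J}$ from the non-zero entries of a sparse PC estimate; stage two estimates the novel task's PC restricted to $\hat{J}$, shrinking the effective dimension from $p$ to $|\hat{J}|$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper): p ambient dimensions,
# a support union of size s, m auxiliary tasks, n samples per task.
p, s, m, n = 50, 5, 10, 100
J_true = np.arange(s)  # true support union, used here only to generate data

def random_sparse_pc():
    """A unit-norm leading PC supported inside J_true (one per task)."""
    v = np.zeros(p)
    v[J_true] = rng.standard_normal(s)
    return v / np.linalg.norm(v)

def spiked_samples(n_samples, v, sigma=0.3):
    """Draws from an assumed rank-one spiked model: sigma^2 I + v v^T."""
    z = rng.standard_normal(n_samples)
    return np.outer(z, v) + sigma * rng.standard_normal((n_samples, p))

def sparse_pc(S, lam, iters=100):
    """Soft-thresholded power iteration on a covariance S: a crude proxy
    for maximising the l1-regularised predictive covariance."""
    v = np.diag(S).copy()          # deterministic, signal-aware start
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = S @ v
        w /= np.linalg.norm(w)
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # l1 proximal step
        nrm = np.linalg.norm(w)
        if nrm == 0:               # everything thresholded away; keep last v
            break
        v = w / nrm
    return v

# Stage 1: pool all auxiliary-task data, estimate one sparse PC, and take
# its non-zero entries as the estimated support union J_hat.
X_pool = np.vstack([spiked_samples(n, random_sparse_pc()) for _ in range(m)])
S_pool = X_pool.T @ X_pool / X_pool.shape[0]
J_hat = np.flatnonzero(sparse_pc(S_pool, lam=0.15))

# Stage 2: for a novel task with few samples, restrict estimation to J_hat,
# reducing the effective dimension from p to |J_hat|.
X_new = spiked_samples(20, random_sparse_pc())
S_new = X_new[:, J_hat].T @ X_new[:, J_hat] / X_new.shape[0]
v_new = sparse_pc(S_new, lam=0.1)
```

In this sketch the novel-task eigenproblem is solved over only $|\hat{J}|$ coordinates instead of $p$, mirroring the reduction in sample complexity from $O(\log p)$ to $O(\log |J|)$ that the abstract states.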
