Near-optimal sample complexity for convex tensor completion

Abstract

We analyze low-rank tensor completion (TC) using noisy measurements of a subset of the tensor's entries. For a rank-$r$, order-$d$, $N \times N \times \cdots \times N$ tensor with $r = O(1)$, the best sample complexity achieved so far is $O(N^{\frac{d}{2}})$, obtained by solving a tensor nuclear-norm minimization problem. However, this bound is significantly larger than the number of free variables in a low-rank tensor, which is $O(dN)$. In this paper, we show that by using an atomic norm whose atoms are rank-$1$ sign tensors, one can obtain a sample complexity of $O(dN)$. Moreover, we generalize the matrix max-norm definition to tensors, which yields a max-quasi-norm (max-qnorm) whose unit ball has small Rademacher complexity. We prove that solving a constrained least squares problem using either the convex atomic norm or the nonconvex max-qnorm achieves optimal sample complexity for low-rank tensor completion. Furthermore, we show that these bounds are nearly minimax rate-optimal. We also provide promising numerical results for max-qnorm constrained tensor completion, showing improved recovery compared to matricization and alternating least squares.
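
To make the estimator described above concrete, one natural formalization (a sketch under assumed notation: $\Omega$ is the set of observed indices, $Y$ the noisy observations, and $R$ a bound on the max-qnorm; the paper's exact formulation and constants may differ) is the constrained least squares problem

$$\widehat{T} \;=\; \mathop{\mathrm{arg\,min}}_{\|T\|_{\max} \le R} \; \sum_{(i_1,\dots,i_d)\in\Omega} \bigl(T_{i_1 \dots i_d} - Y_{i_1 \dots i_d}\bigr)^2,$$

where the tensor max-qnorm generalizes the matrix max-norm $\|M\|_{\max} = \min_{M = U V^\top} \|U\|_{2,\infty}\,\|V\|_{2,\infty}$ by minimizing over order-$d$ factorizations,

$$\|T\|_{\max} \;=\; \min\Bigl\{\, \prod_{j=1}^{d} \|U^{(j)}\|_{2,\infty} \;:\; T = \sum_{k} U^{(1)}_{:,k} \circ \cdots \circ U^{(d)}_{:,k} \Bigr\},$$

with $\circ$ the vector outer product and $\|\cdot\|_{2,\infty}$ the largest row $\ell_2$-norm. For $d > 2$ this quantity is a quasi-norm rather than a norm, hence the name max-qnorm; the convex variant keeps the same least squares objective but constrains an atomic norm whose atoms are rank-$1$ sign tensors $u^{(1)} \circ \cdots \circ u^{(d)}$ with $u^{(j)} \in \{\pm 1\}^N$.

As a small numerical illustration of the definition above (again a sketch under the assumed notation, not code from the paper), any explicit factorization of a tensor certifies an upper bound on its max-qnorm, namely the product of the factors' largest row norms:

```python
# Sketch under assumed notation (not code from the paper): a rank-r factorization
# of an order-3 tensor yields an upper bound on its max-qnorm, since the
# max-qnorm is the minimum of prod_j ||U^(j)||_{2,inf} over all factorizations.
import numpy as np

rng = np.random.default_rng(0)
N, r = 10, 2                                          # toy side length and rank
U = [rng.standard_normal((N, r)) for _ in range(3)]   # factor matrices U^(1..3)

# T = sum_k U1[:, k] o U2[:, k] o U3[:, k]  (outer products of factor columns)
T = np.einsum('ik,jk,lk->ijl', *U)

# Upper bound from this particular factorization: prod_j max_i ||U^(j)[i, :]||_2
bound = np.prod([np.linalg.norm(Uj, axis=1).max() for Uj in U])
print(f"tensor shape {T.shape}, max-qnorm upper bound: {bound:.2f}")
```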
