Practical Algorithms for Learning Near-Isometric Linear Embeddings

Jerry Luo
Kayla Shapiro
Hao-Jun Michael Shi
Qi Yang
Abstract

We propose two practical non-convex approaches for learning near-isometric, linear embeddings of finite sets of data points. Given a set of training points $\mathcal{X}$, we consider the secant set $S(\mathcal{X})$ consisting of all pairwise difference vectors of $\mathcal{X}$, normalized to lie on the unit sphere. The problem can be formulated as finding a symmetric and positive semi-definite matrix $\boldsymbol{\Psi}$ that preserves the norms of all the vectors in $S(\mathcal{X})$ up to a distortion parameter $\delta$. Motivated by non-negative matrix factorization, we reformulate our problem as a Frobenius norm minimization problem, solve it with the Alternating Direction Method of Multipliers (ADMM), and develop an algorithm, FroMax. A second method solves for a projection matrix $\boldsymbol{\Psi}$ by minimizing the restricted isometry property (RIP) directly over the set of symmetric, positive semi-definite matrices. Applying ADMM and a Moreau decomposition on a proximal mapping, we develop another algorithm, NILE-Pro, for dimensionality reduction. FroMax converges faster for smaller $\delta$, while NILE-Pro converges faster for larger $\delta$. We then demonstrate empirically that both non-convex approaches are more computationally efficient than prior convex approaches for a number of applications in machine learning and signal processing.
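To make the problem setup concrete, the following is a minimal NumPy sketch (not the paper's algorithm) of the two objects the abstract defines: the normalized secant set $S(\mathcal{X})$ and the $\delta$-near-isometry check on a candidate matrix $\boldsymbol{\Psi}$. The function names and the row-wise data layout are illustrative assumptions, not from the paper.

```python
import numpy as np

def secant_set(X):
    """Normalized pairwise difference vectors of the rows of X.

    X: (n, d) array of n training points in d dimensions.
    Returns an (n*(n-1)/2, d) array of unit-norm secants.
    """
    n = X.shape[0]
    secants = []
    for i in range(n):
        for j in range(i + 1, n):
            d = X[i] - X[j]
            secants.append(d / np.linalg.norm(d))
    return np.array(secants)

def is_near_isometry(Psi, S, delta):
    """Check the distortion condition | ||Psi v||^2 - 1 | <= delta
    for every unit-norm secant v (rows of S)."""
    norms_sq = np.sum((S @ Psi.T) ** 2, axis=1)
    return bool(np.all(np.abs(norms_sq - 1.0) <= delta))
```

As a sanity check, the identity matrix preserves all norms exactly, so it satisfies the condition for any $\delta \geq 0$; the interesting case, which FroMax and NILE-Pro address, is finding a low-rank $\boldsymbol{\Psi}$ that still satisfies it.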
