Practical Algorithms for Learning Near-Isometric Linear Embeddings

Jerry Luo
Kayla Shapiro
Hao-Jun Michael Shi
Qi Yang
Abstract

We propose two practical non-convex approaches for learning near-isometric, linear embeddings of finite sets of data points. Following Hegde et al. [6], given a set of training points $\mathcal{X}$, we consider the secant set $S(\mathcal{X})$ consisting of all pairwise difference vectors of $\mathcal{X}$, normalized to lie on the unit sphere. The problem can be formulated as finding a symmetric, positive semi-definite matrix $\psi$ that preserves the norms of all the vectors in $S(\mathcal{X})$ up to a distortion parameter $\delta$. Motivated by non-negative matrix factorization, we reformulate the problem as a Frobenius norm minimization, which we solve with the Alternating Direction Method of Multipliers (ADMM); we call the resulting algorithm FroMax. Our second method solves for a projection matrix $\psi$ by minimizing the restricted isometry property (RIP) directly over the set of symmetric, positive semi-definite matrices. Applying ADMM and a Moreau decomposition on a proximal mapping, we develop a second algorithm, NILE-Pro, for dimensionality reduction. Both non-convex approaches are empirically demonstrated to be more computationally efficient than prior convex approaches for a number of applications in machine learning and signal processing.
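As a concrete illustration, here is a minimal numerical sketch (not the authors' code) of the two objects the abstract defines: the normalized secant set $S(\mathcal{X})$ and the smallest distortion $\delta$ that a candidate symmetric, positive semi-definite matrix achieves on it. The function names, the random test data, and the rank-$r$ candidate matrix are illustrative assumptions, not part of the paper.

```python
import numpy as np

def build_secant_set(X):
    """All pairwise differences of the rows of X, normalized to the unit sphere."""
    n = X.shape[0]
    diffs = np.array([X[i] - X[j] for i in range(n) for j in range(i + 1, n)])
    return diffs / np.linalg.norm(diffs, axis=1, keepdims=True)

def max_distortion(P, S):
    """Largest |v^T P v - 1| over the unit secants v: the smallest delta for
    which P satisfies the near-isometry condition on S."""
    quad = np.sum((S @ P) * S, axis=1)  # v^T P v for each secant v
    return np.max(np.abs(quad - 1.0))

# Illustrative usage: a rank-r PSD candidate P = Phi^T Phi built from a random
# r x d map Phi (an arbitrary candidate, not a learned solution).
rng = np.random.default_rng(0)
d, r = 20, 5
X = rng.standard_normal((40, d))
S = build_secant_set(X)
Phi = rng.standard_normal((r, d)) / np.sqrt(r)
P = Phi.T @ Phi
print("delta =", max_distortion(P, S))
```

Both FroMax and NILE-Pro can be viewed as searching for a low-rank $P$ that drives this measured distortion below the target $\delta$.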
