We propose a novel framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Given a set of training points X ⊂ ℝ^N, we consider the secant set S(X) that consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. We formulate an affine rank minimization problem to construct a matrix that preserves the norms of all the vectors in S(X) up to a distortion parameter δ. While affine rank minimization is NP-hard, we show that this problem can be relaxed to a convex formulation that can be solved using a tractable semidefinite program (SDP). In order to enable scalability of our proposed SDP to very large-scale problems, we adopt a two-stage approach. First, in order to reduce compute time, we develop a novel algorithm based on the Alternating Direction Method of Multipliers (ADMM) that we call Nuclear norm minimization with Max-norm constraints (NuMax) to solve the SDP. Second, we develop a greedy, approximate version of NuMax based on the column generation method commonly used to solve large-scale linear programs. We demonstrate that our framework is useful for a number of applications in machine learning and signal processing via a range of experiments on large-scale synthetic and real datasets.
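To make the setup concrete, here is a minimal numpy sketch of the two objects the abstract defines: the normalized secant set S(X) and the distortion δ that a given linear embedding Ψ achieves on it. The function names `secant_set` and `max_distortion` are ours for illustration, not from the paper, and the sketch assumes data points stored as columns of X:

```python
import numpy as np

def secant_set(X):
    """Normalized secant set S(X): all pairwise difference vectors
    of the columns of X, scaled to lie on the unit sphere.
    Returns an N x K array whose columns are the K unit secants."""
    N, Q = X.shape
    secants = []
    for i in range(Q):
        for j in range(i + 1, Q):
            d = X[:, i] - X[:, j]
            norm = np.linalg.norm(d)
            if norm > 0:  # skip duplicate points
                secants.append(d / norm)
    return np.stack(secants, axis=1)

def max_distortion(Psi, S):
    """Worst-case deviation of ||Psi s||^2 from 1 over all unit
    secants s (the columns of S), i.e. the isometry constant delta
    achieved by the embedding Psi on this secant set."""
    norms = np.linalg.norm(Psi @ S, axis=0)
    return float(np.max(np.abs(norms ** 2 - 1.0)))
```

The NuMax SDP then searches for a low-rank Ψ (via its Gram matrix) whose `max_distortion` stays below the prescribed δ; a trivial sanity check is that the identity embedding gives δ = 0.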