
Wednesday, March 21, 2012

Matrix ALPS (implementation)



Volkan just sent me the following:
Hi Igor, Our matrix ALPS algorithms are also online now. Please check out http://lions.epfl.ch/MALPS. best, Volkan
From the webpage:

We tackle the linear inverse problems revolving around low-rank matrices by preserving their non-convex structure. To this end, we present and analyze Matrix ALPS, a new set of low-rank recovery algorithms within the class of hard thresholding methods. We provide strategies on how to set up these algorithms via basic “ingredients” for different configurations to achieve complexity vs. accuracy tradeoffs. Moreover, we propose acceleration schemes by utilizing memory-based techniques and randomized, ε-approximate, low-rank projections to speed up the convergence as well as decrease the computational costs in the recovery process. For all these cases, we present theoretical analysis that guarantees convergence under mild problem conditions.
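To make the hard-thresholding template concrete, here is a minimal sketch of a projected gradient iteration over the set of rank-r matrices. This is not the authors' Matrix ALPS code (get that from the link above); the measurement operator is represented as a plain matrix acting on vec(X), and all names and the step-size choice are illustrative assumptions.

# Minimal sketch of a hard-thresholding iteration for low-rank recovery.
# NOT the authors' Matrix ALPS code; names and step-size rule are assumptions.
# Measurements follow y = A(X), with A stored as a dense matrix acting on vec(X).
import numpy as np

def rank_r_projection(X, r):
    """Project X onto the set of matrices of rank at most r via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def matrix_iht(A, y, shape, r, step, iters=200):
    """Projected gradient (hard thresholding) over the rank-r set."""
    m, n = shape
    X = np.zeros((m, n))
    for _ in range(iters):
        residual = y - A @ X.ravel()              # measurement residual
        grad = (A.T @ residual).reshape(m, n)     # negative gradient of 0.5*||y - A(X)||^2
        X = rank_r_projection(X + step * grad, r)
    return X

# Toy usage: recover a rank-2 matrix from random Gaussian measurements.
rng = np.random.default_rng(0)
m, n, r = 20, 20, 2
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((4 * r * (m + n), m * n)) / np.sqrt(m * n)
y = A @ X_true.ravel()
step = 1.0 / np.linalg.norm(A, 2) ** 2            # conservative step from the spectral norm
X_hat = matrix_iht(A, y, (m, n), r, step)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))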

The algorithm is presented in "Matrix Recipes for Hard Thresholding Methods", a technical report by Anastasios Kyrillidis and Volkan Cevher.
Moreover, we propose Matrix ALPS for recovering a sparse plus low-rank decomposition of a matrix given its corrupted and incomplete linear measurements. Our approach is a first-order projected gradient method over non-convex sets, and it exploits a well-known memory-based acceleration technique. We theoretically characterize the convergence properties of Matrix ALPS using the stable embedding properties of the linear measurement operator. We then numerically illustrate that our algorithm outperforms the existing convex as well as non-convex state-of-the-art algorithms in computational efficiency without sacrificing stability.

An implementation can be downloaded here and will be featured in the Matrix Factorization Jungle.

The first report is Matrix Recipes for Hard Thresholding Methods by Anastasios Kyrillidis and Volkan Cevher. Its abstract reads:
Given a set of possibly corrupted and incomplete linear measurements, we leverage low-dimensional models to best explain the data for provable solution quality in inversion. A non-exhaustive list of examples includes sparse vector and low-rank matrix approximation. Most of the well-known low-dimensional models are inherently non-convex. However, recent approaches prefer convex surrogates that “relax” the problem in order to establish solution uniqueness and stability. In this paper, we tackle the linear inverse problems revolving around low-rank matrices by preserving their non-convex structure. To this end, we present and analyze a new set of sparse and low-rank recovery algorithms within the class of hard thresholding methods. We provide strategies on how to set up these algorithms via basic “ingredients” for different configurations to achieve complexity vs. accuracy tradeoffs. Moreover, we propose acceleration schemes by utilizing memory-based techniques and randomized, ε-approximate, low-rank projections to speed up the convergence as well as decrease the computational costs in the recovery process. For all these cases, we present theoretical analysis that guarantees convergence under mild problem conditions. Simulation results demonstrate notable performance improvements compared to state-of-the-art algorithms both in terms of data reconstruction and computational complexity.
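The randomized, ε-approximate low-rank projections mentioned in the abstract can be realized with a standard randomized range-finder (sketch, orthonormalize, small SVD). The snippet below is only an illustration of that generic idea under assumed oversampling and power-iteration choices; it is not claimed to be the paper's exact scheme.

# Sketch of a randomized, approximate low-rank projection (range-finder style).
# Oversampling and power-iteration settings are illustrative assumptions.
import numpy as np

def approx_rank_r_projection(X, r, oversample=10, power_iters=1, rng=None):
    """Approximate the best rank-r approximation of X using a randomized sketch."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[1]
    Omega = rng.standard_normal((n, r + oversample))  # Gaussian test matrix
    Y = X @ Omega                                     # sketch of the column space
    for _ in range(power_iters):
        Y = X @ (X.T @ Y)                             # power iterations sharpen the subspace
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis for the sketch
    B = Q.T @ X                                       # small (r + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub[:, :r]                                 # lift back to the original space
    return (U * s[:r]) @ Vt[:r, :]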

The abstract of the second report reads:
We propose Matrix ALPS for recovering a sparse plus low-rank decomposition of a matrix given its corrupted and incomplete linear measurements. Our approach is a first-order projected gradient method over non-convex sets, and it exploits a well-known memory-based acceleration technique. We theoretically characterize the convergence properties of Matrix ALPS using the stable embedding properties of the linear measurement operator. We then numerically illustrate that our algorithm outperforms the existing convex as well as non-convex state-of-the-art algorithms in computational efficiency without sacrificing stability.
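As a rough illustration of the sparse-plus-low-rank setting, the following sketch takes a projected gradient step over the non-convex rank-r and k-sparse sets with a simple momentum ("memory") term. The function names, momentum weight, and update order are assumptions made for illustration, not the authors' implementation.

# Illustrative projected gradient with momentum for y = A(L + S),
# L at most rank r and S at most k-sparse. Not the authors' Matrix ALPS code.
import numpy as np

def hard_threshold_sparse(X, k):
    """Keep the k largest-magnitude entries of X, zeroing the rest."""
    out = np.zeros_like(X)
    idx = np.unravel_index(np.argsort(np.abs(X), axis=None)[-k:], X.shape)
    out[idx] = X[idx]
    return out

def rank_r_projection(X, r):
    """Truncated SVD projection onto the rank-r set."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def sparse_plus_lowrank_pg(A, y, shape, r, k, step, beta=0.5, iters=300):
    """Projected gradient over the (rank-r, k-sparse) sets with momentum weight beta."""
    m, n = shape
    L = np.zeros((m, n)); S = np.zeros((m, n))
    L_prev, S_prev = L.copy(), S.copy()
    for _ in range(iters):
        Lm = L + beta * (L - L_prev)                  # memory-based acceleration
        Sm = S + beta * (S - S_prev)
        residual = y - A @ (Lm + Sm).ravel()
        grad = (A.T @ residual).reshape(m, n)         # negative gradient shared by L and S
        L_prev, S_prev = L, S
        L = rank_r_projection(Lm + step * grad, r)
        S = hard_threshold_sparse(Sm + step * grad, k)
    return L, S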


Thanks, Volkan!

