Jared then continues his email with:
"...Associated, Jeff Blanchard, Ke Wei, and I have released a couple of papers with new algorithms for compressed sensing and matrix completion. The first paper is a nonlinear restarted conjugate gradient approach with hard thresholding, named CGIHT:
http://people.maths.ox.ac.uk/tanner/papers/BlTaWe_cgiht.pdf
The second paper presents an algorithm for matrix completion:
http://people.maths.ox.ac.uk/tanner/papers/TaWei_ASD.pdf
The common thread linking these two papers is keeping the per-iteration computational complexity very low so as to allow a more efficient search of subspaces, while also implicitly reverting to a higher-order method once the correct subspace is identified. We find this approach highly favourable in computational experiments.
It would be greatly appreciated if you could mention these on Nuit Blanche.
All the best,
Jared
-------
Jared Tanner
Professor of the Mathematics of Information
Mathematics Institute
University of Oxford
http://people.maths.ox.ac.uk/tanner
Sure Jared!
Here are the two papers; let us note the row sparse matrix case is also called the MMV problem:
CGIHT: Conjugate Gradient Iterative Hard Thresholding for Compressed Sensing and Matrix Completion by Jeff Blanchard, Jared Tanner, Ke Wei
We introduce the Conjugate Gradient Iterative Hard Thresholding (CGIHT) family of algorithms for the efficient solution of constrained underdetermined linear systems of equations arising in compressed sensing, row sparse approximation, and matrix completion. CGIHT is designed to balance the low per-iteration complexity of simple hard thresholding algorithms with the fast asymptotic convergence rate of employing the conjugate gradient method. We establish provable recovery guarantees and stability to noise for variants of CGIHT with sufficient conditions in terms of the restricted isometry constants of the sensing operators. Extensive empirical performance comparisons establish significant computational advantages for CGIHT both in terms of the size of problems which can be accurately approximated and in terms of overall computation time.
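To give a concrete feel for the algorithm, here is a minimal Python/NumPy sketch of a restarted CGIHT-style iteration for recovering a k-sparse vector: it takes exact line-search steps along a conjugate-gradient-style direction restricted to the current support estimate, hard-thresholds to the k largest entries, and restarts the search direction whenever the support changes. This is an illustrative simplification written for this post, with my own function name and parameter defaults, not the authors' reference implementation; consult the paper for the precise variants and their recovery guarantees.

import numpy as np

def cgiht(A, y, k, iters=200, tol=1e-10):
    # Sketch of a restarted CGIHT-style iteration (illustrative, not the
    # authors' code): hard thresholding to k entries plus a CG direction
    # that is reset whenever the detected support changes.
    m, n = A.shape
    x = np.zeros(n)
    g = A.T @ (y - A @ x)                   # gradient of 0.5*||y - A x||^2
    T = np.argsort(np.abs(g))[-k:]          # initial support estimate
    p = g.copy()                            # search direction
    gT_norm2 = g[T] @ g[T]
    for _ in range(iters):
        pT = np.zeros(n)
        pT[T] = p[T]                        # direction restricted to support
        ApT = A @ pT
        denom = ApT @ ApT
        if denom <= tol:
            break
        alpha = (g[T] @ p[T]) / denom       # exact line search on the support
        x = x + alpha * pT
        T_new = np.argsort(np.abs(x))[-k:]  # hard threshold: keep k largest
        xk = np.zeros(n)
        xk[T_new] = x[T_new]
        x = xk
        g = A.T @ (y - A @ x)
        if set(T_new) != set(T):
            p = g.copy()                    # support changed: restart CG
        else:
            beta = (g[T_new] @ g[T_new]) / max(gT_norm2, 1e-30)
            p = g + beta * p                # Fletcher-Reeves-style update
        T = T_new
        gT_norm2 = g[T] @ g[T]
        if np.linalg.norm(y - A @ x) <= tol:
            break
    return x

On a benign random instance, say a 100 by 400 Gaussian A with a 10-sparse signal, a call such as cgiht(A, y, 10) would be expected to identify the support quickly and then converge at the conjugate gradient rate, which is exactly the low per-iteration cost versus fast asymptotic convergence trade-off the abstract describes.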
The phase transition shown above is similar to the one found by Jeff earlier (Sharp Phase Transition for Joint Sparse Recovery (MMV)).
Low Rank Matrix Completion by Alternating Steepest Descent Methods by Jared Tanner, Ke Wei
Matrix completion involves recovering a matrix from a subset of its entries by utilizing interdependency between the entries, typically through low rank structure. Despite the combinatorial nature of matrix completion, there are many computationally efficient algorithms which are effective for a broad class of matrices. In this paper, we introduce an alternating steepest descent algorithm (ASD) and a scaled variant, ScaledASD, for the fixed-rank matrix completion problem. Empirical evaluation of ASD and ScaledASD on both image inpainting and random problems shows they are competitive with other state-of-the-art matrix completion algorithms in terms of recoverable rank and overall computational time. In particular, their low per-iteration computational complexity makes ASD and ScaledASD efficient for large size problems, especially when computing the solutions to moderate accuracy. A preliminary convergence analysis is also presented.
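To make the alternating steepest descent idea concrete, here is a minimal Python/NumPy sketch of plain ASD (ScaledASD is omitted): with the unknown matrix factored as X = L @ R, it alternates steepest descent steps on L and on R, each with an exact line search against the residual on the observed entries. This is a simplified transcription written for this post, with my own naming, initialization, and stopping rule, not the Matlab code released by the authors.

import numpy as np

def asd(M_obs, mask, r, iters=500, tol=1e-8):
    # Sketch of alternating steepest descent for rank-r matrix completion
    # (illustrative, not the authors' code). M_obs holds observed entries
    # (zeros elsewhere); mask is a boolean array marking observed entries.
    m, n = M_obs.shape
    rng = np.random.default_rng(0)
    L = rng.standard_normal((m, r)) / np.sqrt(r)
    R = rng.standard_normal((r, n)) / np.sqrt(r)
    E = mask * (M_obs - L @ R)              # residual on observed entries
    for _ in range(iters):
        D = E @ R.T                         # negative gradient w.r.t. L
        S = mask * (D @ R)                  # effect of the step on the residual
        t = np.sum(D * D) / max(np.sum(S * S), 1e-30)   # exact line search
        L = L + t * D
        E = mask * (M_obs - L @ R)
        D = L.T @ E                         # negative gradient w.r.t. R
        S = mask * (L @ D)
        t = np.sum(D * D) / max(np.sum(S * S), 1e-30)   # exact line search
        R = R + t * D
        E = mask * (M_obs - L @ R)
        if np.linalg.norm(E) <= tol * max(np.linalg.norm(M_obs), 1.0):
            break
    return L, R

Each iteration costs only a residual update and a few thin-matrix products, which is the low per-iteration complexity the abstract highlights; the ScaledASD variant described in the paper additionally scales these descent directions to speed up convergence.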
Ke Wei then let me know that
Dear Igor,

Following Jared's email, the Matlab codes for the matrix completion algorithms in the two papers can be downloaded at

Many thanks,
Ke
Thanks Ke and Jared
Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.