Sunday, August 23, 2009

CS: Phase transitions for greedy sparse approximation algorithms, Exponential bounds implying construction of CS mat, Fast Bayesian Matching Pursuit


Jared Tanner just sent me this:
Two new compressed sensing manuscripts are available. Items 1 and 6 on my webpage publication list: http://www.maths.ed.ac.uk/~tanner/
Item 6 was submitted about a year ago, but I've only recently posted it online; it gives finite-dimensional bounds for Gaussian matrices and l1-regularization. Item 1 is a new paper whose results quantify what has actually been proven about CoSaMP, Subspace Pursuit and Iterated Hard Thresholding. Rather than saying something like "the algorithm works if the RIP constant with sparsity 4k is less than 1/10", we cast the results in terms of the phase transition framework using Gaussian matrices.

Here is item 1: Phase transitions for greedy sparse approximation algorithms by Jeffrey Blanchard, Coralia Cartis, Jared Tanner and Andrew Thompson. The abstract reads:

A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven, using the ubiquitous Restricted Isometry Property (RIP) [9], to have optimal-order uniform recovery guarantees. However, it is unclear when the RIP-based sufficient conditions on the algorithm are satisfied. We present a framework in which this task can be achieved, translating these conditions for Gaussian measurement matrices into requirements on the signal's sparsity level, size and number of measurements. We illustrate this approach on three of the state-of-the-art greedy algorithms: CoSaMP [27], Subspace Pursuit (SP) [11] and Iterated Hard Thresholding (IHT) [6]. Designed to allow a direct comparison of existing theory, our framework implies that IHT, the lowest of the three in computational cost, also requires fewer compressed sensing measurements than CoSaMP and SP.
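Since the paper compares these greedy algorithms head to head, here is a minimal sketch of Iterated Hard Thresholding in Python/NumPy to show what the iteration looks like (my own illustration, not the authors' code; the step size, iteration count and the normalized Gaussian example matrix are assumptions on my part):

import numpy as np

def iht(A, y, k, iters=200):
    # Iterated Hard Thresholding sketch: alternate a gradient step
    # on ||y - Ax||^2 with keeping only the k largest-magnitude entries.
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size (assumption)
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)     # gradient step
        small = np.argsort(np.abs(x))[:-k] # indices of all but the k largest
        x[small] = 0.0                     # hard threshold to sparsity k
    return x

# Example: recover a k-sparse vector from Gaussian measurements
rng = np.random.default_rng(0)
N, n, k = 400, 200, 10
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = iht(A, A @ x0, k)

What makes the paper's conclusion interesting is already visible in this sketch: each IHT iteration costs just one pair of matrix-vector multiplies plus a sort, yet the theory says it also needs the fewest measurements of the three.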

and item 6: Exponential bounds implying construction of compressed sensing matrices, error-correcting codes and neighborly polytopes by random sampling, by Jared Tanner and David L. Donoho. The abstract reads:

In [12] the authors proved an asymptotic sampling theorem for sparse signals, showing that n random measurements permit reconstruction of an N-vector having k nonzeros provided
n > 2 · k · log(N/n) (1 + o(1));
reconstruction uses ℓ1 minimization. They also proved an asymptotic rate theorem, showing existence of real error-correcting codes for messages of length N which can correct all possible k-element error patterns using just n generalized checksum bits, where
n > 2e · k · log(N/n) (1 + o(1));
decoding uses ℓ1 minimization. Both results require an asymptotic framework, with N growing large. For applications, on the other hand, we are concerned with specific triples (k, n, N). We exhibit triples (k, n, N) for which Compressed Sensing Matrices and Real Error-Correcting Codes surely exist and can be obtained with high probability by random sampling. These derive from exponential bounds on the probability of drawing 'bad' matrices. The bounds give conditions effective at finite N, and converging to the known sharp asymptotic conditions for large N. Compared to other finite-N bounds known to us, they are much stronger, and much more explicit.
Our bounds derive from asymptotics in [12] counting the expected number of k-dimensional faces of the randomly projected simplex T^(N−1) and cross-polytope C^N. We develop here finite-N bounds on the expected discrepancy between the number of k-faces of the projected polytope AQ and its generator Q, for Q = T^(N−1) and C^N.
Our bounds also imply existence of interesting geometric objects. Thus, we exhibit triples (k, n, N) for which polytopes with 2N vertices can be centrally k-neighborly.
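To get a feel for the first bound, here is a quick back-of-the-envelope check in Python (the triple (k, n, N) is mine, chosen for illustration, and the (1 + o(1)) factor is simply dropped, so this is the asymptotic formula rather than the paper's finite-N bounds):

import numpy as np

# Evaluate n > 2 k log(N/n) for an illustrative triple, ignoring
# the (1 + o(1)) correction that the paper makes precise at finite N.
N, n, k = 10000, 1000, 100
bound = 2 * k * np.log(N / n)   # about 460.5
print(n > bound)                # True: n = 1000 clears the asymptotic bound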

Thanks Jared for the heads-up!

Philip Schniter, Lee C. Potter, and Justin Ziniel have just made available a new version of the Fast Bayesian Matching Pursuit code. From the website:

An updated version 1.2 of the software package has been made available for those who would like to explore how the algorithm works in MATLAB. The updated version now includes support for complex-valued signals and noise.
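As a reminder of the greedy-pursuit family FBMP belongs to, here is a bare-bones orthogonal matching pursuit sketch in Python/NumPy, written with conjugate transposes so that it also works on complex-valued data, mirroring what version 1.2 adds (this is generic OMP for illustration, not the FBMP algorithm itself, which is Bayesian and also returns posterior quantities):

import numpy as np

def omp(A, y, k):
    # Generic orthogonal matching pursuit: greedily pick the column
    # most correlated with the residual, then re-fit on the support.
    n, N = A.shape
    r = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ r)))   # conj() handles complex A
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                  # orthogonalized residual
    x = np.zeros(N, dtype=np.result_type(A.dtype, y.dtype))
    x[support] = x_s
    return x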


PS: I am still away.
