Just in time, as sparse measurement matrices are of potential use in genomics, Jared Tanner just sent me the following:
Dear Igor,

Thought it worth mentioning a new paper by Bubacarr Bah and me. You may recall the manuscript "Combining geometry and combinatorics: a unified approach to sparse signal recovery" by Berinde, Gilbert, Indyk, Karloff, and Strauss [IC: presentation here]. In their manuscript, they considered an l1-norm variant of the RIP, showed that it can be used to give recovery guarantees for l1-minimization, and showed that the l1-norm RIP is quite different in character from the traditional l2-norm RIP. In particular, sparse and non-mean-zero matrices can be used. This has been further extended by a number of authors, introducing algorithms specifically for such sparse matrices, often analyzed using the l1-norm RIP.

Bubacarr Bah and I have worked out some large deviation bounds on l1-RIP constants for matrices with a fixed number of equal-magnitude nonzeros per column. The bounds are sufficiently accurate that we can accurately characterize the tail behavior, but of course there is likely quite a bit of slack from the union bound. Even so, the resulting phase transition that Berinde et al. above proved for l1-minimization using the l1-RIP ends up being higher than the best known l2-RIP phase transition for dense Gaussian matrices; and this l1-RIP proof is valid for sensing matrices with a fixed number of ones per column and all other entries set to zero. In fairness, the results are far below what the polytope approach gives, but we believe these are the most accurate estimates known for non-mean-zero matrices.

The manuscript is available at:

All the best,
Jared
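For readers unfamiliar with the l1-norm RIP: in the Berinde et al. setting, the adjacency matrix A of a left-d-regular expander (d ones per column) satisfies d(1-ε)‖x‖₁ ≤ ‖Ax‖₁ ≤ d‖x‖₁ for all k-sparse x, and ε plays the role of the RIC1. A minimal sketch (the function names and sizes are mine, purely for illustration, not from the paper) that builds such a matrix and empirically probes the lower ratio over random sparse vectors:

```python
import numpy as np

def sparse_column_matrix(n, N, d, rng):
    """n x N matrix with exactly d ones per column, at distinct
    row indices drawn uniformly at random; all other entries zero."""
    A = np.zeros((n, N))
    for j in range(N):
        A[rng.choice(n, size=d, replace=False), j] = 1.0
    return A

def min_rip1_ratio(A, k, trials, rng):
    """Empirical minimum of ||Ax||_1 / (d ||x||_1) over random k-sparse x.
    By the triangle inequality this ratio is at most 1; values near 1
    indicate a small l1-RIP constant for this sparsity level."""
    n, N = A.shape
    d = int(A[:, 0].sum())
    worst = 1.0
    for _ in range(trials):
        x = np.zeros(N)
        supp = rng.choice(N, size=k, replace=False)
        x[supp] = rng.standard_normal(k)
        worst = min(worst, np.abs(A @ x).sum() / (d * np.abs(x).sum()))
    return worst

rng = np.random.default_rng(0)
A = sparse_column_matrix(n=64, N=256, d=8, rng=rng)
print(min_rip1_ratio(A, k=4, trials=200, rng=rng))
```

This is only a Monte Carlo probe, of course; the point of the paper is to bound such quantities analytically with large deviation arguments rather than by sampling.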
Thanks Jared, here is the paper: Vanishingly Sparse Matrices and Expander Graphs, with application to compressed sensing by Bubacarr Bah and Jared Tanner. The abstract reads:
We consider a probabilistic construction of sparse random matrices where each column has a fixed number of nonzeros whose row indices are drawn uniformly at random. These matrices have a one-to-one correspondence with the adjacency matrices of fixed left degree expander graphs. We present formulae for the expected cardinality of the set of neighbors for these graphs, and present tail bounds on the probability that this cardinality will be less than the expected value. Deducible from these bounds are similar bounds for the expansion of the graph which is of interest in many applications. These bounds are derived through a more detailed analysis of collisions in unions of sets using a dyadic splitting technique. The exponential tail bounds yield the best known bounds for the ℓ1 norm of large rectangular matrices when restricted to act on vectors with few nonzeros; this quantity is referred to as the ℓ1 norm restricted isometry constants (RIC1) of the matrix. These bounds allow for quantitative theorems on existence of expander graphs and hence the sparse random matrices we consider and also quantitative compressed sensing sampling theorems when using sparse non-mean-zero measurement matrices.
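Read as a bipartite graph, each column is a left vertex joined to the d rows holding its nonzeros, and the abstract's central quantity is the cardinality of the neighbor set N(S) of a set S of left vertices, which is at most d|S| and falls below it whenever columns collide in a row. A small sketch of this correspondence (my own illustrative code and sizes, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, d = 128, 512, 6  # illustrative sizes, not taken from the paper

# each column: d ones at distinct, uniformly random row indices
A = np.zeros((n, N))
for j in range(N):
    A[rng.choice(n, size=d, replace=False), j] = 1.0

def neighbor_count(A, S):
    """|N(S)|: number of rows touched by at least one column in S."""
    return int(np.count_nonzero(A[:, S].any(axis=1)))

S = rng.choice(N, size=10, replace=False)
print(neighbor_count(A, S), "out of at most", d * len(S))
```

A neighbor count close to d|S| means few collisions, i.e. good expansion; the paper's tail bounds control the probability that |N(S)| falls well below its expected value.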