Tuesday, February 11, 2014

Model-based Sketching and Recovery with Expanders / For-all Sparse Recovery in Near-Optimal Time

Expanders have been the basis for sparse measurement matrices ever since these experiments and the attendant papers. It is a pretty fascinating subject, as people keep pushing the limits to see whether the number of measurements can be reduced further. Today we have two papers on the subject. From the first paper:

The recent work of [28] is the first to address model-based CS with sparse measurement matrices and in the for-all case. They show that model-expanders exist when the structures (models) considered are binary trees and non-overlapping block structures, and they provide bounds for the sketch sizes. They also propose a recovery algorithm that has exponential time complexity with respect to the dimension of the signal.
So the following paper seems to have found a better and simpler algorithm that does this faster (with a surprising result, see the figure).




Model-based Sketching and Recovery with Expanders by Bubacarr Bah, Luca Baldassarre, Volkan Cevher

Linear sketching and recovery of sparse vectors with randomly constructed sparse matrices has numerous applications in several areas, including compressive sensing, data stream computing, graph sketching, and combinatorial group testing. This paper considers the same problem with the added twist that the sparse coefficients of the unknown vector exhibit further correlations as determined by a known sparsity model. We prove that exploiting model-based sparsity in recovery provably reduces the sketch size without sacrificing recovery quality. In this context, we present the model-expander iterative hard thresholding algorithm for recovering model sparse signals from linear sketches obtained via sparse adjacency matrices of expander graphs with rigorous performance guarantees. The main computational cost of our algorithm depends on the difficulty of projecting onto the model-sparse set. For the tree and group-based sparsity models we describe in this paper, such projections can be obtained in linear time. Finally, we provide numerical experiments to illustrate the theoretical results in action.
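To make the recovery template concrete, here is a minimal Python sketch in the spirit of the paper's approach, under stated assumptions: the sketching matrix is the adjacency matrix of a random d-left-regular bipartite graph (a good expander with high probability for suitable parameters), the IHT update takes a per-coordinate median of the residual over each coordinate's neighbors, and, for simplicity, the model projection is plain k-sparse hard thresholding rather than the paper's linear-time tree/group projections. This is an illustration only, not the authors' Model-Expander IHT with its guarantees.

```python
import numpy as np

def expander_matrix(m, N, d, rng):
    """Adjacency matrix of a random d-left-regular bipartite graph: each of
    the N signal coordinates connects to d of the m measurements. Such sparse
    binary matrices are good expanders with high probability."""
    A = np.zeros((m, N))
    for j in range(N):
        A[rng.choice(m, size=d, replace=False), j] = 1.0
    return A

def model_project(x, k):
    """Stand-in model projection: keep the k largest-magnitude entries.
    The paper instead projects onto tree- or group-sparsity models, which
    it shows can also be done in linear time."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    z[keep] = x[keep]
    return z

def expander_iht(y, A, k, iters=50):
    """Toy expander-based IHT: the gradient surrogate for each coordinate is
    the median of the residual over that coordinate's neighbors (the rows
    where its column of A is nonzero), followed by the model projection."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = y - A @ x
        med = np.array([np.median(r[A[:, j] > 0]) for j in range(A.shape[1])])
        x = model_project(x + med, k)
    return x

# Toy run: recover a k-sparse vector from an m-entry sketch.
rng = np.random.default_rng(0)
N, m, k, d = 400, 120, 10, 8
A = expander_matrix(m, N, d, rng)
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = expander_iht(A @ x_true, A, k)
print("relative l1 error:", np.linalg.norm(x_hat - x_true, 1) / np.linalg.norm(x_true, 1))
```

The median step is the usual trick that makes sparse binary matrices workable: with only d ones per column, a least-squares gradient step is poorly scaled, while the median over a coordinate's d measurements is robust to collisions with other coordinates.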

And the second paper: For-all Sparse Recovery in Near-Optimal Time by Anna C. Gilbert, Yi Li, Ely Porat, Martin J. Strauss

An approximate sparse recovery system in the ℓ1 norm consists of parameters k, ε, N, an m-by-N measurement matrix Φ, and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ‖x̂ − x‖_1 ≤ (1 + ε)‖x − x_k‖_1, where x_k is the best k-term approximation of x. We consider the 'for all' model, in which a single matrix Φ, possibly 'constructed' non-explicitly using the probabilistic method, is used for all signals x. The best existing sublinear algorithm by Porat and Strauss (SODA'12) uses O(ε^{-3} k log(N/k)) measurements and runs in time O(k^{1-α} N^α) for any constant α > 0.
In this paper, we improve the number of measurements to O(ε^{-2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ε)), with a modest restriction that ε < (log k / log N)^γ, for any constants β, γ > 0. When k < log^c N for some c > 0, the runtime is reduced to O(k poly(N, 1/ε)). With no restrictions on ε, we have an approximation recovery system with m = O((k/ε) log(N/k) ((log N / log k)^γ + 1/ε)) measurements.
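As a side note, the ℓ1/ℓ1 'for all' guarantee above is straightforward to check numerically for any candidate recovery. A small self-contained sketch (the helper names are mine, not from the paper):

```python
import numpy as np

def best_k_term_error_l1(x, k):
    """l1 mass of x outside its k largest-magnitude entries,
    i.e. ||x - x_k||_1 for the best k-term approximation x_k."""
    mags = np.sort(np.abs(x))  # ascending
    return mags[:-k].sum() if k > 0 else mags.sum()

def meets_l1l1_guarantee(x, x_hat, k, eps):
    """Check the guarantee ||x_hat - x||_1 <= (1 + eps) * ||x - x_k||_1."""
    return np.linalg.norm(x_hat - x, 1) <= (1 + eps) * best_k_term_error_l1(x, k)

# A nearly 5-sparse vector: a recovery that keeps the dominant entries
# exactly and zeroes the small tail has error equal to the best 5-term
# error, so the check passes for any eps >= 0.
rng = np.random.default_rng(1)
x = np.zeros(100)
x[:5] = np.array([5.0, -4.0, 3.0, -2.5, 2.0])  # dominant coefficients
x[5:] = 0.01 * rng.standard_normal(95)         # small noise tail
x_hat = np.where(np.abs(x) > 1.0, x, 0.0)
print(meets_l1l1_guarantee(x, x_hat, k=5, eps=0.1))  # True
```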




Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
