
Friday, February 08, 2013

Exact Sparse Recovery with L0 Projections

Ping Li just sent me the following:

Igor
You probably did not expect that we also write papers directly on compressed sensing, in addition to the prior work on data streams and hashing. This is the first paper on L0 projections: http://stat.cornell.edu/~li/Stable0CS/Stable0CS.pdf

Obviously there are plenty of possibilities of extending this work.

Best Regards,
Ping

Thanks Ping ! Ping also let me know that he will make Matlab and MEX files available later.



Many applications concern sparse signals, for example, detecting anomalies from the differences between consecutive images taken by surveillance cameras. This paper focuses on the problem of recovering a K-sparse signal x ∈ R^{1×N}, i.e., K ≪ N and ∑_{i=1}^{N} 1{x_i ≠ 0} = K. In the mainstream framework of compressed sensing (CS), the vector x is recovered from M non-adaptive linear measurements y = xS ∈ R^{1×M}, where S ∈ R^{N×M} is typically a Gaussian (or Gaussian-like) design matrix, through some optimization procedure such as linear programming (LP). In our proposed method, the design matrix S is generated from an α-stable distribution with α ≈ 0. Our decoding algorithm mainly requires one linear scan of the coordinates, followed by a few iterations on a small number of coordinates which are “undetermined” in the previous iteration. Our practical algorithm consists of two estimators. In the first iteration, the (absolute) minimum estimator is able to filter out a majority of the zero coordinates. The gap estimator, which is applied in each iteration, can accurately recover the magnitudes of the nonzero coordinates. Comparisons with two strong baselines, linear programming (LP) and orthogonal matching pursuit (OMP), demonstrate that our algorithm can be significantly faster in decoding speed and more accurate in recovery quality, for the task of exact sparse recovery. Our procedure is robust against measurement noise. Even when there are insufficient measurements, our algorithm can still reliably recover a significant portion of the nonzero coordinates. To provide the intuition for understanding our method, we also analyze the procedure by assuming an idealistic setting. Interestingly, when K = 2, the “idealized” algorithm achieves exact recovery with merely 3 measurements, regardless of N. For general K, the required sample size of the “idealized” algorithm is about 5K. The gap estimator is a practical surrogate for the “idealized” algorithm.
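To make the mechanics concrete, here is a small numerical sketch of how such a scheme could look. This is my own toy reading of the abstract, not Ping Li's actual procedure (his Matlab and MEX code is forthcoming): the Chambers-Mallows-Stuck sampler, the thresholds, and the exact form of the minimum and gap estimators below are all assumptions.

```python
import numpy as np

# Toy sketch of recovery from "L0 projections": an alpha-stable design with
# alpha ~ 0, a minimum-style filter, and a gap-style estimate. This is NOT
# the paper's code; thresholds and estimator details here are assumptions.

rng = np.random.default_rng(0)
N, K, M = 1000, 10, 100            # ambient dimension, sparsity, measurements
alpha = 0.05                        # "alpha ~ 0" stable design

# K-sparse signal x in R^{1xN}
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = 5.0 * rng.standard_normal(K)

# i.i.d. symmetric alpha-stable entries via the Chambers-Mallows-Stuck formula
U = rng.uniform(-np.pi / 2, np.pi / 2, size=(N, M))
W = rng.exponential(1.0, size=(N, M))
S = (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
     * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

y = x @ S                           # non-adaptive measurements y = xS

# With alpha ~ 0 a stable sum is dominated by its single largest term, so for
# roughly M/K of the measurements y_j ~ x_i * s_ij, and the ratios
# z_ij = y_j / s_ij cluster tightly at the true value x_i.
Z = y[np.newaxis, :] / S            # N x M table of ratios

# Minimum-style filter (one linear scan): for a zero coordinate some |s_ij| is
# huge, so the smallest |z_ij| is essentially zero; threshold those away.
z_min = np.min(np.abs(Z), axis=1)
candidates = np.flatnonzero(z_min > 1e-8)     # assumed threshold

# Gap-style estimate on the surviving coordinates: sort the ratios and take
# the midpoint of the two consecutive order statistics with the smallest gap,
# i.e. the location of the cluster.
x_hat = np.zeros(N)
for i in candidates:
    zs = np.sort(Z[i])
    j = np.argmin(np.diff(zs))
    x_hat[i] = 0.5 * (zs[j] + zs[j + 1])

print("exact support recovery:", set(np.flatnonzero(x_hat)) == set(support))
print("max coefficient error :", np.abs(x_hat - x).max())
```

Note how little work the decoder does: one pass over the N coordinates for the filter, then a sort over M ratios for each surviving coordinate, with no optimization problem solved anywhere.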
Let us note the ability to deal with the unknown vector in a streaming manner, which is not the case for most other approaches to compressive sensing.
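The streaming-friendliness comes simply from the linearity of y = xS: a turnstile update (i, Δ) to the unknown vector only touches one row of S, so the sketch can be maintained without ever storing x. A minimal illustration, with a hypothetical helper name:

```python
import numpy as np

def update_sketch(y, S, i, delta):
    """Fold the turnstile stream update x[i] += delta into the sketch y = x @ S."""
    y += delta * S[i, :]
    return y

# usage, with S and M as in the sketch above:
# y = np.zeros(M)
# y = update_sketch(y, S, i=7, delta=+3.2)   # x itself is never materialized
```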

Also, while the algorithm is compared to OMP and LP, the authors also mentioned:
In this paper, our experimental study will focus on the comparisons with OMP and LP, as these two methods are the most basic and still strong baselines. Of course, we recognize that compressed sensing is a rapidly developing area of research and we are aware that there are other promising sparse recovery methods such as the “message-passing” algorithm [9] and the “sparse matrix” algorithm [13]. We plan to compare our algorithm with those methods in separate future papers.
Outstanding. I also like the site mentioned in section 1.1 of the paper !


