
Monday, January 26, 2009

CS: BOMP/GOMP/stGOMP/ReGOMP, New version of Sparsify, New release of DCS, Compressive Coded Aperture, CMP

The previous entry elicited some interesting comments that read like a back-and-forth exchange in a review process. Thank you, Kevin, for responding!

In the meantime, Angshul Majumdar mentioned to me that within a few days the following algorithm implementations will be released through Matlab Central. While we are waiting for that submission process to go through, the algorithms are available on Nuit Blanche with Angshul's blessing: 

...Here are some greedy algorithms for Group Sparsity, following the works of Eldar et al.:

[1] Y. C. Eldar and M. Mishali, "Robust Recovery of Signals From a Union of Subspaces", arXiv.org 0807.4581, submitted to IEEE Trans. Inform. Theory, July 2008.
[2] Y. C. Eldar and H. Bolcskei, "Block Sparsity: Coherence and Efficient Recovery," to appear in ICASSP 2009.

I have implemented their algorithm - Block Orthogonal Matching Pursuit (BOMP), and three of my own, namely:
  • GOMP - Group Orthogonal Matching Pursuit (similar to BOMP, except that it selects a group by the average correlation of its atoms with the residual rather than by the highest correlation; a toy sketch of this selection rule appears just after this list)
  • StGOMP - Stagewise GOMP; combining ideas of BOMP with StOMP
  • ReGOMP - Regularized GOMP; combining ideas of ROMP and GOMP.
All these codes are accessible here. Thank you, Angshul!
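
While waiting for the MATLAB submissions to appear on Matlab Central, here is a small NumPy sketch of my own (not Angshul's code; the function name and signature are mine, and the actual BOMP of Eldar et al. may use a slightly different block score than the description above) showing the one loop that distinguishes BOMP from GOMP, namely whether a group is scored by its largest or by its average absolute correlation with the current residual:

```python
import numpy as np

def group_omp(A, y, groups, n_groups_to_pick, score="mean"):
    """Minimal group-greedy pursuit sketch (illustrative only).

    A                : (m, n) sensing matrix
    y                : (m,) measurement vector
    groups           : list of index arrays, one per group (a partition of range(n))
    n_groups_to_pick : number of groups to select
    score            : "max"  -> largest single |correlation| in the group (BOMP-style,
                                  per the description above)
                       "mean" -> average |correlation| over the group (GOMP-style)
    """
    m, n = A.shape
    x = np.zeros(n)
    residual = y.copy()
    selected = []                          # indices of chosen groups

    for _ in range(n_groups_to_pick):
        corr = np.abs(A.T @ residual)      # correlation of every atom with the residual
        if score == "max":
            group_scores = [corr[g].max() for g in groups]
        else:
            group_scores = [corr[g].mean() for g in groups]
        # pick the best group that has not been selected yet
        best = max((s, i) for i, s in enumerate(group_scores) if i not in selected)[1]
        selected.append(best)

        # least-squares refit over the union of all selected groups
        support = np.concatenate([groups[i] for i in selected])
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x

    return x, selected
```

The least-squares refit over all selected groups is the "orthogonal" part; StGOMP and ReGOMP presumably modify how groups are accepted at each stage, in the spirit of StOMP and ROMP.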

Thomas Blumensath has released a new version of Sparsify (version 0.4). From the Sparsify page (the page still says version 0.3, but scroll down and you'll see the following):
21 January 2009: New release of the toolbox; all that has changed is that I have now included an automatic step-size determination method in the Iterative Hard Thresholding algorithm (hard_lo_Mterm).
This modification was announced in December 2008 at the Sparsity workshop in Cambridge and will be discussed in two upcoming papers [11] and [12], which I will make available soon.

[11] T. Blumensath and M. Davies; "Iterative Hard Thresholding for Compressed Sensing" to appear in Applied and Computational Harmonic Analysis
[12] T. Blumensath and M. Davies; "A modified Iterative Hard Thresholding algorithm with guaranteed performance and stability" in preparation (title may change)

Latest version:

sparsify_0_4.zip (1.1 MB)
sparsity 0.4 (1.4 MB)

Both of these codes and their version numbers have been added or updated in the Reconstruction Section of the Big Picture.
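
For readers who have not played with hard_lo_Mterm, here is a bare-bones NumPy sketch of the plain IHT iteration. It is my own illustration with a fixed, conservative step size; it does not reproduce the automatic step-size rule of the 0.4 release, which is precisely the subject of [11] and [12]:

```python
import numpy as np

def iht(A, y, M, step=None, n_iters=200):
    """Basic iterative hard thresholding sketch: take a gradient step on
    0.5*||y - Ax||^2, then keep only the M largest-magnitude entries.

    step: gradient step size; if None, use 1 / ||A||_2^2 as a simple
          conservative default (the automatic rule of sparsify 0.4 is
          not reproduced here).
    """
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / (largest singular value)^2
    x = np.zeros(n)
    for _ in range(n_iters):
        g = A.T @ (y - A @ x)                    # gradient step direction
        x = x + step * g
        # hard threshold: zero out all but the M largest-magnitude entries
        keep = np.argpartition(np.abs(x), -M)[-M:]
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x
```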

The following important paper was first submitted in 2005, but a major new revision has just come out. It is entitled Distributed Compressive Sensing by Dror Baron, Marco Duarte, Michael Wakin, Shriram Sarvotham, and Richard Baraniuk. The abstract reads:
Compressive sensing is a signal acquisition framework based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable recovery. In this paper we introduce a new theory for distributed compressive sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. Our theoretical contribution is to characterize the fundamental performance limits of DCS recovery for jointly sparse signal ensembles in the noiseless measurement setting; our result connects single-signal, joint, and distributed (multi-encoder) compressive sensing. To demonstrate the efficacy of our framework and to show that additional challenges such as computational tractability can be addressed, we study in detail three example models for jointly sparse signals. For these models, we develop practical algorithms for joint recovery of multiple signals from incoherent projections. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. DCS is immediately applicable to a range of problems in sensor arrays and networks.
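To make the notion of joint sparsity a little more concrete, here is a toy NumPy construction, entirely mine and purely illustrative, of one flavor of jointly sparse ensemble considered in the paper: every sensor sees a shared sparse component plus its own sparse innovation, and takes its own random projections:

```python
import numpy as np

rng = np.random.default_rng(0)

n, J, m = 256, 4, 64          # signal length, number of sensors, measurements per sensor
k_common, k_innov = 8, 3      # sparsity of the shared part and of each innovation

# shared sparse component, seen by every sensor
z_common = np.zeros(n)
z_common[rng.choice(n, k_common, replace=False)] = rng.standard_normal(k_common)

# each sensor j observes x_j = z_common + z_j through its own random matrix Phi_j
signals, matrices, measurements = [], [], []
for j in range(J):
    z_innov = np.zeros(n)
    z_innov[rng.choice(n, k_innov, replace=False)] = rng.standard_normal(k_innov)
    x_j = z_common + z_innov
    Phi_j = rng.standard_normal((m, n)) / np.sqrt(m)
    signals.append(x_j)
    matrices.append(Phi_j)
    measurements.append(Phi_j @ x_j)

# A joint decoder sees all (Phi_j, y_j) pairs and can exploit the fact that the x_j
# share z_common, so it needs fewer measurements per sensor than separate recovery would.
```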
Here is just the abstract of a new paper on another very interesting subject: Compressive coded aperture imaging by Roummel Marcia, Zachary Harmany, and Rebecca Willett. The abstract reads:
Nonlinear image reconstruction based upon sparse representations of images has recently received widespread attention with the emerging framework of compressed sensing (CS). This theory indicates that, when feasible, judicious selection of the type of distortion induced by measurement systems may dramatically improve our ability to perform image reconstruction. However, applying compressed sensing theory to practical imaging systems poses a key challenge: physical constraints typically make it infeasible to actually measure many of the random projections described in the literature, and therefore, innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In video settings, the performance of an imaging system is characterized by both pixel resolution and field of view. In this work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines coded aperture imaging to enhance pixel resolution with superimposing subframes of a scene onto a single focal plane array to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by reconstructing with high resolution the constituent images of a video sequence.
I am sure I'll feature it more prominently when it comes out.
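In the meantime, here is my loose, toy reading of the measurement model described in the abstract (the operator below is my paraphrase, not the authors' design): each subframe is circularly convolved with its own pseudorandom binary code and the coded subframes are summed onto a single focal plane array, leaving a linear inverse problem for a wavelet-sparsity solver:

```python
import numpy as np

rng = np.random.default_rng(1)

K, N = 4, 64                      # number of subframes, subframe side length

def coded_superposition(subframes, codes):
    """Toy forward model: circularly convolve each subframe with its binary
    code (via 2-D FFTs) and sum the results onto one focal plane array."""
    fpa = np.zeros_like(subframes[0])
    for x, c in zip(subframes, codes):
        coded = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(c)))
        fpa += coded
    return fpa

subframes = [rng.random((N, N)) for _ in range(K)]                     # stand-in scene subframes
codes = [rng.integers(0, 2, (N, N)).astype(float) for _ in range(K)]   # pseudorandom binary masks

y = coded_superposition(subframes, codes)   # single FPA observation
# Recovery would pose y = sum_k C_k x_k as a linear inverse problem and solve it
# under a wavelet-sparsity penalty, which is where the CS machinery enters.
```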

Finally, there is another greedy algorithm described in Complementary Matching Pursuit Algorithms for Sparse Approximation by Gagan Rath and Christine Guillemot. The abstract reads:
Sparse coding in a redundant basis is of considerable interest in many areas of signal processing. The problem generally involves solving an under-determined system of equations under a sparsity constraint. Except for the exhaustive combinatorial approach, there is no known method to find the exact solution for general dictionaries. Among the various algorithms that find approximate solutions, pursuit algorithms are the most well-known. In this paper, we introduce the concept of a complementary matching pursuit (CMP). Unlike the classical matching pursuit (MP), which selects the atoms in the signal space, the CMP selects the atoms in the coefficient space. On a conceptual level, the MP searches for 'the solution vector among sparse vectors' whereas the CMP searches for 'the sparse vector among the solution vectors'. We assume that the observations can be expressed as pure linear sums of atoms without any additive noise. As a consequence of the complementary actions in the coefficient space, the CMP does not minimize the residual error at each iteration; however, it may converge faster, yielding sparser solution vectors than the MP. We show that when the dictionary is a tight frame, the CMP is equivalent to the MP. We also present the orthogonal extensions of the CMP and show that they perform the complementary actions to those of their classical matching pursuit counterparts.
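
The CMP itself is best read from the paper, but as a reference point for the contrast drawn in the abstract (selecting atoms in the signal space versus in the coefficient space), here is a minimal NumPy sketch of the classical MP loop that CMP modifies; CMP would perform the analogous selection on a coefficient-space formulation of the same problem:

```python
import numpy as np

def matching_pursuit(A, y, n_iters):
    """Classical MP: at each step, pick the single atom (column of A) most
    correlated with the current residual, i.e. selection in the signal space.
    Columns are normalized here, so the returned coefficients refer to the
    unit-norm atoms."""
    m, n = A.shape
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    x = np.zeros(n)
    residual = y.astype(float)
    for _ in range(n_iters):
        corr = A.T @ residual
        j = np.argmax(np.abs(corr))        # atom selection in the signal space
        x[j] += corr[j]
        residual = residual - corr[j] * A[:, j]
    return x, residual
```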


Image Credit: NASA/JPL/Space Science Institute. Photo of Saturn from the Cassini spacecraft, taken on the Friday when this entry was written.
